BEFORE MRNA AND WUHAN, DARPA FUNDED THE BIRTH OF GOOGLE, FACEBOOK AND THE INTERNET ITSELF

For years, the Pentagon tried to convince the public that it was working on your dream secretary. Can you believe that?
Funny how much those plans looked just like today’s Google and Facebook. And it’s not just the looks: it’s also the money, the timeline and the personal connections.
Funnier still how the funding scheme was often similar to the one used for Wuhan, with proxy organizations serving as middlemen.

WIRED 05.20.2003

A Spy Machine of DARPA’s Dreams

IT’S A MEMORY aid! A robotic assistant! An epidemic detector! An all-seeing, ultra-intrusive spying program!

The Pentagon is about to embark on a stunningly ambitious research project designed to gather every conceivable bit of information about a person’s life, index all the information and make it searchable.

What national security experts and civil libertarians want to know is, why would the Defense Department want to do such a thing?

The embryonic LifeLog program would dump everything an individual does into a giant database: every e-mail sent or received, every picture taken, every Web page surfed, every phone call made, every TV show watched, every magazine read.

All of this — and more — would combine with information gleaned from a variety of sources: a GPS transmitter to keep tabs on where that person went, audio-visual sensors to capture what he or she sees or says, and biomedical monitors to keep track of the individual’s health.

This gigantic amalgamation of personal information could then be used to “trace the ‘threads’ of an individual’s life,” to see exactly how a relationship or events developed, according to a briefing from the Defense Advanced Research Projects Agency, LifeLog’s sponsor.

Someone with access to the database could “retrieve a specific thread of past transactions, or recall an experience from a few seconds ago or from many years earlier … by using a search-engine interface.”

On the surface, the project seems like the latest in a long line of DARPA’s “blue sky” research efforts, most of which never make it out of the lab. But DARPA is currently asking businesses and universities for research proposals to begin moving LifeLog forward. And some people, such as Steven Aftergood, a defense analyst with the Federation of American Scientists, are worried.

With its controversial Total Information Awareness database project, DARPA already is planning to track all of an individual’s “transactional data” — like what we buy and who gets our e-mail.

While the parameters of the project have not yet been determined, Aftergood said he believes LifeLog could go far beyond TIA’s scope, adding physical information (like how we feel) and media data (like what we read) to this transactional data.

“LifeLog has the potential to become something like ‘TIA cubed,’” he said.

In the private sector, a number of LifeLog-like efforts already are underway to digitally archive one’s life — to create a “surrogate memory,” as minicomputer pioneer Gordon Bell calls it.

Bell, now with Microsoft, scans all his letters and memos, records his conversations, saves all the Web pages he’s visited and e-mails he’s received and puts them into an electronic storehouse dubbed MyLifeBits.

DARPA’s LifeLog would take this concept several steps further by tracking where people go and what they see.

That makes the project similar to the work of University of Toronto professor Steve Mann. Since his teen years in the 1970s, Mann, a self-styled “cyborg,” has worn a camera and an array of sensors to record his existence. He claims he’s convinced 20 to 30 of his current and former students to do the same. It’s all part of an experiment into “existential technology” and “the metaphysics of free will.”

DARPA isn’t quite so philosophical about LifeLog. But the agency does see some potential battlefield uses for the program.

“The technology could allow the military to develop computerized assistants for war fighters and commanders that can be more effective because they can easily access the user’s past experiences,” DARPA spokeswoman Jan Walker speculated in an e-mail.

It also could allow the military to develop more efficient computerized training systems, she said: Computers could remember how each student learns and interacts with the training system, then tailor the lessons accordingly.

John Pike, director of defense think tank GlobalSecurity.org, said he finds the explanations “hard to believe.”

“It looks like an outgrowth of Total Information Awareness and other DARPA homeland security surveillance programs,” he added in an e-mail.

Sure, LifeLog could be used to train robotic assistants. But it also could become a way to profile suspected terrorists, said Cory Doctorow, with the Electronic Frontier Foundation. In other words, Osama bin Laden’s agent takes a walk around the block at 10 each morning, buys a bagel and a newspaper at the corner store and then calls his mother. You do the same things — so maybe you’re an al Qaeda member, too!

“The more that an individual’s characteristic behavior patterns — ‘routines, relationships and habits’ — can be represented in digital form, the easier it would become to distinguish among different individuals, or to monitor one,” Aftergood, the Federation of American Scientists analyst, wrote in an e-mail.

In its LifeLog report, DARPA makes some nods to privacy protection, like when it suggests that “properly anonymized access to LifeLog data might support medical research and the early detection of an emerging epidemic.”

But before these grand plans get underway, LifeLog will start small. Right now, DARPA is asking industry and academics to submit proposals for 18-month research efforts, with a possible 24-month extension. (DARPA is not sure yet how much money it will sink into the program.)

The researchers will be the centerpiece of their own study.

Like a game show, winning this DARPA prize eventually will earn the lucky scientists a trip for three to Washington, D.C. Except on this excursion, every participating scientist’s e-mail to the travel agent, every padded bar bill and every mad lunge for a cab will be monitored, categorized and later dissected.

WIRED 07.14.2003

Pentagon Alters LifeLog Project

By Noah Shachtman.

Bending a bit to privacy concerns, the Pentagon changes some of the experiments to be conducted for LifeLog, its effort to record every tidbit of information and encounter in daily life. No video recording of unsuspecting people, for example.

MONDAY IS THE deadline for researchers to submit bids to build the Pentagon’s so-called LifeLog project, an experiment to create an all-encompassing über-diary.

But while teams of academics and entrepreneurs are jostling for the 18- to 24-month grants to work on the program, the Defense Department has changed the parameters of the project to respond to a tide of privacy concerns.

LifeLog is the Defense Advanced Research Projects Agency’s effort to gather every conceivable element of a person’s life, dump it all into a database, and spin the information into narrative threads that trace relationships, events and experiences.

It’s an attempt, some say, to make a kind of surrogate, digitized memory.

“My father was a stroke victim, and he lost the ability to record short-term memories,” said Howard Shrobe, an MIT computer scientist who’s leading a team of professors and researchers in a LifeLog bid. “If you ever saw the movie Memento, he had that. So I’m interested in seeing how memory works after seeing a broken one. LifeLog is a chance to do that.”

Researchers who receive LifeLog grants will be required to test the system on themselves. Cameras will record everything they do during a trip to Washington, D.C., and global-positioning satellite locators will track where they go. Biomedical sensors will monitor their health. All the e-mail they send, all the magazines they read, all the credit card payments they make will be indexed and made searchable.

By capturing experiences, Darpa claims that LifeLog could help develop more realistic computerized training programs and robotic assistants for battlefield commanders.

Defense analysts and civil libertarians, on the other hand, worry that the program is another piece in an ongoing Pentagon effort to keep tabs on American citizens. LifeLog could become the ultimate profiling tool, they fear.

A firestorm of criticism ignited after LifeLog first became public in May. Some potential bidders for the LifeLog contract dropped out as a result.

“I’m interested in LifeLog, but I’m going to shy away from it,” said Les Vogel, a computer science researcher in Maui, Hawaii. “Who wants to get in the middle of something that gets that much bad press?”

New York Times columnist William Safire noted that while LifeLog researchers might be comfortable recording their lives, the people that the LifeLoggers are “looking at, listening to, sniffing or conspiring with to blow up the world” might not be so thrilled about turning over some of their private interchanges to the Pentagon.

In response, Darpa changed the LifeLog proposal request. Now: “LifeLog researchers shall not capture imagery or audio of any person without that person’s a priori express permission. In fact, it is desired that capture of imagery or audio of any person other than the user be avoided even if a priori permission is granted.”

Steven Aftergood, with the Federation of American Scientists, sees the alterations as evidence that Darpa proposals must receive a thorough public vetting.

“Darpa doesn’t spontaneously modify their programs in this way,” he said. “It requires public criticism. Give them credit, however, for acknowledging public concerns.”

But not too much, said John Pike, director of GlobalSecurity.org.

“Darpa adds these contractual provisions to appear to be above suspicion,” Pike said. “But if you can put them in, you can take them out.”

WIRED 07.29.2003

Helping Machines Think Different

By Noah Shachtman.

While the Pentagon’s project to record and catalog a person’s life scares privacy advocates, researchers see it as a step in the process of getting computers to think like humans.

TO PENTAGON RESEARCHERS, capturing and categorizing every aspect of a person’s life is only the beginning.

LifeLog — the controversial Defense Department initiative to track everything about an individual — is just one step in a larger effort, according to a top Pentagon research director. Personalized digital assistants that can guess our desires should come first. And then, just maybe, we’ll see computers that can think for themselves.

Computer scientists have dreamed for decades of building machines with minds of their own. But these hopes have been overwhelmed again and again by the messy, dizzying complexities of the real world.

In recent months, the Defense Advanced Research Projects Agency has launched a series of seemingly disparate programs — all designed, the agency says, to help computers deal with the complexities of life, so they finally can begin to think.

“Our ultimate goal is to build a new generation of computer systems that are substantially more robust, secure, helpful, long-lasting and adaptive to their users and tasks. These systems will need to reason, learn and respond intelligently to things they’ve never encountered before,” said Ron Brachman, the recently installed chief of Darpa’s Information Processing Technology Office, or IPTO. A former senior executive at AT&T Labs, Brachman was elected president of the American Association for Artificial Intelligence last year.

LifeLog is the best-known of these projects. The controversial program intends to record everything about a person — what he sees, where he goes, how he feels — and dump it into a database. Once captured, the information is supposed to be spun into narrative threads that trace relationships, events and experiences.

For years, researchers have been able to get programs to make sense of limited, tightly circumscribed situations. Navigating outside of the lab has been much more difficult. Until recently, even getting a robot to walk across the room on its own was a tricky task.

“LifeLog is about forcing computers into the real world,” said leading artificial intelligence researcher Doug Lenat, who’s bidding on the project.

What LifeLog is not, Brachman asserts, is a program to track terrorists. By capturing so much information about an individual, and by combing relationships and traits out of that data, LifeLog appears to some civil libertarians to be an almost limitless tool for profiling potential enemies of the state. Concerns over the Terrorism Information Awareness database effort have only heightened sensitivities.

“These technologies developed by the military have obvious, easy paths to Homeland Security deployments,” said Lee Tien, with the Electronic Frontier Foundation.

Brachman said it is “up to military leaders to decide how to use our technology in support of their mission,” but he repeatedly insisted that IPTO has “absolutely no interest or intention of using any of our technology for profiling.”

What Brachman does want to do is create a computerized assistant that can learn about the habits and wishes of its human boss. And the first step toward this goal is for machines to start seeing, and remembering, life like people do.

Human beings don’t dump their experiences into some formless database or tag them with a couple of keywords. They divide their lives into discrete installments — “college,” “my first date,” “last Thursday.” Researchers call this “episodic memory.”

LifeLog is about trying to install episodic memory into computers, Brachman said. It’s about getting machines to start “remembering experiences in the commonsensical way we do — a vacation in Bermuda, a taxi ride to the airport.”

IPTO recently handed out $29 million in research grants to create a Perceptive Assistant that Learns, or PAL, that can draw on these episodes and improve itself in the process. If people keep missing conferences during rush hour, PAL should learn to schedule meetings when traffic isn’t as thick. If PAL’s boss keeps sending angry notes to spammers, the software secretary eventually should just start flaming on its own.

In the 1980s, artificial intelligence researchers promised to create programs that could do just that. Darpa even promoted a thinking “pilot’s associate — a kind of R2D2,” said Alex Roland, author of The Race for Machine Intelligence: Darpa, DoD, and the Strategic Computing Initiative.

But the field “fell on its face,” according to University of Washington computer scientist Henry Kautz. Instead of trying to teach computers how to reason on their own, “we said, ‘Well, if we just keep adding more rules, we could cover every case imaginable.'”

It’s an impossible task, of course. Every circumstance is different, and there will never be enough stipulations to cover them all.

A few computer programs, with enough training from their human masters, can make some assumptions about new situations on their own, however. Amazon.com’s system for recommending books and music is one of these.

But these efforts are limited, too. Everyone’s received downright kooky suggestions from that Amazon program.

Overcoming these limitations requires a combination of logical approaches. That’s a goal behind IPTO’s new call for research into computers that can handle real-world reasoning.

It’s one of several problems Brachman said are “absolutely imperative” to solve as quickly as possible.

Although computer systems are getting more complicated every day, this complexity “may be actually reversing the information revolution,” he noted in a recent presentation. “Systems have grown more rigid, more fragile and increasingly open to attack.”

What’s needed, he asserts, is a computer network that can teach itself new capabilities, without having to be reprogrammed every time. Computers should be able to adapt to how their users like to work, spot when they’re being attacked and develop responses to these assaults. Think of it like the body’s immune system — or like a battlefield general.

But to act more like a person, a computer has to soak up its own experiences, like a human being does. It has to create a catalog of its existence. A LifeLog, if you will.

WIRED 02.04.2004

Pentagon Kills LifeLog Project

THE PENTAGON CANCELED its so-called LifeLog project, an ambitious effort to build a database tracking a person’s entire existence.

Run by Darpa, the Defense Department’s research arm, LifeLog aimed to gather in a single place just about everything an individual says, sees or does: the phone calls made, the TV shows watched, the magazines read, the plane tickets bought, the e-mail sent and received. Out of this seemingly endless ocean of information, computer scientists would plot distinctive routes in the data, mapping relationships, memories, events and experiences.

LifeLog’s backers said the all-encompassing diary could have turned into a near-perfect digital memory, giving its users computerized assistants with an almost flawless recall of what they had done in the past. But civil libertarians immediately pounced on the project when it debuted last spring, arguing that LifeLog could become the ultimate tool for profiling potential enemies of the state.

Researchers close to the project say they’re not sure why it was dropped late last month. Darpa hasn’t provided an explanation for LifeLog’s quiet cancellation. “A change in priorities” is the only rationale agency spokeswoman Jan Walker gave to Wired News.

However, related Darpa efforts concerning software secretaries and mechanical brains are still moving ahead as planned.

LifeLog is the latest in a series of controversial programs that have been canceled by Darpa in recent months. The Terrorism Information Awareness, or TIA, data-mining initiative was eliminated by Congress — although many analysts believe its research continues on the classified side of the Pentagon’s ledger. The Policy Analysis Market (or FutureMap), which provided a stock market of sorts for people to bet on terror strikes, was almost immediately withdrawn after its details came to light in July.

“I’ve always thought (LifeLog) would be the third program (after TIA and FutureMap) that could raise eyebrows if they didn’t make it clear how privacy concerns would be met,” said Peter Harsha, director of government affairs for the Computing Research Association.

“Darpa’s pretty gun-shy now,” added Lee Tien, with the Electronic Frontier Foundation, which has been critical of many agency efforts. “After TIA, they discovered they weren’t ready to deal with the firestorm of criticism.”

That’s too bad, artificial-intelligence researchers say. LifeLog would have addressed one of the key issues in developing computers that can think: how to take the unstructured mess of life, and recall it as discrete episodes — a trip to Washington, a sushi dinner, construction of a house.

“Obviously we’re quite disappointed,” said Howard Shrobe, who led a team from the Massachusetts Institute of Technology Artificial Intelligence Laboratory which spent weeks preparing a bid for a LifeLog contract. “We were very interested in the research focus of the program … how to help a person capture and organize his or her experience. This is a theme with great importance to both AI and cognitive science.”

To Tien, the project’s cancellation means “it’s just not tenable for Darpa to say anymore, ‘We’re just doing the technology, we have no responsibility for how it’s used.'”

Private-sector research in this area is proceeding. At Microsoft, for example, minicomputer pioneer Gordon Bell’s program, MyLifeBits, continues to develop ways to sort and store memories.

David Karger, Shrobe’s colleague at MIT, thinks such efforts will still go on at Darpa, too.

“I am sure that such research will continue to be funded under some other title,” wrote Karger in an e-mail. “I can’t imagine Darpa ‘dropping out’ of such a key research area.”

MEANWHILE…

Google: seeded by the Pentagon

By Dr. Nafeez Ahmed

In 1994 — the same year the Highlands Forum was founded under the stewardship of the Office of the Secretary of Defense, the Office of Net Assessment (ONA), and DARPA — two young PhD students at Stanford University, Sergey Brin and Larry Page, made their breakthrough on the first automated web crawling and page ranking application. That application remains the core component of what eventually became Google’s search service. Brin and Page had performed their work with funding from the Digital Library Initiative (DLI), a multi-agency programme of the National Science Foundation (NSF), NASA and DARPA.

But that’s just one side of the story.


Also check: OBAMA, DARPA, GSK AND ROCKEFELLER’S $4.5B B.R.A.I.N. INITIATIVE – BETTER SIT WHEN YOU READ

Throughout the development of the search engine, Sergey Brin reported regularly and directly to two people who were not Stanford faculty at all: Dr. Bhavani Thuraisingham and Dr. Rick Steinheiser. Both were representatives of a sensitive US intelligence community research programme on information security and data-mining.

Thuraisingham is currently the Louis A. Beecherl distinguished professor and executive director of the Cyber Security Research Institute at the University of Texas, Dallas, and a sought-after expert on data-mining, data management and information security issues. But in the 1990s, she worked for the MITRE Corp., a leading US defense contractor, where she managed the Massive Digital Data Systems initiative, a project sponsored by the NSA, CIA, and the Director of Central Intelligence, to foster innovative research in information technology.

“We funded Stanford University through the computer scientist Jeffrey Ullman, who had several promising graduate students working on many exciting areas,” Prof. Thuraisingham told me. “One of them was Sergey Brin, the founder of Google. The intelligence community’s MDDS program essentially provided Brin seed-funding, which was supplemented by many other sources, including the private sector.”

This sort of funding is certainly not unusual, and the fact that Sergey Brin received it as a graduate student at Stanford appears to have been incidental. The Pentagon was all over computer science research at this time. But it illustrates how deeply entrenched the culture of Silicon Valley is in the values of the US intelligence community.

In an extraordinary document hosted by the website of the University of Texas, Thuraisingham recounts that from 1993 to 1999, “the Intelligence Community [IC] started a program called Massive Digital Data Systems (MDDS) that I was managing for the Intelligence Community when I was at the MITRE Corporation.” The program funded 15 research efforts at various universities, including Stanford. Its goal was developing “data management technologies to manage several terabytes to petabytes of data,” including for “query processing, transaction management, metadata management, storage management, and data integration.”

At the time, Thuraisingham was chief scientist for data and information management at MITRE, where she led team research and development efforts for the NSA, CIA, US Air Force Research Laboratory, as well as the US Navy’s Space and Naval Warfare Systems Command (SPAWAR) and Communications and Electronic Command (CECOM). She went on to teach courses for US government officials and defense contractors on data-mining in counter-terrorism.

In her University of Texas article, she attaches the copy of an abstract of the US intelligence community’s MDDS program that had been presented to the “Annual Intelligence Community Symposium” in 1995. The abstract reveals that the primary sponsors of the MDDS programme were three agencies: the NSA, the CIA’s Office of Research & Development, and the intelligence community’s Community Management Staff (CMS) which operates under the Director of Central Intelligence. Administrators of the program, which provided funding of around 3–4 million dollars per year for 3–4 years, were identified as Hal Curran (NSA), Robert Kluttz (CMS), Dr. Claudia Pierce (NSA), Dr. Rick Steinheiser (ORD — standing for the CIA’s Office of Research and Development), and Dr. Thuraisingham herself.

Thuraisingham goes on in her article to reiterate that this joint CIA-NSA program partly funded Sergey Brin to develop the core of Google, through a grant to Stanford managed by Brin’s supervisor Prof. Jeffrey D. Ullman:

“In fact, the Google founder Mr. Sergey Brin was partly funded by this program while he was a PhD student at Stanford. He together with his advisor Prof. Jeffrey Ullman and my colleague at MITRE, Dr. Chris Clifton [Mitre’s chief scientist in IT], developed the Query Flocks System which produced solutions for mining large amounts of data stored in databases. I remember visiting Stanford with Dr. Rick Steinheiser from the Intelligence Community and Mr. Brin would rush in on roller blades, give his presentation and rush out. In fact the last time we met in September 1998, Mr. Brin demonstrated to us his search engine which became Google soon after.”

Brin and Page officially incorporated Google as a company in September 1998, the very month they last reported to Thuraisingham and Steinheiser. ‘Query Flocks’ was also part of Google’s patented ‘PageRank’ search system, which Brin developed at Stanford under the CIA-NSA-MDDS programme, as well as with funding from the NSF, IBM and Hitachi. That year, MITRE’s Dr. Chris Clifton, who worked under Thuraisingham to develop the ‘Query Flocks’ system, co-authored a paper with Brin’s supervisor, Prof. Ullman, and the CIA’s Rick Steinheiser. Titled ‘Knowledge Discovery in Text,’ the paper was presented at an academic conference.

“The MDDS funding that supported Brin was significant as far as seed-funding goes, but it was probably outweighed by the other funding streams,” said Thuraisingham. “The duration of Brin’s funding was around two years or so. In that period, I and my colleagues from the MDDS would visit Stanford to see Brin and monitor his progress every three months or so. We didn’t supervise exactly, but we did want to check progress, point out potential problems and suggest ideas. In those briefings, Brin did present to us on the query flocks research, and also demonstrated to us versions of the Google search engine.”

Brin thus reported to Thuraisingham and Steinheiser regularly about his work developing Google.

==

UPDATE 2.05PM GMT [2nd Feb 2015]:

Since publication of this article, Prof. Thuraisingham has amended her article referenced above. The amended version includes a new modified statement, followed by a copy of the original version of her account of the MDDS. In this amended version, Thuraisingham rejects the idea that the CIA funded Google, and says instead:

“In fact Prof. Jeffrey Ullman (at Stanford) and my colleague at MITRE Dr. Chris Clifton together with some others developed the Query Flocks System, as part of MDDS, which produced solutions for mining large amounts of data stored in databases. Also, Mr. Sergey Brin, the cofounder of Google, was part of Prof. Ullman’s research group at that time. I remember visiting Stanford with Dr. Rick Steinheiser from the Intelligence Community periodically and Mr. Brin would rush in on roller blades, give his presentation and rush out. During our last visit to Stanford in September 1998, Mr. Brin demonstrated to us his search engine which I believe became Google soon after…

There are also several inaccuracies in Dr. Ahmed’s article (dated January 22, 2015). For example, the MDDS program was not a ‘sensitive’ program as stated by Dr. Ahmed; it was an Unclassified program that funded universities in the US. Furthermore, Sergey Brin never reported to me or to Dr. Rick Steinheiser; he only gave presentations to us during our visits to the Department of Computer Science at Stanford during the 1990s. Also, MDDS never funded Google; it funded Stanford University.”

Here, there is no substantive factual difference between Thuraisingham’s accounts, other than her assertion that her earlier statement associating Sergey Brin with the development of ‘query flocks’ was mistaken. Notably, this acknowledgement is derived not from her own knowledge, but from this very article quoting a comment from a Google spokesperson.

However, the bizarre attempt to disassociate Google from the MDDS program misses the mark.

Firstly, the MDDS never funded Google, because during the development of the core components of the Google search engine, there was no company incorporated with that name. The grant was instead provided to Stanford University through Prof. Ullman, through whom some MDDS funding was used to support Brin, who was co-developing Google at the time.

Secondly, Thuraisingham then adds that Brin never “reported” to her or the CIA’s Steinheiser, but admits he “gave presentations to us during our visits to the Department of Computer Science at Stanford during the 1990s.” It is unclear, though, what the distinction is between reporting and delivering a detailed presentation — either way, Thuraisingham confirms that she and the CIA had taken a keen interest in Brin’s development of Google.

Thirdly, Thuraisingham describes the MDDS program as “unclassified,” but this does not contradict its “sensitive” nature. As someone who has worked for decades as an intelligence contractor and advisor, Thuraisingham is surely aware that there are many ways of categorizing intelligence, including ‘sensitive but unclassified.’ A number of former US intelligence officials I spoke to said that the almost total lack of public information on the CIA and NSA’s MDDS initiative suggests that although the program was not classified, its contents were likely considered sensitive, which would explain efforts to minimise transparency about the program and the way it fed back into developing tools for the US intelligence community.

Fourthly, and finally, it is important to point out that the MDDS abstract which Thuraisingham includes in her University of Texas document states clearly not only that the Director of Central Intelligence’s CMS, CIA and NSA were the overseers of the MDDS initiative, but that the intended customers of the project were “DoD, IC, and other government organizations”: the Pentagon, the US intelligence community, and other relevant US government agencies.

In other words, the provision of MDDS funding to Brin through Ullman, under the oversight of Thuraisingham and Steinheiser, was fundamentally because they recognized the potential utility of Brin’s work developing Google to the Pentagon, intelligence community, and the federal government at large.

==

The MDDS programme is actually referenced in several papers co-authored by Brin and Page while at Stanford, specifically highlighting its role in financially sponsoring Brin in the development of Google. In their 1998 paper published in the Bulletin of the IEEE Computer Society Technical Committee on Data Engineering, they describe the automation of methods to extract information from the web via “Dual Iterative Pattern Relation Extraction,” the development of “a global ranking of Web pages called PageRank,” and the use of PageRank “to develop a novel search engine called Google.” Through an opening footnote, Sergey Brin confirms he was “Partially supported by the Community Management Staff’s Massive Digital Data Systems Program, NSF grant IRI-96-31952” — confirming that Brin’s work developing Google was indeed partly funded by the CIA-NSA-MDDS program.
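For readers who want a concrete picture of what “a global ranking of Web pages” means, below is a minimal, illustrative power-iteration sketch of the PageRank idea in Python. It is not Brin and Page’s code and makes no claim about Google’s actual implementation; the toy link graph, damping factor and iteration count are assumptions chosen only for the example.

```python
# Illustrative only: a toy PageRank power iteration.
# Not Brin and Page's original code; graph and parameters are made up.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}              # start from a uniform rank
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                        # dangling page: spread its rank evenly
                share = damping * rank[page] / n
                for p in pages:
                    new_rank[p] += share
            else:                                   # pass rank along each outgoing link
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

toy_web = {
    "A": ["B", "C"],   # page A links to B and C
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],        # D links out, but nobody links to D
}
print(sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]))
```

The intuition is recursive: a page ranks highly when pages that themselves rank highly link to it, which is the idea behind the “global ranking of Web pages” the 1998 paper describes.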

This NSF grant, identified alongside the MDDS and whose project report lists Brin among the students supported (without mentioning the MDDS), was different from the NSF grant to Larry Page that included funding from DARPA and NASA. The project report, authored by Brin’s supervisor Prof. Ullman, goes on to say under the section ‘Indications of Success’ that “there are some new stories of startups based on NSF-supported research.” Under ‘Project Impact,’ the report remarks: “Finally, the google project has also gone commercial as Google.com.”

Thuraisingham’s account, including her new amended version, therefore demonstrates that the CIA-NSA-MDDS program was not only partly funding Brin throughout his work with Larry Page developing Google, but that senior US intelligence representatives including a CIA official oversaw the evolution of Google in this pre-launch phase, all the way until the company was ready to be officially founded. Google, then, had been enabled with a “significant” amount of seed-funding and oversight from the Pentagon: namely, the CIA, NSA, and DARPA.

The DoD could not be reached for comment.

When I asked Prof. Ullman to confirm whether or not Brin was partly funded under the intelligence community’s MDDS program, and whether Ullman was aware that Brin was regularly briefing the CIA’s Rick Steinheiser on his progress in developing the Google search engine, Ullman’s responses were evasive: “May I know whom you represent and why you are interested in these issues? Who are your ‘sources’?” He also denied that Brin played a significant role in developing the ‘query flocks’ system, although it is clear from Brin’s papers that he did draw on that work in co-developing the PageRank system with Page.

When I asked Ullman whether he was denying the US intelligence community’s role in supporting Brin during the development of Google, he said: “I am not going to dignify this nonsense with a denial. If you won’t explain what your theory is, and what point you are trying to make, I am not going to help you in the slightest.”

The MDDS abstract published online at the University of Texas confirms that the rationale for the CIA-NSA project was to “provide seed money to develop data management technologies which are of high-risk and high-pay-off,” including techniques for “querying, browsing, and filtering; transaction processing; access methods and indexing; metadata management and data modelling; and integrating heterogeneous databases; as well as developing appropriate architectures.” The ultimate vision of the program was to “provide for the seamless access and fusion of massive amounts of data, information and knowledge in a heterogeneous, real-time environment” for use by the Pentagon, intelligence community and potentially across government.

These revelations corroborate the claims of Robert Steele, former senior CIA officer and a founding civilian deputy director of the Marine Corps Intelligence Activity, whom I interviewed for The Guardian last year on open source intelligence. Citing sources at the CIA, Steele had said in 2006 that Steinheiser, an old colleague of his, was the CIA’s main liaison at Google and had arranged early funding for the pioneering IT firm. At the time, Wired co-founder John Battelle managed to get this official denial from a Google spokesperson in response to Steele’s assertions:

“The statements related to Google are completely untrue.”

This time round, despite multiple requests and conversations, a Google spokesperson declined to comment.

UPDATE: As of 5.41PM GMT [22nd Jan 2015], Google’s director of corporate communication got in touch and asked me to include the following statement:

“Sergey Brin was not part of the Query Flocks Program at Stanford, nor were any of his projects funded by US Intelligence bodies.”

This is what I wrote back:

My response to that statement would be as follows: Brin himself in his own paper acknowledges funding from the Community Management Staff of the Massive Digital Data Systems (MDDS) initiative, which was supplied through the NSF. The MDDS was an intelligence community program set up by the CIA and NSA. I also have it on record, as noted in the piece, from Prof. Thuraisingham of University of Texas that she managed the MDDS program on behalf of the US intelligence community, and that she and the CIA’s Rick Steinheiser met Brin every three months or so for two years to be briefed on his progress developing Google and PageRank. Whether Brin worked on query flocks or not is neither here nor there.

In that context, you might want to consider the following questions:

1) Does Google deny that Brin’s work was part-funded by the MDDS via an NSF grant?

2) Does Google deny that Brin reported regularly to Thuraisingham and Steinheiser from around 1996 until September 1998, when he presented the Google search engine to them?

LESSER-KNOWN FACT: AROUND THE SAME TIME, IN 2004, SERGEY BRIN JOINED THE WORLD ECONOMIC FORUM’S YOUTH ORGANIZATION, THE “YOUNG GLOBAL LEADERS”

Total Information Awareness

A call for papers for the MDDS was sent out via email list on November 3rd 1993 from senior US intelligence official David Charvonia, director of the research and development coordination office of the intelligence community’s CMS. The reaction from Tatu Ylonen (celebrated inventor of the widely used secure shell [SSH] data protection protocol) to his colleagues on the email list is telling: “Crypto relevance? Makes you think whether you should protect your data.” The email also confirms that defense contractor and Highlands Forum partner, SAIC, was managing the MDDS submission process, with abstracts to be sent to Jackie Booth of the CIA’s Office of Research and Development via a SAIC email address.

By 1997, Thuraisingham reveals, shortly before Google became incorporated and while she was still overseeing the development of its search engine software at Stanford, her thoughts turned to the national security applications of the MDDS program. In the acknowledgements to her book, Web Data Mining and Applications in Business Intelligence and Counter-Terrorism (2003), Thuraisingham writes that she and “Dr. Rick Steinheiser of the CIA, began discussions with Defense Advanced Research Projects Agency on applying data-mining for counter-terrorism,” an idea that resulted directly from the MDDS program which partly funded Google. “These discussions eventually developed into the current EELD (Evidence Extraction and Link Detection) program at DARPA.”

So the very same senior CIA official and CIA-NSA contractor involved in providing the seed-funding for Google were simultaneously contemplating the role of data-mining for counter-terrorism purposes, and were developing ideas for tools actually advanced by DARPA.

Today, as illustrated by her recent op-ed in the New York Times, Thuraisingham remains a staunch advocate of data-mining for counter-terrorism purposes, but also insists that these methods must be developed by government in cooperation with civil liberties lawyers and privacy advocates to ensure that robust procedures are in place to prevent potential abuse. She points out, damningly, that with the quantity of information being collected, there is a high risk of false positives.

In 1993, when the MDDS program was launched and managed by MITRE Corp. on behalf of the US intelligence community, University of Virginia computer scientist Dr. Anita K. Jones — a MITRE trustee — landed the job of DARPA director and head of research and engineering across the Pentagon. She had been on the board of MITRE since 1988. From 1987 to 1993, Jones simultaneously served on SAIC’s board of directors. As the new head of DARPA from 1993 to 1997, she also co-chaired the Pentagon’s Highlands Forum during the period of Google’s pre-launch development at Stanford under the MDDS.

Thus, when Thuraisingham and Steinheiser were talking to DARPA about the counter-terrorism applications of MDDS research, Jones was DARPA director and Highlands Forum co-chair. That year, Jones left DARPA to return to her post at the University of Virginia. The following year, she joined the board of the National Science Foundation, which of course had also just funded Brin and Page, and also returned to the board of SAIC. When she left DoD, Senator Chuck Robb paid Jones the following tribute: “She brought the technology and operational military communities together to design detailed plans to sustain US dominance on the battlefield into the next century.”

Dr. Anita Jones, head of DARPA from 1993–1997, and co-chair of the Pentagon Highlands Forum from 1995–1997, during which officials in charge of the CIA-NSA-MDDS program were funding Google, and in communication with DARPA about data-mining for counterterrorism

On the board of the National Science Foundation from 1992 to 1998 (including a stint as chairman from 1996) was Richard N. Zare. This was the period in which the NSF sponsored Sergey Brin and Larry Page in association with DARPA. In June 1994, Prof. Zare, a chemist at Stanford, participated with Prof. Jeffrey Ullman (who supervised Sergey Brin’s research) on a panel sponsored by Stanford and the National Research Council discussing the need for scientists to show how their work “ties to national needs.” The panel brought together scientists and policymakers, including “Washington insiders.”

DARPA’s EELD program, inspired by the work of Thuraisingham and Steinheiser under Jones’ watch, was rapidly adapted and integrated with a suite of tools to conduct comprehensive surveillance under the Bush administration.

According to DARPA official Ted Senator, who led the EELD program for the agency’s short-lived Information Awareness Office, EELD was among a range of “promising techniques” being prepared for integration “into the prototype TIA system.” TIA stood for Total Information Awareness, and was the main global electronic eavesdropping and data-mining program deployed by the Bush administration after 9/11. TIA had been set up by Iran-Contra conspirator Admiral John Poindexter, who was appointed in 2002 by Bush to lead DARPA’s new Information Awareness Office.

The Xerox Palo Alto Research Center (PARC) was another contractor among 26 companies (also including SAIC) that received million-dollar contracts from DARPA (the specific amounts remained classified) under Poindexter to push forward the TIA surveillance program from 2002 onwards. The research included “behaviour-based profiling,” “automated detection, identification and tracking” of terrorist activity, among other data-analyzing projects. At this time, PARC’s director and chief scientist was John Seely Brown. Both Brown and Poindexter were Pentagon Highlands Forum participants — Brown on a regular basis until recently.

TIA was purportedly shut down in 2003 due to public opposition after the program was exposed in the media, but the following year Poindexter participated in a Pentagon Highlands Group session in Singapore, alongside defense and security officials from around the world. Meanwhile, Ted Senator continued to manage the EELD program among other data-mining and analysis projects at DARPA until 2006, when he left to become a vice president at SAIC. He is now a SAIC/Leidos technical fellow.

Google, DARPA and the money trail

Long before the appearance of Sergey Brin and Larry Page, Stanford University’s computer science department had a close working relationship with US military intelligence. A letter dated November 5th 1984 from the office of renowned artificial intelligence (AI) expert, Prof Edward Feigenbaum, addressed to Rick Steinheiser, gives the latter directions to Stanford’s Heuristic Programming Project, addressing Steinheiser as a member of the “AI Steering Committee.” A list of attendees at a contractor conference around that time, sponsored by the Pentagon’s Office of Naval Research (ONR), includes Steinheiser as a delegate under the designation “OPNAV Op-115” — which refers to the Office of the Chief of Naval Operations’ program on operational readiness, which played a major role in advancing digital systems for the military.

From the 1970s, Prof. Feigenbaum and his colleagues had been running Stanford’s Heuristic Programming Project under contract with DARPA, continuing through to the 1990s. Feigenbaum alone had received over $7 million in this period for his work from DARPA, along with other funding from the NSF, NASA, and ONR.

Brin’s supervisor at Stanford, Prof. Jeffrey Ullman, was in 1996 part of a joint funding project of DARPA’s Intelligent Integration of Information program. That year, Ullman co-chaired DARPA-sponsored meetings on data exchange between multiple systems.

In September 1998, the same month that Sergey Brin briefed US intelligence representatives Steinheiser and Thuraisingham, tech entrepreneurs Andreas Bechtolsheim and David Cheriton invested $100,000 each in Google. Both investors were connected to DARPA.

As a Stanford PhD student in electrical engineering in the 1980s, Bechtolsheim had his pioneering SUN workstation project funded by DARPA and the Stanford computer science department — research that laid the foundation for Sun Microsystems, which he co-founded with William Joy.

As for Bechtolsheim’s co-investor in Google, David Cheriton, the latter is a long-time Stanford computer science professor who has an even more entrenched relationship with DARPA. His bio at the University of Alberta, which in November 2014 awarded him an honorary science doctorate, says that Cheriton’s “research has received the support of the US Defense Advanced Research Projects Agency (DARPA) for over 20 years.”

In the meantime, Bechtolsheim left Sun Microsystems in 1995, co-founding Granite Systems with his fellow Google investor Cheriton as a partner. They sold Granite to Cisco Systems in 1996, retaining significant ownership of Granite, and becoming senior Cisco executives.

An email obtained from the Enron Corpus (a database of 600,000 emails acquired by the Federal Energy Regulatory Commission and later released to the public) from Richard O’Neill, inviting Enron executives to participate in the Highlands Forum, shows that Cisco and Granite executives are intimately connected to the Pentagon. The email reveals that in May 2000, Bechtolsheim’s partner and Sun Microsystems co-founder, William Joy — who was then chief scientist and corporate executive officer there — had attended the Forum to discuss nanotechnology and molecular computing.

In 1999, Joy had also co-chaired the President’s Information Technology Advisory Committee, overseeing a report acknowledging that DARPA had:

“… revised its priorities in the 90’s so that all information technology funding was judged in terms of its benefit to the warfighter.”

Throughout the 1990s, then, DARPA’s funding to Stanford, including Google, was explicitly about developing technologies that could augment the Pentagon’s military intelligence operations in war theatres.

The Joy report recommended more federal government funding from the Pentagon, NASA, and other agencies to the IT sector. Greg Papadopoulos, another of Bechtolsheim’s colleagues and then Sun Microsystems’ chief technology officer, also attended a Pentagon Highlands Forum meeting in September 2000.

In November, the Pentagon Highlands Forum hosted Sue Bostrom, who was vice president for the internet at Cisco, sitting on the company’s board alongside Google co-investors Bechtolsheim and Cheriton. The Forum also hosted Lawrence Zuriff, then a managing partner of Granite, which Bechtolsheim and Cheriton had sold to Cisco. Zuriff had previously been an SAIC contractor from 1993 to 1994, working with the Pentagon on national security issues, specifically for Andrew Marshall’s Office of Net Assessment. In 1994, both SAIC and the ONA were, of course, involved in co-establishing the Pentagon Highlands Forum. Among Zuriff’s output during his SAIC tenure was a paper titled ‘Understanding Information War’, delivered at a SAIC-sponsored US Army Roundtable on the Revolution in Military Affairs.

After Google’s incorporation, the company received $25 million in equity funding in 1999 led by Sequoia Capital and Kleiner Perkins Caufield & Byers. According to Homeland Security Today, “A number of Sequoia-bankrolled start-ups have contracted with the Department of Defense, especially after 9/11 when Sequoia’s Mark Kvamme met with Defense Secretary Donald Rumsfeld to discuss the application of emerging technologies to warfighting and intelligence collection.” Similarly, Kleiner Perkins had developed “a close relationship” with In-Q-Tel, the CIA venture capitalist firm that funds start-ups “to advance ‘priority’ technologies of value” to the intelligence community.

John Doerr, who led the Kleiner Perkins investment in Google and obtained a board position, was a major early investor in Bechtolsheim’s Sun Microsystems at its launch. He and his wife Anne are the main funders behind Rice University’s Center for Engineering Leadership (RCEL), which in 2009 received $16 million from DARPA for its platform-aware-compilation-environment (PACE) ubiquitous computing R&D program. Doerr also has a close relationship with the Obama administration, which he advised shortly after it took power to ramp up Pentagon funding to the tech industry. In 2013, at the Fortune Brainstorm TECH conference, Doerr applauded “how the DoD’s DARPA funded GPS, CAD, most of the major computer science departments, and of course, the Internet.”

From inception, in other words, Google was incubated, nurtured and financed by interests that were directly affiliated or closely aligned with the US military intelligence community: many of whom were embedded in the Pentagon Highlands Forum.

Google captures the Pentagon

In 2003, Google began customizing its search engine under special contract with the CIA for its Intelink Management Office, “overseeing top-secret, secret and sensitive but unclassified intranets for CIA and other IC agencies,” according to Homeland Security Today. That year, CIA funding was also being “quietly” funneled through the National Science Foundation to projects that might help create “new capabilities to combat terrorism through advanced technology.”

The following year, Google bought the firm Keyhole, which had originally been funded by In-Q-Tel. Using Keyhole, Google began developing the advanced satellite mapping software behind Google Earth. Former DARPA director and Highlands Forum co-chair Anita Jones had been on the board of In-Q-Tel at this time, and remains so today.

Then in November 2005, In-Q-Tel issued notices to sell $2.2 million of Google stocks. Google’s relationship with US intelligence was further brought to light when an IT contractor told a closed Washington DC conference of intelligence professionals on a not-for-attribution basis that at least one US intelligence agency was working to “leverage Google’s [user] data monitoring” capability as part of an effort to acquire data of “national security intelligence interest.”

A photo on Flickr dated March 2007 reveals that Google research director and AI expert Peter Norvig attended a Pentagon Highlands Forum meeting that year in Carmel, California. Norvig’s intimate connection to the Forum as of that year is also corroborated by his role in guest editing the 2007 Forum reading list.

The photo below shows Norvig in conversation with Lewis Shepherd, who at that time was senior technology officer at the Defense Intelligence Agency, responsible for investigating, approving, and architecting “all new hardware/software systems and acquisitions for the Global Defense Intelligence IT Enterprise,” including “big data technologies.” Shepherd now works at Microsoft. Norvig was a computer research scientist at Stanford University in 1991 before joining Bechtolsheim’s Sun Microsystems as senior scientist until 1994, and going on to head up NASA’s computer science division.

Lewis Shepherd (left), then a senior technology officer at the Pentagon’s Defense Intelligence Agency, talking to Peter Norvig (right), renowned artificial intelligence expert and director of research at Google. This photo is from a Highlands Forum meeting in 2007.

Norvig shows up on O’Neill’s Google Plus profile as one of his close connections. Scoping the rest of O’Neill’s Google Plus connections illustrates that he is directly connected not just to a wide range of Google executives, but also to some of the biggest names in the US tech community.

Those connections include Michele Weslander Quaid, an ex-CIA contractor and former senior Pentagon intelligence official who is now Google’s chief technology officer where she is developing programs to “best fit government agencies’ needs”; Elizabeth Churchill, Google director of user experience; James Kuffner, a humanoid robotics expert who now heads up Google’s robotics division and who introduced the term ‘cloud robotics’; Mark Drapeau, director of innovation engagement for Microsoft’s public sector business; Lili Cheng, general manager of Microsoft’s Future Social Experiences (FUSE) Labs; Jon Udell, Microsoft ‘evangelist’; Cory Ondrejka, vice president of engineering at Facebook; to name just a few.

In 2010, Google signed a multi-billion dollar no-bid contract with the NSA’s sister agency, the National Geospatial-Intelligence Agency (NGA). The contract was to use Google Earth for visualization services for the NGA. Google had developed the software behind Google Earth by purchasing Keyhole from the CIA venture firm In-Q-Tel.

Then a year later, in 2011, another of O’Neill’s Google Plus connections, Michele Quaid — who had served in executive positions at the NGA, National Reconnaissance Office and the Office of the Director of National Intelligence — left her government role to become Google ‘innovation evangelist’ and the point-person for seeking government contracts. Quaid’s last role before her move to Google was as a senior representative of the Director of National Intelligence to the Intelligence, Surveillance, and Reconnaissance Task Force, and a senior advisor to the undersecretary of defense for intelligence’s director of Joint and Coalition Warfighter Support (J&CWS). Both roles involved information operations at their core. Before her Google move, in other words, Quaid worked closely with the Office of the Undersecretary of Defense for Intelligence, to which the Pentagon’s Highlands Forum is subordinate. Quaid has herself attended the Forum, though precisely when and how often I could not confirm.

In March 2012, then DARPA director Regina Dugan — who in that capacity was also co-chair of the Pentagon Highlands Forum — followed her colleague Quaid into Google to lead the company’s new Advanced Technology and Projects Group. During her Pentagon tenure, Dugan led on strategic cyber security and social media, among other initiatives. She was responsible for focusing “an increasing portion” of DARPA’s work “on the investigation of offensive capabilities to address military-specific needs,” securing $500 million of government funding for DARPA cyber research from 2012 to 2017.

Regina Dugan, former head of DARPA and Highlands Forum co-chair, now a senior Google executive — trying her best to look the part

By November 2014, Google’s chief AI and robotics expert James Kuffner was a delegate alongside O’Neill at the Highlands Island Forum 2014 in Singapore, to explore ‘Advancement in Robotics and Artificial Intelligence: Implications for Society, Security and Conflict.’ The event included 26 delegates from Austria, Israel, Japan, Singapore, Sweden, Britain and the US, from both industry and government. Kuffner’s association with the Pentagon, however, began much earlier. In 1997, Kuffner was a researcher during his Stanford PhD for a Pentagon-funded project on networked autonomous mobile robots, sponsored by DARPA and the US Navy.

Dr Nafeez Ahmed is an investigative journalist, bestselling author and international security scholar. A former Guardian writer, he writes the ‘System Shift’ column for VICE’s Motherboard, and is also a columnist for Middle East Eye. He is the winner of a 2015 Project Censored Award for Outstanding Investigative Journalism for his Guardian work.

Nafeez has also written for The Independent, Sydney Morning Herald, The Age, The Scotsman, Foreign Policy, The Atlantic, Quartz, Prospect, New Statesman, Le Monde diplomatique, New Internationalist, Counterpunch, Truthout, among others. He is the author of A User’s Guide to the Crisis of Civilization: And How to Save It (2010), and the scifi thriller novel ZERO POINT, among other books. His work on the root causes and covert operations linked to international terrorism officially contributed to the 9/11 Commission and the 7/7 Coroner’s Inquest.

Nafeez is 120% corroborated by Quartz:

A rich history of the government’s science funding

There was already a long history of collaboration between America’s best scientists and the intelligence community, from the creation of the atomic bomb and satellite technology to efforts to put a man on the moon.

In fact, the internet itself was created because of an intelligence effort: In the 1970s, the agency responsible for developing emerging technologies for military, intelligence, and national security purposes—the Defense Advanced Research Projects Agency (DARPA)—linked four supercomputers to handle massive data transfers. It handed the operations off to the National Science Foundation (NSF) a decade or so later, which proliferated the network across thousands of universities and, eventually, the public, thus creating the architecture and scaffolding of the World Wide Web.

Silicon Valley was no different. By the mid 1990s, the intelligence community was seeding funding to the most promising supercomputing efforts across academia, guiding the creation of efforts to make massive amounts of information useful for both the private sector as well as the intelligence community.

They funded these computer scientists through an unclassified, highly compartmentalized program that was managed for the CIA and the NSA by large military and intelligence contractors. It was called the Massive Digital Data Systems (MDDS) project.

The Massive Digital Data Systems (MDDS) project 

MDDS was introduced to several dozen leading computer scientists at Stanford, CalTech, MIT, Carnegie Mellon, Harvard, and others in a white paper that described what the CIA, NSA, DARPA, and other agencies hoped to achieve. The research would largely be funded and managed by unclassified science agencies like NSF, which would allow the architecture to be scaled up in the private sector if it managed to achieve what the intelligence community hoped for.

“Not only are activities becoming more complex, but changing demands require that the IC [Intelligence Community] process different types as well as larger volumes of data,” the intelligence community said in its 1993 MDDS white paper. “Consequently, the IC is taking a proactive role in stimulating research in the efficient management of massive databases and ensuring that IC requirements can be incorporated or adapted into commercial products. Because the challenges are not unique to any one agency, the Community Management Staff (CMS) has commissioned a Massive Digital Data Systems [MDDS] Working Group to address the needs and to identify and evaluate possible solutions.”

Over the next few years, the program’s stated aim was to provide more than a dozen grants of several million dollars each to advance this research concept. The grants were to be directed largely through the NSF so that the most promising, successful efforts could be captured as intellectual property and form the basis of companies attracting investments from Silicon Valley. This type of public-to-private innovation system helped launch powerful science and technology companies like Qualcomm, Symantec, Netscape, and others, and funded the pivotal research in areas like Doppler radar and fiber optics, which are central to large companies like AccuWeather, Verizon, and AT&T today. Today, the NSF provides nearly 90% of all federal funding for university-based computer-science research.

MIT is but a Pentagon lab

The CIA and NSA's end goal

The research arms of the CIA and NSA hoped that the best computer-science minds in academia could identify what they called "birds of a feather:" Just as geese fly together in large V shapes, or flocks of sparrows make sudden movements together in harmony, they predicted that like-minded groups of humans would move together online. The intelligence community named their first unclassified briefing for scientists the "birds of a feather" briefing, and the "Birds of a Feather Session on the Intelligence Community Initiative in Massive Digital Data Systems" took place at the Fairmont Hotel in San Jose in the spring of 1995.

Their research aim was to track digital fingerprints inside the rapidly expanding global information network, which was then known as the World Wide Web. Could an entire world of digital information be organized so that the requests humans made inside such a network could be tracked and sorted? Could their queries be linked and ranked in order of importance? Could "birds of a feather" be identified inside this sea of information so that communities and groups could be tracked in an organized way?

By working with emerging commercial-data companies, their intent was to track like-minded groups of people across the internet and identify them from the digital fingerprints they left behind, much like forensic scientists use fingerprint smudges to identify criminals. Just as “birds of a feather flock together,” they predicted that potential terrorists would communicate with each other in this new global, connected world—and they could find them by identifying patterns in this massive amount of new information. Once these groups were identified, they could then follow their digital trails everywhere.
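
To make the "birds of a feather" notion concrete, here is a minimal illustrative sketch in Python, written for this article rather than taken from MDDS, the Quartz piece or anything Google published: it simply flags pairs of hypothetical users whose search histories overlap beyond an arbitrary cutoff. The user names, queries and threshold are all invented.

from itertools import combinations

# Illustrative sketch only: grouping "like-minded" users by overlapping search
# queries. The users, queries and threshold below are invented for this example;
# none of this comes from MDDS, the quoted articles or Google.
query_logs = {
    "user_a": {"model rockets", "fertilizer prices", "flight schools"},
    "user_b": {"fertilizer prices", "flight schools", "crop yields"},
    "user_c": {"sourdough starter", "knitting patterns"},
}

def jaccard(a, b):
    """Share of queries two users have in common (0 = none, 1 = identical)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Flag pairs whose query histories overlap beyond an arbitrary cutoff: the
# crude essence of spotting "birds of a feather" in transactional data.
THRESHOLD = 0.4
for (u, qu), (v, qv) in combinations(query_logs.items(), 2):
    score = jaccard(qu, qv)
    if score >= THRESHOLD:
        print(f"{u} and {v} look like birds of a feather (overlap {score:.2f})")

Real community detection works on far richer signals (links, co-visits, timing and more), but the underlying logic of grouping people by the overlap in their digital trails is the same.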

Sergey Brin and Larry Page, computer-science boy wonders 

In 1995, one of the first and most promising MDDS grants went to a computer-science research team at Stanford University with a decade-long history of working with NSF and DARPA grants. The primary objective of this grant was “query optimization of very complex queries that are described using the ‘query flocks’ approach.” A second grant—the DARPA-NSF grant most closely associated with Google’s origin—was part of a coordinated effort to build a massive digital library using the internet as its backbone. Both grants funded research by two graduate students who were making rapid advances in web-page ranking, as well as tracking (and making sense of) user queries: future Google cofounders Sergey Brin and Larry Page.

The research by Brin and Page under these grants became the heart of Google: people using search functions to find precisely what they wanted inside a very large data set. The intelligence community, however, saw a slightly different benefit in their research: Could the network be organized so efficiently that individual users could be uniquely identified and tracked?

This process is perfectly suited for the purposes of counter-terrorism and homeland security efforts: Human beings and like-minded groups who might pose a threat to national security can be uniquely identified online before they do harm. This explains why the intelligence community found Brin’s and Page’s research efforts so appealing; prior to this time, the CIA largely used human intelligence efforts in the field to identify people and groups that might pose threats. The ability to track them virtually (in conjunction with efforts in the field) would change everything.

It was the beginning of what in just a few years’ time would become Google. The two intelligence-community managers charged with leading the program met regularly with Brin as his research progressed, and he was an author on several other research papers that resulted from this MDDS grant before he and Page left to form Google.

The grants allowed Brin and Page to do their work and contributed to their breakthroughs in web-page ranking and tracking user queries. Brin didn’t work for the intelligence community—or for anyone else. Google had not yet been incorporated. He was just a Stanford researcher taking advantage of the grant provided by the NSA and CIA through the unclassified MDDS program.

Left out of Google's story

The MDDS research effort has never been part of Google’s origin story, even though the principal investigator for the MDDS grant specifically named Google as directly resulting from their research: “Its core technology, which allows it to find pages far more accurately than other search engines, was partially supported by this grant,” he wrote. In a published research paper that includes some of Brin’s pivotal work, the authors also reference the NSF grant that was created by the MDDS program.

Instead, every Google creation story mentions just one federal grant: the NSF/DARPA "digital libraries" grant, which was designed to allow Stanford researchers to search the entire World Wide Web stored on the university's servers at the time. "The development of the Google algorithms was carried on a variety of computers, mainly provided by the NSF-DARPA-NASA-funded Digital Library project at Stanford," Stanford's Infolab says of its origin, for example. NSF likewise references only the digital libraries grant, not the MDDS grant, in its own history of Google's origin. In the famous research paper, "The Anatomy of a Large-Scale Hypertextual Web Search Engine," which describes the creation of Google, Brin and Page thanked the NSF and DARPA for their digital library grant to Stanford. But the grant from the intelligence community's MDDS program — specifically designed for the breakthrough that Google was built upon — has faded into obscurity.
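
For readers who want a feel for what the ranking idea in the "Anatomy" paper actually does, here is a deliberately simplified, textbook-style PageRank sketch in Python. It illustrates the general technique only; it is not Google's code and is not drawn from any of the grants or papers discussed above, and the toy link graph, damping factor and tolerance are arbitrary values chosen for the example.

# Simplified, textbook-style PageRank: repeatedly pass each page's score along
# its outgoing links until the scores stop changing. Toy data and parameters
# are invented for illustration; this is not Google's implementation.
def pagerank(links, damping=0.85, tol=1e-6, max_iter=100):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start every page with an equal score
    for _ in range(max_iter):
        new_rank = {}
        for p in pages:
            # Sum the score flowing into p from every page that links to it.
            incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
            new_rank[p] = (1 - damping) / n + damping * incoming
        if max(abs(new_rank[p] - rank[p]) for p in pages) < tol:
            return new_rank  # scores have converged
        rank = new_rank
    return rank

# Toy web: "a" links to "b" and "c", "b" links to "c", "c" links back to "a".
toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(toy_web))  # "c" comes out on top in this tiny graph

The key idea is that a page's score depends on the scores of the pages linking to it, computed by repeatedly passing scores along the links until they settle.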

Google has said in the past that it was not funded or created by the CIA. For instance, when stories circulated in 2006 that Google had received funding from the intelligence community for years to assist in counter-terrorism efforts, the company told Wired magazine founder John Battelle, “The statements related to Google are completely untrue.”

Did the CIA directly fund the work of Brin and Page, and therefore create Google? No. But were Brin and Page researching precisely what the NSA, the CIA, and the intelligence community hoped for, assisted by their grants? Absolutely.

To understand this significance, you have to consider what the intelligence community was trying to achieve as it seeded grants to the best computer-science minds in academia: The CIA and NSA funded an unclassified, compartmentalized program designed from its inception to spur the development of something that looks almost exactly like Google. Brin’s breakthrough research on page ranking by tracking user queries and linking them to the many searches conducted—essentially identifying “birds of a feather”—was largely the aim of the intelligence community’s MDDS program. And Google succeeded beyond their wildest dreams.

The intelligence community's enduring legacy within Silicon Valley

Digital privacy concerns over the intersection between the intelligence community and commercial technology giants have grown in recent years. But most people still don’t understand the degree to which the intelligence community relies on the world’s biggest science and tech companies for its counter-terrorism and national-security work.

Civil-liberty advocacy groups have aired their privacy concerns for years, especially as they now relate to the Patriot Act. “Hastily passed 45 days after 9/11 in the name of national security, the Patriot Act was the first of many changes to surveillance laws that made it easier for the government to spy on ordinary Americans by expanding the authority to monitor phone and email communications, collect bank and credit reporting records, and track the activity of innocent Americans on the Internet,” says the ACLU. “While most Americans think it was created to catch terrorists, the Patriot Act actually turns regular citizens into suspects.”

When asked, the biggest technology and communications companies—from Verizon and AT&T to Google, Facebook, and Microsoft—say that they never deliberately and proactively offer up their vast databases on their customers to federal security and law enforcement agencies: They say that they only respond to subpoenas or requests that are filed properly under the terms of the Patriot Act.

But even a cursory glance through recent public records shows that there is a treadmill of constant requests that could undermine the intent behind this privacy promise. According to the data-request records that the companies make available to the public, in the most recent reporting period between 2016 and 2017, local, state and federal government authorities seeking information related to national security, counter-terrorism or criminal concerns issued more than 260,000 subpoenas, court orders, warrants, and other legal requests to Verizon, more than 250,000 such requests to AT&T, and nearly 24,000 subpoenas, search warrants, or court orders to Google. Direct national security or counter-terrorism requests are a small fraction of this overall group of requests, but the Patriot Act legal process has now become so routinized that the companies each have a group of employees who simply take care of the stream of requests.

In this way, the collaboration between the intelligence community and big, commercial science and tech companies has been wildly successful. When national security agencies need to identify and track people and groups, they know where to turn – and do so frequently. That was the goal in the beginning. It has succeeded perhaps more than anyone could have imagined at the time.


Fast-forward to 2020

From DARPA to Google: How the Military Kickstarted AV Development

 27 Feb 2020


The Stanford Racing Team

by Arrow Mag, Feb 2020

Sebastian Thrun had been entertaining the idea of self-driving cars for many years. Born and raised in Germany, he was fascinated with the power and performance of German cars. Things changed in 1986, when he was 18: his best friend died in a car crash because the driver, another friend, was going too fast in his new Audi Quattro.

As a student at the University of Bonn, Thrun developed several autonomous robotic systems that earned him international recognition. At the time, Thrun was convinced that self-driving cars would soon make transportation safer, avoiding crashes like the one that took his friend’s life.

In 1998, he became an assistant professor and co-director of the Robot Learning Laboratory at Carnegie Mellon University. In July 2003, Thrun left Carnegie Mellon for Stanford University, soon after the first DARPA Grand Challenge was announced. Before accepting the new position, he asked Red Whittaker, the leader of the CMU robotics department, to join the team developing the vehicle for the DARPA race. Whittaker declined. After moving to California, Thrun joined the Stanford Racing Team.

On Oct. 8, 2005, the Stanford Racing Team won $2 million for being the first team to complete the 132-mile DARPA Grand Challenge course in California’s Mojave Desert. Their robot car, “Stanley,” finished in just under 6 hours and 54 minutes and averaged over 19 mph on the course.

Google’s Page wanted to develop self-driving cars

Two years after the third Grand Challenge, Google co-founder Larry Page called Thrun, wanting to turn the experience of the DARPA races into a product for the masses.

When Page first approached Thrun about building a self-driving car that people could use on the real roads, Thrun told him it couldn’t be done.

But Page had a vision, and he would not abandon his quest for an autonomous vehicle.

Thrun recalled that a short time later, Page came back to him and said, “OK, you say it can’t be done. You’re the expert. I trust you. So I can explain to Sergey [Brin] why it can’t be done, can you give me a technical reason why it can’t be done?”

Finally, Thrun accepted Page's offer and, in 2009, started Project Chauffeur, the internal codename for what became the Google self-driving car project.

The Google 101,000-Mile Challenge

To develop the technology for Google's self-driving car, Thrun called Chris Urmson, a veteran of Carnegie Mellon's DARPA Challenge teams, and offered him the position of chief technical officer of the project.

To encourage the team to build a vehicle, and its systems, to drive on any public road, Page created two challenges, with big cash rewards for the entire team: a 1,000-mile challenge to show that Project Chauffeur’s car could drive in several situations, including highways and the streets of San Francisco, and another 100,000-mile challenge to show that driverless cars could be a reality in a few years.

By the middle of 2011, Project Chauffeur engineers completed the two challenges.

In 2016, the Google self-driving car project became Waymo, a “spinoff under Alphabet as a self-driving technology company with a mission to make it safe and easy for people and things to move around.”

Urmson led Google’s self-driving car project for nearly eight years. Under his leadership, Google vehicles accumulated 1.8 million miles of test driving.

In 2018, Waymo One, the first fully self-driving vehicle taxi service, began in Phoenix, Arizona.

From Waymo to Aurora

In 2016, after finishing development of the production-ready version of Waymo's self-driving technology, Urmson left Google to start Aurora Innovation, a startup backed by Amazon that aims to provide a full-stack solution for self-driving vehicles.

Urmson believes that in 20 years, we’ll see much of the transportation infrastructure move over to automation. – Arrow.com

TO BE CONTINUED

Here’s a peek into the next episode:

Facebook Hired a Former DARPA Head To Lead An Ambitious New Research Lab

Source: TIME | by VICTOR LUCKERSON

If you need another sign that Facebook’s world-dominating ambitions are just getting started, here’s one: the Menlo Park, Calif. company has hired a former DARPA chief to lead its new research lab.

Facebook CEO Mark Zuckerberg announced April 14 that Regina Dugan will guide Building 8, a new research group developing hardware projects that advance the company’s efforts in virtual reality, augmented reality, artificial intelligence and global connectivity.

Dugan served as the head of the Pentagon's Defense Advanced Research Projects Agency from 2009 to 2012. Most recently, she led Google's Advanced Technology and Projects Lab, a highly experimental arm of the company responsible for developing new hardware and software products on a strict two-year timetable.


