Graphene is the new asbestos. Plus injectable and mandatory.
The rest of the graphene oxide story is here, if you need more background; this post is the result of that investigation.

NOTE: A clarification requested by some readers:
Yes, we knew of GRAPHENE COATING on masks in May, as seen below, which is horrible enough, even more so since not many followed Canada’s example in banning it.
What this article adds is a confirmation for GRAPHENE OXIDE, which is not very different in properties and health impact, but seems to be specific to these mRNA jabs; with this we complete the new revelations on graphene oxide and vaccines from La Quinta Columna.

OOPS!

The World’s First Anti-Coronavirus Surgical Mask by Wakamono

By Dr. Priyom Bose, Ph.D. Sep 30 2020


In December 2019, a novel coronavirus (SARS-CoV-2) was first detected in Wuhan, in China’s Hubei province. On 11 March 2020, the World Health Organization (WHO) characterized the outbreak as a pandemic, owing to the rapid spread of the virus across the globe, infecting millions of individuals. Scientists are working tirelessly to find ways to curb the spread of the virus and eradicate it.

SARS-CoV-2 is regarded as highly contagious and spreads rapidly through person-to-person contact. When an infected person sneezes or coughs, their respiratory droplets can easily infect a healthy individual. Besides enforcing social distancing, common citizens are encouraged to wear face masks to prevent droplets from spreading through the air and infecting others.

Although the N95 respirator, a respiratory protective device, can filter out 95% of particles (≥0.3 μm), such facemasks are single-use, expensive, and often ill-fitting, which significantly reduces their effectiveness. Nanoscience researchers have envisioned a new respirator facemask that would be highly efficient, recyclable, customizable, reusable, and have antimicrobial and antiviral properties.

Nanotechnology in the Production of Surgical Masks

Nanoparticles are extensively used for their novel properties in various fields of science and technology.

In the current pandemic situation, scientists have adopted this technology to produce the most efficient masks. Researchers have used a novel electrospinning technology in the production of nanofiber membranes. These membranes are designed so that properties such as fiber diameter, porosity ratio, and other microstructural factors can be controlled to produce high-quality face masks. Researchers in Egypt have developed face masks using nanotechnology with the help of the following components:

Polylactic acid

This transparent polymeric material is derived from starch and carbohydrate. It has high elasticity and is biodegradable. Researchers found that electrospun polylactic acid membranes possess high prospects for the production of filters efficient in the isolation of environmental pollutants, such as atmospheric aerosol and submicron particulates dispersed in the air.

Despite its various biomedical applications (implant prostheses, catheters, tissue scaffolds, etc.), these polylactic membranes are brittle. Therefore, applying frequent pressure during their usage could produce cracks that would make them permeable to viral particles. However, this mechanical drawback can be fixed using other supportive nanoparticles that could impart mechanical strength, antimicrobial and antiviral properties, which are important in making face masks effective in the current pandemic situation.

Copper oxide nanoparticles

These nanoparticles have many biomedical applications, for example, infection control, as they can inhibit the growth of microorganisms (fungi, bacteria) and viruses. It has also been reported that SARS-CoV-2 has lower stability on the metallic copper surface than other materials, such as plastic or stainless steel. Therefore, the integration of copper oxide nanoparticles in a nanofibrous polymeric filtration system would significantly prevent microbial adherence onto the membrane.  

Graphene oxide nanoparticles

These nanoparticles possess exceptional properties, such as high toughness, superior electrical conductivity, biocompatibility, and antiviral and antibacterial activity. Such nanoparticles could be utilized in the production of masks.

Cellulose acetate

This is a semi-synthetic polymer derived from cellulose. It is used in ultrafiltration because of its biocompatibility, high selectivity, and low cost. It is also used in protective clothing, tissue engineering, and nanocomposite applications.

With the help of the aforesaid components, researchers in Egypt have designed a novel respirator filter mask against SARS-CoV-2. The mask is based on a disposable filter piece composed of nonwoven nanofibers comprising multilayers of either a) copper oxide nanoparticles, graphene oxide nanoparticles, and polylactic acid, or b) copper oxide nanoparticles, graphene oxide nanoparticles, and cellulose acetate, produced using electrospinning technology and high-power ultrasonication. These facemasks are reusable, i.e., washable in water, and can be sterilized using an ultraviolet lamp (λ = 250 nm).

SOURCE
WORKING TO GET CONFIRMATION FROM THESE GUYS TOO
SOURCE

Graphene-coated face masks: COVID-19 miracle or another health risk?

by C. Michael White, The Conversation


As a COVID-19 and medical device researcher, I understand the importance of face masks to prevent the spread of the coronavirus. So I am intrigued that some mask manufacturers have begun adding graphene coatings to their face masks to inactivate the virus. Many viruses, fungi and bacteria are incapacitated by graphene in laboratory studies, including feline coronavirus.

Because SARS-CoV-2, the coronavirus that causes COVID-19, can survive on the outer surface of a face mask for days, people who touch the mask and then rub their eyes, nose, or mouth may risk getting COVID-19. So these manufacturers seem to be reasoning that graphene coatings on their reusable and disposable face masks will add some anti-virus protection. But in March, the Quebec provincial government removed these masks from schools and daycare centers after Health Canada, Canada’s national public health agency, warned that inhaling the graphene could lead to asbestos-like lung damage.

Is this move warranted by the facts, or an over-reaction? To answer that question, it can help to know more about what graphene is, how it kills microbes, including the SARS-CoV-2 virus, and what scientists know so far about the potential health impacts of breathing in graphene.

How does graphene damage viruses, bacteria and human cells?

Graphene is a thin but strong and conductive two-dimensional sheet of carbon atoms. There are three ways that it can help prevent the spread of microbes:

  • Microscopic graphene particles have sharp edges that mechanically damage viruses and cells as they pass by them.
  • Graphene is negatively charged with highly mobile electrons that electrostatically trap and inactivate some viruses and cells.
  • Graphene causes cells to generate oxygen free radicals that can damage them and impair their cellular metabolism.
Dr Joe Schwarcz explains why Canada banned graphene masks. Doesn’t say why other countries didn’t. When two governments have opposing views on a poison, one is criminally wrong and someone has to pay.

Why graphene may be linked to lung injury

Researchers have been studying the potential negative impacts of inhaling microscopic graphene on mammals. In one 2016 experiment, mice with graphene placed in their lungs experienced localized lung tissue damage, inflammation, formation of granulomas (where the body tries to wall off the graphene), and persistent lung injury, similar to what occurs when humans inhale asbestos. A different study from 2013 found that when human cells were bound to graphene, the cells were damaged.

In order to mimic human lungs, scientists have developed biological models designed to simulate the impact of high concentration aerosolized graphene—graphene in the form of a fine spray or suspension in air—on industrial workers. One such study published in March 2020 found that a lifetime of industrial exposure to graphene induced inflammation and weakened the simulated lungs’ protective barrier.

It’s important to note that these models are not perfect options for studying the dramatically lower levels of graphene inhaled from a face mask, but researchers have used them in the past to learn more about these sorts of exposures. A study from 2016 found that a small portion of aerosolized graphene nanoparticles could move down simulated mouth and nose passages and penetrate into the lungs. A 2018 study found that brief exposure to a lower amount of aerosolized graphene did not notably damage lung cells in a model.

From my perspective as a researcher, this trio of findings suggests that a little bit of graphene in the lungs is likely OK, but a lot is dangerous.

Although it might seem obvious to compare inhaling graphene to the well-known harms of breathing in asbestos, the two substances behave differently in one key way. The body’s natural system for disposing of foreign particles cannot remove asbestos, which is why long-term exposure to asbestos can lead to the cancer mesothelioma. But in studies using mouse models to measure the impact of high-dose lung exposure to graphene, the body’s natural disposal system does remove the graphene, although removal occurs very slowly, over 30 to 90 days.

The findings of these studies shed light on the possible health impacts of breathing in microscopic graphene in either small or large doses. However, these models don’t reflect the full complexity of human experiences. So the strength of the evidence about either the benefit of wearing a graphene mask, or the harm of inhaling microscopic graphene as a result of wearing it, is very weak.

No obvious benefit but theoretical risk

Graphene is an intriguing scientific advance that may speed up the demise of COVID-19 virus particles on a face mask. In exchange for this unknown level of added protection, there is a theoretical risk that breathing through a graphene-coated mask will liberate graphene particles that make it through the other filter layers on the mask and penetrate into the lung. If inhaled, the body may not remove these particles rapidly enough to prevent lung damage.

The health department in Quebec is erring on the side of caution. Children are at very low risk of COVID-19 mortality or hospitalization, although they may infect others, so for them the theoretical risk from graphene exposure is too great. However, adults at high immediate risk of harm from contracting COVID-19 may choose to accept a small theoretical risk of long-term lung damage from graphene in exchange for these potential benefits.

Our work and existence, as media and people, is funded solely by our most generous readers, and we want to keep it this way.
We hardly made it before, but this summer something’s going on: our audience stats show bizarre patterns, we’re severely under our estimates, and the last savings are gone. We’re not your responsibility, but if you find enough benefit in this work…
Help SILVIEW.media survive and grow, please donate here, anything helps. Thank you!

! Articles can always be subject to later editing as a way of perfecting them

“The biggest conspiracies happen in open sight” – Edward Snowden

Segment taken from this show

The Development, Concepts and Doctrine Centre (DCDC) has worked in partnership with the German Bundeswehr Office for Defence Planning to understand the future implications of human augmentation (HA), setting the foundation for more detailed Defence research and development.

The project incorporates research from German, Swedish, Finnish and UK Defence specialists to understand how emerging technologies such as genetic engineering, bioinformatics and the possibility of brain-computer interfaces could affect the future of society, security and Defence. The ethical, moral and legal challenges are complex and must be thoroughly considered, but HA could signal the coming of a new era of strategic advantage with possible implications across the force development spectrum.

HA technologies provide a broad range of opportunities for today and for the future. There are mature technologies that could be integrated today with manageable policy considerations, such as personalised nutrition, wearables and exoskeletons. Other technologies, such as genetic engineering and brain-computer interfaces, promise even bigger potential in the future. The ethical, moral and legal implications of HA are hard to foresee, but early and regular engagement with these issues lies at the heart of success.

HA will become increasingly relevant in the future because it is the binding agent between the unique skills of humans and machines. The winners of future wars will not be those with the most advanced technology, but those who can most effectively integrate the unique skills of both human and machine.

The growing significance of human-machine teaming is already widely acknowledged but this has so far been discussed from a technology-centric perspective. This HA project represents the missing part of the puzzle.

Disclaimer

The content of this publication does not represent the official policy or strategy of the UK government or that of the UK’s Ministry of Defence (MOD).

Furthermore, the analysis and findings do not represent the official policy or strategy of the countries contributing to the project.

It does, however, represent the view of the Development, Concepts and Doctrine Centre (DCDC), a department within the UK MOD, and the Bundeswehr Office for Defence Planning (BODP), a department within the German Federal Ministry of Defence. It is based on combining current knowledge and wisdom from subject matter experts with assessments of potential progress in technologies 30 years out, supporting deliberations and deductions about future humans and society. Published 13 May 2021 – UK DEFENSE WEBSITE

That disclaimer is a load of bollocks that means nothing, really, but shields the Ministry from some legal liability, just in case. You can totally ignore it. – Silview.media

GERMAN DEFENSE WEBSITE

People commented on that artist’s rendition: “They replaced the hand of God with a robotic one”. I answered: “No, they replaced your hand. Read up!”

Meanwhile, in Canada:

SOURCE

The US Department of Defense has something similar going on, but it doesn’t target the general population in its presentations. However, if you type “DARPA” into our search utility, you’ll find out the DoD has been going in the same direction for decades.

DOWNLOAD PDF

If you’ve been around for a while, this should come as no surprise. The numbers in the headline below are now outdated, but not the info.

SOURCE

At least the US has the decency to pretend these are for military use only. I know they are all meant to be used on the general population, but I don’t know of any other open admission of civilian use before.

DEMOCRACY? WE’RE OFFICIALLY 15 MONTHS INTO THE 4TH INDUSTRIAL REVOLUTION AND YOUR GOVERNMENT TOLD YOU NOTHING

This…

… perfectly overlaps with this:

Does this guy shock you that much now, or does he fall in line like the perfect Tetris piece that he is, “another brick in the wall”?

Now remember mRNA therapies are “information therapies” and these injections are the perfect tools for achieving the above goals.

Anyone remember the plebs ever being consulted on their future evolution, or are they just SUBJECTED to it, like slaves to selective breeding?!

You read this because some of my readers are generous enough to help us survive, and at least as hungry for truth as we are, basically the best readers I could hope for. Such as Corinne, whom we should thank for pulling my sleeve about this one! If you’re on Gab (which you should be), follow her, she has tons of great info to share every day!

DEVELOPING STORY, TO BE CONTINUED, SO BE BACK HERE SOON

ALSO READ: BOMBSHELL! 5G NETWORK TO WIRELESSLY POWER DEVICES. GUESS WHAT IT CAN DO TO NANOTECH (DARPA-FINANCED)

OBAMA, DARPA, GSK AND ROCKEFELLER’S $4.5B B.R.A.I.N. INITIATIVE – BETTER SIT WHEN YOU READ


For years, the Pentagon tried to convince the public that it was working on your dream secretary. Can you believe that?
Funny how much those plans looked just like today’s Google and Facebook. But it’s not just the looks, it’s also the money, the timeline and the personal connections.
Funnier how the funding scheme was often similar to the one used for Wuhan, with proxy organizations used as middlemen.

WIRED 05.20.2003

A Spy Machine of DARPA’s Dreams

IT’S A MEMORY aid! A robotic assistant! An epidemic detector! An all-seeing, ultra-intrusive spying program!

The Pentagon is about to embark on a stunningly ambitious research project designed to gather every conceivable bit of information about a person’s life, index all the information and make it searchable.

What national security experts and civil libertarians want to know is, why would the Defense Department want to do such a thing?

The embryonic LifeLog program would dump everything an individual does into a giant database: every e-mail sent or received, every picture taken, every Web page surfed, every phone call made, every TV show watched, every magazine read.

All of this — and more — would combine with information gleaned from a variety of sources: a GPS transmitter to keep tabs on where that person went, audio-visual sensors to capture what he or she sees or says, and biomedical monitors to keep track of the individual’s health.

This gigantic amalgamation of personal information could then be used to “trace the ‘threads’ of an individual’s life,” to see exactly how a relationship or events developed, according to a briefing from the Defense Advanced Research Projects Agency (DARPA), LifeLog’s sponsor.

Someone with access to the database could “retrieve a specific thread of past transactions, or recall an experience from a few seconds ago or from many years earlier … by using a search-engine interface.”
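To make that “thread retrieval” idea concrete, here is a minimal, purely hypothetical sketch of what a search-engine-style query over a personal event log could look like. The schema, field names and events are invented for illustration; nothing here is taken from DARPA’s actual LifeLog design.

```python
from datetime import datetime

# One record per captured event; the schema is invented for illustration only.
lifelog = [
    {"time": datetime(2003, 5, 20, 8, 15), "kind": "gps",      "text": "walk to the corner store"},
    {"time": datetime(2003, 5, 20, 8, 20), "kind": "purchase", "text": "bagel and newspaper"},
    {"time": datetime(2003, 5, 20, 8, 40), "kind": "call",     "text": "phone call to mother"},
]

def search_thread(log, query):
    """Return matching events in chronological order, like a search-engine hit list."""
    hits = [event for event in log if query.lower() in event["text"].lower()]
    return sorted(hits, key=lambda event: event["time"])

for event in search_thread(lifelog, "bagel"):
    print(event["time"], event["kind"], event["text"])
```

The point is simply that once heterogeneous events (GPS fixes, purchases, calls) share one timestamped store, pulling out a “thread of past transactions” is an ordinary database query.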

On the surface, the project seems like the latest in a long line of DARPA’s “blue sky” research efforts, most of which never make it out of the lab. But DARPA is currently asking businesses and universities for research proposals to begin moving LifeLog forward. And some people, such as Steven Aftergood, a defense analyst with the Federation of American Scientists, are worried.

With its controversial Total Information Awareness database project, DARPA already is planning to track all of an individual’s “transactional data” — like what we buy and who gets our e-mail.

While the parameters of the project have not yet been determined, Aftergood said he believes LifeLog could go far beyond TIA’s scope, adding physical information (like how we feel) and media data (like what we read) to this transactional data.

“LifeLog has the potential to become something like ‘TIA cubed,'” he said.

In the private sector, a number of LifeLog-like efforts already are underway to digitally archive one’s life — to create a “surrogate memory,” as minicomputer pioneer Gordon Bell calls it.

Bell, now with Microsoft, scans all his letters and memos, records his conversations, saves all the Web pages he’s visited and e-mails he’s received and puts them into an electronic storehouse dubbed MyLifeBits.

DARPA’s LifeLog would take this concept several steps further by tracking where people go and what they see.

That makes the project similar to the work of University of Toronto professor Steve Mann. Since his teen years in the 1970s, Mann, a self-styled “cyborg,” has worn a camera and an array of sensors to record his existence. He claims he’s convinced 20 to 30 of his current and former students to do the same. It’s all part of an experiment into “existential technology” and “the metaphysics of free will.”

DARPA isn’t quite so philosophical about LifeLog. But the agency does see some potential battlefield uses for the program.

“The technology could allow the military to develop computerized assistants for war fighters and commanders that can be more effective because they can easily access the user’s past experiences,” DARPA spokeswoman Jan Walker speculated in an e-mail.

It also could allow the military to develop more efficient computerized training systems, she said: Computers could remember how each student learns and interacts with the training system, then tailor the lessons accordingly.

John Pike, director of defense think tank GlobalSecurity.org, said he finds the explanations “hard to believe.”

“It looks like an outgrowth of Total Information Awareness and other DARPA homeland security surveillance programs,” he added in an e-mail.

Sure, LifeLog could be used to train robotic assistants. But it also could become a way to profile suspected terrorists, said Cory Doctorow, with the Electronic Frontier Foundation. In other words, Osama bin Laden’s agent takes a walk around the block at 10 each morning, buys a bagel and a newspaper at the corner store and then calls his mother. You do the same things — so maybe you’re an al Qaeda member, too!

“The more that an individual’s characteristic behavior patterns — ‘routines, relationships and habits’ — can be represented in digital form, the easier it would become to distinguish among different individuals, or to monitor one,” Aftergood, the Federation of American Scientists analyst, wrote in an e-mail.
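As a rough illustration of the profiling concern Doctorow and Aftergood describe, the sketch below encodes daily routines as simple numeric feature vectors and compares them with cosine similarity. The features and values are invented; this only shows how “characteristic behavior patterns in digital form” make individuals easy to compare, not how any real system works.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Invented routine features: [10 a.m. walk, buys bagel, buys newspaper, calls mother]
suspect  = [1, 1, 1, 1]
citizen  = [1, 1, 1, 1]
stranger = [0, 1, 0, 0]

print(cosine(suspect, citizen))    # 1.0  -- identical routine, flagged as a "match"
print(cosine(suspect, stranger))   # 0.5  -- different routine, lower similarity
```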

In its LifeLog report, DARPA makes some nods to privacy protection, like when it suggests that “properly anonymized access to LifeLog data might support medical research and the early detection of an emerging epidemic.”

But before these grand plans get underway, LifeLog will start small. Right now, DARPA is asking industry and academics to submit proposals for 18-month research efforts, with a possible 24-month extension. (DARPA is not sure yet how much money it will sink into the program.)

The researchers will be the centerpiece of their own study.

Like a game show, winning this DARPA prize eventually will earn the lucky scientists a trip for three to Washington, D.C. Except on this excursion, every participating scientist’s e-mail to the travel agent, every padded bar bill and every mad lunge for a cab will be monitored, categorized and later dissected.

WIRED 07.14.2003

Pentagon Alters LifeLog Project

By Noah Shachtman.

Bending a bit to privacy concerns, the Pentagon changes some of the experiments to be conducted for LifeLog, its effort to record every tidbit of information and encounter in daily life. No video recording of unsuspecting people, for example.

MONDAY IS THE deadline for researchers to submit bids to build the Pentagon’s so-called LifeLog project, an experiment to create an all-encompassing über-diary.

But while teams of academics and entrepreneurs are jostling for the 18- to 24-month grants to work on the program, the Defense Department has changed the parameters of the project to respond to a tide of privacy concerns.

LifeLog is the Defense Advanced Research Projects Agency’s effort to gather every conceivable element of a person’s life, dump it all into a database, and spin the information into narrative threads that trace relationships, events and experiences.

It’s an attempt, some say, to make a kind of surrogate, digitized memory.

“My father was a stroke victim, and he lost the ability to record short-term memories,” said Howard Shrobe, an MIT computer scientist who’s leading a team of professors and researchers in a LifeLog bid. “If you ever saw the movie Memento, he had that. So I’m interested in seeing how memory works after seeing a broken one. LifeLog is a chance to do that.”

Researchers who receive LifeLog grants will be required to test the system on themselves. Cameras will record everything they do during a trip to Washington, D.C., and global-positioning satellite locators will track where they go. Biomedical sensors will monitor their health. All the e-mail they send, all the magazines they read, all the credit card payments they make will be indexed and made searchable.

By capturing experiences, Darpa claims that LifeLog could help develop more realistic computerized training programs and robotic assistants for battlefield commanders.

Defense analysts and civil libertarians, on the other hand, worry that the program is another piece in an ongoing Pentagon effort to keep tabs on American citizens. LifeLog could become the ultimate profiling tool, they fear.

A firestorm of criticism ignited after LifeLog first became public in May. Some potential bidders for the LifeLog contract dropped out as a result.

“I’m interested in LifeLog, but I’m going to shy away from it,” said Les Vogel, a computer science researcher in Maui, Hawaii. “Who wants to get in the middle of something that gets that much bad press?”

New York Times columnist William Safire noted that while LifeLog researchers might be comfortable recording their lives, the people that the LifeLoggers are “looking at, listening to, sniffing or conspiring with to blow up the world” might not be so thrilled about turning over some of their private interchanges to the Pentagon.

In response, Darpa changed the LifeLog proposal request. Now: “LifeLog researchers shall not capture imagery or audio of any person without that person’s a priori express permission. In fact, it is desired that capture of imagery or audio of any person other than the user be avoided even if a priori permission is granted.”

Steven Aftergood, with the Federation of American Scientists, sees the alterations as evidence that Darpa proposals must receive a thorough public vetting.

“Darpa doesn’t spontaneously modify their programs in this way,” he said. “It requires public criticism. Give them credit, however, for acknowledging public concerns.”

But not too much, said John Pike, director of GlobalSecurity.org.

“Darpa adds these contractual provisions to appear to be above suspicion,” Pike said. “But if you can put them in, you can take them out.”

WIRED 07.29.2003

Helping Machines Think Different

By Noah Shachtman.

While the Pentagon’s project to record and catalog a person’s life scares privacy advocates, researchers see it as a step in the process of getting computers to think like humans.

TO PENTAGON RESEARCHERS, capturing and categorizing every aspect of a person’s life is only the beginning.

LifeLog — the controversial Defense Department initiative to track everything about an individual — is just one step in a larger effort, according to a top Pentagon research director. Personalized digital assistants that can guess our desires should come first. And then, just maybe, we’ll see computers that can think for themselves.

Computer scientists have dreamed for decades of building machines with minds of their own. But these hopes have been overwhelmed again and again by the messy, dizzying complexities of the real world.

In recent months, the Defense Advanced Research Projects Agency has launched a series of seemingly disparate programs — all designed, the agency says, to help computers deal with the complexities of life, so they finally can begin to think.

“Our ultimate goal is to build a new generation of computer systems that are substantially more robust, secure, helpful, long-lasting and adaptive to their users and tasks. These systems will need to reason, learn and respond intelligently to things they’ve never encountered before,” said Ron Brachman, the recently installed chief of Darpa’s Information Processing Technology Office, or IPTO. A former senior executive at AT&T Labs, Brachman was elected president of the American Association for Artificial Intelligence last year.

LifeLog is the best-known of these projects. The controversial program intends to record everything about a person — what he sees, where he goes, how he feels — and dump it into a database. Once captured, the information is supposed to be spun into narrative threads that trace relationships, events and experiences.

For years, researchers have been able to get programs to make sense of limited, tightly circumscribed situations. Navigating outside of the lab has been much more difficult. Until recently, even getting a robot to walk across the room on its own was a tricky task.

“LifeLog is about forcing computers into the real world,” said leading artificial intelligence researcher Doug Lenat, who’s bidding on the project.

What LifeLog is not, Brachman asserts, is a program to track terrorists. By capturing so much information about an individual, and by combing relationships and traits out of that data, LifeLog appears to some civil libertarians to be an almost limitless tool for profiling potential enemies of the state. Concerns over the Terrorism Information Awareness database effort have only heightened sensitivities.

“These technologies developed by the military have obvious, easy paths to Homeland Security deployments,” said Lee Tien, with the Electronic Frontier Foundation.

Brachman said it is “up to military leaders to decide how to use our technology in support of their mission,” but he repeatedly insisted that IPTO has “absolutely no interest or intention of using any of our technology for profiling.”

What Brachman does want to do is create a computerized assistant that can learn about the habits and wishes of its human boss. And the first step toward this goal is for machines to start seeing, and remembering, life like people do.

Human beings don’t dump their experiences into some formless database or tag them with a couple of keywords. They divide their lives into discrete installments — “college,” “my first date,” “last Thursday.” Researchers call this “episodic memory.”

LifeLog is about trying to install episodic memory into computers, Brachman said. It’s about getting machines to start “remembering experiences in the commonsensical way we do — a vacation in Bermuda, a taxi ride to the airport.”

IPTO recently handed out $29 million in research grants to create a Perceptive Assistant that Learns, or PAL, that can draw on these episodes and improve itself in the process. If people keep missing conferences during rush hour, PAL should learn to schedule meetings when traffic isn’t as thick. If PAL’s boss keeps sending angry notes to spammers, the software secretary eventually should just start flaming on its own.
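A toy sketch of that kind of learning from episodes might look like the following: the assistant tallies past meeting episodes by hour and avoids hours with a high miss rate. The field names, data and threshold are assumptions made up for illustration, not part of DARPA’s actual PAL design.

```python
from collections import defaultdict

# Invented "episodes": past meetings, the hour they were scheduled, and whether they were missed.
episodes = [
    {"hour": 9,  "missed": True},    # rush hour
    {"hour": 9,  "missed": True},
    {"hour": 14, "missed": False},
    {"hour": 14, "missed": False},
]

def preferred_hours(episodes, max_miss_rate=0.5):
    """Return hours whose historical miss rate is acceptably low."""
    stats = defaultdict(lambda: [0, 0])              # hour -> [misses, total]
    for episode in episodes:
        stats[episode["hour"]][0] += episode["missed"]
        stats[episode["hour"]][1] += 1
    return [hour for hour, (misses, total) in stats.items()
            if misses / total <= max_miss_rate]

print(preferred_hours(episodes))   # [14] -- schedule outside rush hour
```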

In the 1980s, artificial intelligence researchers promised to create programs that could do just that. Darpa even promoted a thinking “pilot’s associate — a kind of R2D2,” said Alex Roland, author of The Race for Machine Intelligence: Darpa, DoD, and the Strategic Computing Initiative.

But the field “fell on its face,” according to University of Washington computer scientist Henry Kautz. Instead of trying to teach computers how to reason on their own, “we said, ‘Well, if we just keep adding more rules, we could cover every case imaginable.'”

It’s an impossible task, of course. Every circumstance is different, and there will never be enough stipulations to cover them all.

A few computer programs, with enough training from their human masters, can make some assumptions about new situations on their own, however. Amazon.com’s system for recommending books and music is one of these.
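For a sense of how such trained assumptions can work, here is a minimal co-occurrence sketch in the spirit of “people who bought X also bought Y.” It is purely illustrative, with made-up data, and is not Amazon’s actual recommendation algorithm.

```python
from collections import Counter

# Invented purchase histories.
purchases = {
    "alice": {"book_a", "book_b"},
    "bob":   {"book_a", "book_c"},
    "carol": {"book_b", "book_c"},
}

def recommend(user, purchases):
    """Suggest items bought by users whose purchases overlap with this user's."""
    owned = purchases[user]
    counts = Counter()
    for other, items in purchases.items():
        if other != user and owned & items:          # shared taste
            counts.update(items - owned)
    return [item for item, _ in counts.most_common()]

print(recommend("alice", purchases))   # ['book_c']
```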

But these efforts are limited, too. Everyone’s received downright kooky suggestions from that Amazon program.

Overcoming these limitations requires a combination of logical approaches. That’s a goal behind IPTO’s new call for research into computers that can handle real-world reasoning.

It’s one of several problems Brachman said are “absolutely imperative” to solve as quickly as possible.

Although computer systems are getting more complicated every day, this complexity “may be actually reversing the information revolution,” he noted in a recent presentation (PDF). “Systems have grown more rigid, more fragile and increasingly open to attack.”

What’s needed, he asserts, is a computer network that can teach itself new capabilities, without having to be reprogrammed every time. Computers should be able to adapt to how their users like to work, spot when they’re being attacked and develop responses to these assaults. Think of it like the body’s immune system — or like a battlefield general.

But to act more like a person, a computer has to soak up its own experiences, like a human being does. It has to create a catalog of its existence. A LifeLog, if you will.

WIRED 02.04.2004

Pentagon Kills LifeLog Project

THE PENTAGON CANCELED its so-called LifeLog project, an ambitious effort to build a database tracking a person’s entire existence.

Run by Darpa, the Defense Department’s research arm, LifeLog aimed to gather in a single place just about everything an individual says, sees or does: the phone calls made, the TV shows watched, the magazines read, the plane tickets bought, the e-mail sent and received. Out of this seemingly endless ocean of information, computer scientists would plot distinctive routes in the data, mapping relationships, memories, events and experiences.

LifeLog’s backers said the all-encompassing diary could have turned into a near-perfect digital memory, giving its users computerized assistants with an almost flawless recall of what they had done in the past. But civil libertarians immediately pounced on the project when it debuted last spring, arguing that LifeLog could become the ultimate tool for profiling potential enemies of the state.

Researchers close to the project say they’re not sure why it was dropped late last month. Darpa hasn’t provided an explanation for LifeLog’s quiet cancellation. “A change in priorities” is the only rationale agency spokeswoman Jan Walker gave to Wired News.

However, related Darpa efforts concerning software secretaries and mechanical brains are still moving ahead as planned.

LifeLog is the latest in a series of controversial programs that have been canceled by Darpa in recent months. The Terrorism Information Awareness, or TIA, data-mining initiative was eliminated by Congress — although many analysts believe its research continues on the classified side of the Pentagon’s ledger. The Policy Analysis Market (or FutureMap), which provided a stock market of sorts for people to bet on terror strikes, was almost immediately withdrawn after its details came to light in July.

“I’ve always thought (LifeLog) would be the third program (after TIA and FutureMap) that could raise eyebrows if they didn’t make it clear how privacy concerns would be met,” said Peter Harsha, director of government affairs for the Computing Research Association.

“Darpa’s pretty gun-shy now,” added Lee Tien, with the Electronic Frontier Foundation, which has been critical of many agency efforts. “After TIA, they discovered they weren’t ready to deal with the firestorm of criticism.”

That’s too bad, artificial-intelligence researchers say. LifeLog would have addressed one of the key issues in developing computers that can think: how to take the unstructured mess of life, and recall it as discrete episodes — a trip to Washington, a sushi dinner, construction of a house.

“Obviously we’re quite disappointed,” said Howard Shrobe, who led a team from the Massachusetts Institute of Technology Artificial Intelligence Laboratory which spent weeks preparing a bid for a LifeLog contract. “We were very interested in the research focus of the program … how to help a person capture and organize his or her experience. This is a theme with great importance to both AI and cognitive science.”

To Tien, the project’s cancellation means “it’s just not tenable for Darpa to say anymore, ‘We’re just doing the technology, we have no responsibility for how it’s used.'”

Private-sector research in this area is proceeding. At Microsoft, for example, minicomputer pioneer Gordon Bell’s program, MyLifeBits, continues to develop ways to sort and store memories.

David Karger, Shrobe’s colleague at MIT, thinks such efforts will still go on at Darpa, too.

“I am sure that such research will continue to be funded under some other title,” wrote Karger in an e-mail. “I can’t imagine Darpa ‘dropping out’ of such a key research area.”

MEANWHILE…

Google: seeded by the Pentagon

By dr. Nafeez Ahmed

In 1994 — the same year the Highlands Forum was founded under the stewardship of the Office of the Secretary of Defense, the ONA, and DARPA — two young PhD students at Stanford University, Sergey Brin and Larry Page, made their breakthrough on the first automated web crawling and page ranking application. That application remains the core component of what eventually became Google’s search service. Brin and Page had performed their work with funding from the Digital Library Initiative (DLI), a multi-agency programme of the National Science Foundation (NSF), NASA and DARPA.

But that’s just one side of the story.

Min 6:44!


Also check: OBAMA, DARPA, GSK AND ROCKEFELLER’S $4.5B B.R.A.I.N. INITIATIVE – BETTER SIT WHEN YOU READ

Throughout the development of the search engine, Sergey Brin reported regularly and directly to two people who were not Stanford faculty at all: Dr. Bhavani Thuraisingham and Dr. Rick Steinheiser. Both were representatives of a sensitive US intelligence community research programme on information security and data-mining.

Thuraisingham is currently the Louis A. Beecherl distinguished professor and executive director of the Cyber Security Research Institute at the University of Texas, Dallas, and a sought-after expert on data-mining, data management and information security issues. But in the 1990s, she worked for the MITRE Corp., a leading US defense contractor, where she managed the Massive Digital Data Systems initiative, a project sponsored by the NSA, CIA, and the Director of Central Intelligence, to foster innovative research in information technology.

“We funded Stanford University through the computer scientist Jeffrey Ullman, who had several promising graduate students working on many exciting areas,” Prof. Thuraisingham told me. “One of them was Sergey Brin, the founder of Google. The intelligence community’s MDDS program essentially provided Brin seed-funding, which was supplemented by many other sources, including the private sector.”

This sort of funding is certainly not unusual, and Sergey Brin’s being able to receive it by being a graduate student at Stanford appears to have been incidental. The Pentagon was all over computer science research at this time. But it illustrates how deeply entrenched the culture of Silicon Valley is in the values of the US intelligence community.

In an extraordinary document hosted by the website of the University of Texas, Thuraisingham recounts that from 1993 to 1999, “the Intelligence Community [IC] started a program called Massive Digital Data Systems (MDDS) that I was managing for the Intelligence Community when I was at the MITRE Corporation.” The program funded 15 research efforts at various universities, including Stanford. Its goal was developing “data management technologies to manage several terabytes to petabytes of data,” including for “query processing, transaction management, metadata management, storage management, and data integration.”

At the time, Thuraisingham was chief scientist for data and information management at MITRE, where she led team research and development efforts for the NSA, CIA, US Air Force Research Laboratory, as well as the US Navy’s Space and Naval Warfare Systems Command (SPAWAR) and Communications and Electronic Command (CECOM). She went on to teach courses for US government officials and defense contractors on data-mining in counter-terrorism.

In her University of Texas article, she attaches a copy of an abstract of the US intelligence community’s MDDS program that had been presented to the “Annual Intelligence Community Symposium” in 1995. The abstract reveals that the primary sponsors of the MDDS programme were three agencies: the NSA, the CIA’s Office of Research & Development, and the intelligence community’s Community Management Staff (CMS), which operates under the Director of Central Intelligence. Administrators of the program, which provided funding of around 3–4 million dollars per year for 3–4 years, were identified as Hal Curran (NSA), Robert Kluttz (CMS), Dr. Claudia Pierce (NSA), Dr. Rick Steinheiser (ORD — standing for the CIA’s Office of Research and Development), and Dr. Thuraisingham herself.

Thuraisingham goes on in her article to reiterate that this joint CIA-NSA program partly funded Sergey Brin to develop the core of Google, through a grant to Stanford managed by Brin’s supervisor Prof. Jeffrey D. Ullman:

“In fact, the Google founder Mr. Sergey Brin was partly funded by this program while he was a PhD student at Stanford. He together with his advisor Prof. Jeffrey Ullman and my colleague at MITRE, Dr. Chris Clifton [Mitre’s chief scientist in IT], developed the Query Flocks System which produced solutions for mining large amounts of data stored in databases. I remember visiting Stanford with Dr. Rick Steinheiser from the Intelligence Community and Mr. Brin would rush in on roller blades, give his presentation and rush out. In fact the last time we met in September 1998, Mr. Brin demonstrated to us his search engine which became Google soon after.”

Brin and Page officially incorporated Google as a company in September 1998, the very month they last reported to Thuraisingham and Steinheiser. ‘Query Flocks’ was also part of Google’s patented ‘PageRank’ search system, which Brin developed at Stanford under the CIA-NSA-MDDS programme, as well as with funding from the NSF, IBM and Hitachi. That year, MITRE’s Dr. Chris Clifton, who worked under Thuraisingham to develop the ‘Query Flocks’ system, co-authored a paper with Brin’s supervisor, Prof. Ullman, and the CIA’s Rick Steinheiser. Titled ‘Knowledge Discovery in Text,’ the paper was presented at an academic conference.

“The MDDS funding that supported Brin was significant as far as seed-funding goes, but it was probably outweighed by the other funding streams,” said Thuraisingham. “The duration of Brin’s funding was around two years or so. In that period, I and my colleagues from the MDDS would visit Stanford to see Brin and monitor his progress every three months or so. We didn’t supervise exactly, but we did want to check progress, point out potential problems and suggest ideas. In those briefings, Brin did present to us on the query flocks research, and also demonstrated to us versions of the Google search engine.”

Brin thus reported to Thuraisingham and Steinheiser regularly about his work developing Google.

==

UPDATE 2.05PM GMT [2nd Feb 2015]:

Since publication of this article, Prof. Thuraisingham has amended her article referenced above. The amended version includes a new modified statement, followed by a copy of the original version of her account of the MDDS. In this amended version, Thuraisingham rejects the idea that CIA funded Google, and says instead:

“In fact Prof. Jeffrey Ullman (at Stanford) and my colleague at MITRE Dr. Chris Clifton together with some others developed the Query Flocks System, as part of MDDS, which produced solutions for mining large amounts of data stored in databases. Also, Mr. Sergey Brin, the cofounder of Google, was part of Prof. Ullman’s research group at that time. I remember visiting Stanford with Dr. Rick Steinheiser from the Intelligence Community periodically and Mr. Brin would rush in on roller blades, give his presentation and rush out. During our last visit to Stanford in September 1998, Mr. Brin demonstrated to us his search engine which I believe became Google soon after…

There are also several inaccuracies in Dr. Ahmed’s article (dated January 22, 2015). For example, the MDDS program was not a ‘sensitive’ program as stated by Dr. Ahmed; it was an Unclassified program that funded universities in the US. Furthermore, Sergey Brin never reported to me or to Dr. Rick Steinheiser; he only gave presentations to us during our visits to the Department of Computer Science at Stanford during the 1990s. Also, MDDS never funded Google; it funded Stanford University.”

Here, there is no substantive factual difference in Thuraisingham’s accounts, other than to assert that her statement associating Sergey Brin with the development of ‘query flocks’ is mistaken. Notably, this acknowledgement is derived not from her own knowledge, but from this very article quoting a comment from a Google spokesperson.

However, the bizarre attempt to disassociate Google from the MDDS program misses the mark.

Firstly, the MDDS never funded Google, because during the development of the core components of the Google search engine, there was no company incorporated with that name. The grant was instead provided to Stanford University through Prof. Ullman, through whom some MDDS funding was used to support Brin, who was co-developing Google at the time.

Secondly, Thuraisingham then adds that Brin never “reported” to her or the CIA’s Steinheiser, but admits he “gave presentations to us during our visits to the Department of Computer Science at Stanford during the 1990s.” It is unclear, though, what the distinction is here between reporting and delivering a detailed presentation — either way, Thuraisingham confirms that she and the CIA had taken a keen interest in Brin’s development of Google.

Thirdly, Thuraisingham describes the MDDS program as “unclassified,” but this does not contradict its “sensitive” nature. As someone who has worked for decades as an intelligence contractor and advisor, Thuraisingham is surely aware that there are many ways of categorizing intelligence, including ‘sensitive but unclassified.’ A number of former US intelligence officials I spoke to said that the almost total lack of public information on the CIA and NSA’s MDDS initiative suggests that although the program was not classified, its contents were likely considered sensitive, which would explain efforts to minimise transparency about the program and the way it fed back into developing tools for the US intelligence community.

Fourthly, and finally, it is important to point out that the MDDS abstract which Thuraisingham includes in her University of Texas document states clearly not only that the Director of Central Intelligence’s CMS, CIA and NSA were the overseers of the MDDS initiative, but that the intended customers of the project were “DoD, IC, and other government organizations”: the Pentagon, the US intelligence community, and other relevant US government agencies.

In other words, the provision of MDDS funding to Brin through Ullman, under the oversight of Thuraisingham and Steinheiser, was fundamentally because they recognized the potential utility of Brin’s work developing Google to the Pentagon, intelligence community, and the federal government at large.

==

The MDDS programme is actually referenced in several papers co-authored by Brin and Page while at Stanford, specifically highlighting its role in financially sponsoring Brin in the development of Google. In their 1998 paper published in the Bulletin of the IEEE Computer Society Technical Committee on Data Engineering, they describe the automation of methods to extract information from the web via “Dual Iterative Pattern Relation Extraction,” the development of “a global ranking of Web pages called PageRank,” and the use of PageRank “to develop a novel search engine called Google.” Through an opening footnote, Sergey Brin confirms he was “Partially supported by the Community Management Staff’s Massive Digital Data Systems Program, NSF grant IRI-96–31952” — confirming that Brin’s work developing Google was indeed partly funded by the CIA-NSA-MDDS program.
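For readers curious about the technical side, the idea of “a global ranking of Web pages” can be illustrated with a minimal power-iteration sketch of the PageRank concept. This is an illustration only, with an assumed damping factor of 0.85 and made-up page names; it is not Brin and Page’s actual implementation.

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}              # start from a uniform score
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                        # dangling page: spread its score evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Tiny example: three pages linking to one another.
web = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
print(pagerank(web))
```

The insight is that a page's score depends on the scores of the pages linking to it, computed iteratively over the whole link graph rather than from the page's own content.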

This NSF grant identified alongside the MDDS, whose project report lists Brin among the students supported (without mentioning the MDDS), was different to the NSF grant to Larry Page that included funding from DARPA and NASA. The project report, authored by Brin’s supervisor Prof. Ullman, goes on to say under the section ‘Indications of Success’ that “there are some new stories of startups based on NSF-supported research.” Under ‘Project Impact,’ the report remarks: “Finally, the google project has also gone commercial as Google.com.”

Thuraisingham’s account, including her new amended version, therefore demonstrates that the CIA-NSA-MDDS program was not only partly funding Brin throughout his work with Larry Page developing Google, but that senior US intelligence representatives including a CIA official oversaw the evolution of Google in this pre-launch phase, all the way until the company was ready to be officially founded. Google, then, had been enabled with a “significant” amount of seed-funding and oversight from the Pentagon: namely, the CIA, NSA, and DARPA.

The DoD could not be reached for comment.

When I asked Prof. Ullman to confirm whether or not Brin was partly funded under the intelligence community’s MDDS program, and whether Ullman was aware that Brin was regularly briefing the CIA’s Rick Steinheiser on his progress in developing the Google search engine, Ullman’s responses were evasive: “May I know whom you represent and why you are interested in these issues? Who are your ‘sources’?” He also denied that Brin played a significant role in developing the ‘query flocks’ system, although it is clear from Brin’s papers that he did draw on that work in co-developing the PageRank system with Page.

When I asked Ullman whether he was denying the US intelligence community’s role in supporting Brin during the development of Google, he said: “I am not going to dignify this nonsense with a denial. If you won’t explain what your theory is, and what point you are trying to make, I am not going to help you in the slightest.”

The MDDS abstract published online at the University of Texas confirms that the rationale for the CIA-NSA project was to “provide seed money to develop data management technologies which are of high-risk and high-pay-off,” including techniques for “querying, browsing, and filtering; transaction processing; access methods and indexing; metadata management and data modelling; and integrating heterogeneous databases; as well as developing appropriate architectures.” The ultimate vision of the program was to “provide for the seamless access and fusion of massive amounts of data, information and knowledge in a heterogeneous, real-time environment” for use by the Pentagon, intelligence community and potentially across government.

These revelations corroborate the claims of Robert Steele, former senior CIA officer and a founding civilian deputy director of the Marine Corps Intelligence Activity, whom I interviewed for The Guardian last year on open source intelligence. Citing sources at the CIA, Steele had said in 2006 that Steinheiser, an old colleague of his, was the CIA’s main liaison at Google and had arranged early funding for the pioneering IT firm. At the time, Wired founder John Battelle managed to get this official denial from a Google spokesperson in response to Steele’s assertions:

“The statements related to Google are completely untrue.”

This time round, despite multiple requests and conversations, a Google spokesperson declined to comment.

UPDATE: As of 5.41PM GMT [22nd Jan 2015], Google’s director of corporate communication got in touch and asked me to include the following statement:

“Sergey Brin was not part of the Query Flocks Program at Stanford, nor were any of his projects funded by US Intelligence bodies.”

This is what I wrote back:

My response to that statement would be as follows: Brin himself in his own paper acknowledges funding from the Community Management Staff of the Massive Digital Data Systems (MDDS) initiative, which was supplied through the NSF. The MDDS was an intelligence community program set up by the CIA and NSA. I also have it on record, as noted in the piece, from Prof. Thuraisingham of the University of Texas that she managed the MDDS program on behalf of the US intelligence community, and that she and the CIA’s Rick Steinheiser met Brin every three months or so for two years to be briefed on his progress developing Google and PageRank. Whether Brin worked on query flocks or not is neither here nor there.

In that context, you might want to consider the following questions:

1) Does Google deny that Brin’s work was part-funded by the MDDS via an NSF grant?

2) Does Google deny that Brin reported regularly to Thuraisingham and Steinheiser from around 1996 to 1998 until September that year when he presented the Google search engine to them?

LESSER-KNOWN FACT: AROUND THE SAME TIME, IN 2004, SERGEY BRIN JOINED THE WORLD ECONOMIC FORUM’S YOUTH ORGANIZATION, THE “YOUNG GLOBAL LEADERS”

Total Information Awareness

A call for papers for the MDDS was sent out via email list on November 3rd 1993 from senior US intelligence official David Charvonia, director of the research and development coordination office of the intelligence community’s CMS. The reaction from Tatu Ylonen (celebrated inventor of the widely used secure shell [SSH] data protection protocol) to his colleagues on the email list is telling: “Crypto relevance? Makes you think whether you should protect your data.” The email also confirms that defense contractor and Highlands Forum partner, SAIC, was managing the MDDS submission process, with abstracts to be sent to Jackie Booth of the CIA’s Office of Research and Development via a SAIC email address.

By 1997, Thuraisingham reveals, shortly before Google became incorporated and while she was still overseeing the development of its search engine software at Stanford, her thoughts turned to the national security applications of the MDDS program. In the acknowledgements to her book, Web Data Mining and Applications in Business Intelligence and Counter-Terrorism (2003), Thuraisingham writes that she and “Dr. Rick Steinheiser of the CIA, began discussions with Defense Advanced Research Projects Agency on applying data-mining for counter-terrorism,” an idea that resulted directly from the MDDS program which partly funded Google. “These discussions eventually developed into the current EELD (Evidence Extraction and Link Detection) program at DARPA.”

So the very same senior CIA official and CIA-NSA contractor involved in providing the seed-funding for Google were simultaneously contemplating the role of data-mining for counter-terrorism purposes, and were developing ideas for tools actually advanced by DARPA.

Today, as illustrated by her recent op-ed in the New York Times, Thuraisingham remains a staunch advocate of data-mining for counter-terrorism purposes, but also insists that these methods must be developed by government in cooperation with civil liberties lawyers and privacy advocates to ensure that robust procedures are in place to prevent potential abuse. She points out, damningly, that with the quantity of information being collected, there is a high risk of false positives.

In 1993, when the MDDS program was launched and managed by MITRE Corp. on behalf of the US intelligence community, University of Virginia computer scientist Dr. Anita K. Jones — a MITRE trustee — landed the job of DARPA director and head of research and engineering across the Pentagon. She had been on the board of MITRE since 1988. From 1987 to 1993, Jones simultaneously served on SAIC’s board of directors. As the new head of DARPA from 1993 to 1997, she also co-chaired the Pentagon’s Highlands Forum during the period of Google’s pre-launch development at Stanford under the MDDS.

Thus, when Thuraisingham and Steinheiser were talking to DARPA about the counter-terrorism applications of MDDS research, Jones was DARPA director and Highlands Forum co-chair. That year, Jones left DARPA to return to her post at the University of Virginia. The following year, she joined the board of the National Science Foundation, which of course had also just funded Brin and Page, and also returned to the board of SAIC. When she left DoD, Senator Chuck Robb paid Jones the following tribute: “She brought the technology and operational military communities together to design detailed plans to sustain US dominance on the battlefield into the next century.”

Dr. Anita Jones, head of DARPA from 1993–1997, and co-chair of the Pentagon Highlands Forum from 1995–1997, during which officials in charge of the CIA-NSA-MDDS program were funding Google, and in communication with DARPA about data-mining for counterterrorism

On the board of the National Science Foundation from 1992 to 1998 (including a stint as chairman from 1996) was Richard N. Zare. This was the period in which the NSF sponsored Sergey Brin and Larry Page in association with DARPA. In June 1994, Prof. Zare, a chemist at Stanford, participated with Prof. Jeffrey Ullman (who supervised Sergey Brin’s research) on a panel sponsored by Stanford and the National Research Council discussing the need for scientists to show how their work “ties to national needs.” The panel brought together scientists and policymakers, including “Washington insiders.”

DARPA’s EELD program, inspired by the work of Thuraisingham and Steinheiser under Jones’ watch, was rapidly adapted and integrated with a suite of tools to conduct comprehensive surveillance under the Bush administration.

According to DARPA official Ted Senator, who led the EELD program for the agency’s short-lived Information Awareness Office, EELD was among a range of “promising techniques” being prepared for integration “into the prototype TIA system.” TIA stood for Total Information Awareness, and was the main global electronic eavesdropping and data-mining program deployed by the Bush administration after 9/11. TIA had been set up by Iran-Contra conspirator Admiral John Poindexter, who was appointed in 2002 by Bush to lead DARPA’s new Information Awareness Office.

The Xerox Palo Alto Research Center (PARC) was another contractor among 26 companies (also including SAIC) that received million-dollar contracts from DARPA (the specific amounts remained classified) under Poindexter to push forward the TIA surveillance program from 2002 onwards. The research included “behaviour-based profiling,” “automated detection, identification and tracking” of terrorist activity, among other data-analyzing projects. At this time, PARC’s director and chief scientist was John Seely Brown. Both Brown and Poindexter were Pentagon Highlands Forum participants — Brown on a regular basis until recently.

TIA was purportedly shut down in 2003 due to public opposition after the program was exposed in the media, but the following year Poindexter participated in a Pentagon Highlands Group session in Singapore, alongside defense and security officials from around the world. Meanwhile, Ted Senator continued to manage the EELD program among other data-mining and analysis projects at DARPA until 2006, when he left to become a vice president at SAIC. He is now a SAIC/Leidos technical fellow.

Google, DARPA and the money trail

Long before the appearance of Sergey Brin and Larry Page, Stanford University’s computer science department had a close working relationship with US military intelligence. A letter dated November 5th 1984 from the office of renowned artificial intelligence (AI) expert, Prof Edward Feigenbaum, addressed to Rick Steinheiser, gives the latter directions to Stanford’s Heuristic Programming Project, addressing Steinheiser as a member of the “AI Steering Committee.” A list of attendees at a contractor conference around that time, sponsored by the Pentagon’s Office of Naval Research (ONR), includes Steinheiser as a delegate under the designation “OPNAV Op-115” — which refers to the Office of the Chief of Naval Operations’ program on operational readiness, which played a major role in advancing digital systems for the military.

From the 1970s, Prof. Feigenbaum and his colleagues had been running Stanford’s Heuristic Programming Project under contract with DARPA, continuing through to the 1990s. Feigenbaum alone had received over $7 million in this period for his work from DARPA, along with other funding from the NSF, NASA, and ONR.

Brin’s supervisor at Stanford, Prof. Jeffrey Ullman, was, in 1996, part of a joint funding project under DARPA’s Intelligent Integration of Information program. That year, Ullman co-chaired DARPA-sponsored meetings on data exchange between multiple systems.

In September 1998, the same month that Sergey Brin briefed US intelligence representatives Steinheiser and Thuraisingham, tech entrepreneurs Andreas Bechtolsheim and David Cheriton invested $100,000 each in Google. Both investors were connected to DARPA.

As a Stanford PhD student in electrical engineering in the 1980s, Bechtolsheim’s pioneering SUN workstation project had been funded by DARPA and the Stanford computer science department — this research was the foundation of Bechtolsheim’s establishment of Sun Microsystems, which he co-founded with William Joy.

As for Bechtolsheim’s co-investor in Google, David Cheriton, the latter is a long-time Stanford computer science professor who has an even more entrenched relationship with DARPA. His bio at the University of Alberta, which in November 2014 awarded him an honorary science doctorate, says that Cheriton’s “research has received the support of the US Defense Advanced Research Projects Agency (DARPA) for over 20 years.”

In the meantime, Bechtolsheim left Sun Microsystems in 1995, co-founding Granite Systems with his fellow Google investor Cheriton as a partner. They sold Granite to Cisco Systems in 1996, retaining significant ownership of Granite, and becoming senior Cisco executives.

An email obtained from the Enron Corpus (a database of 600,000 emails acquired by the Federal Energy Regulatory Commission and later released to the public) from Richard O’Neill, inviting Enron executives to participate in the Highlands Forum, shows that Cisco and Granite executives are intimately connected to the Pentagon. The email reveals that in May 2000, Bechtolsheim’s partner and Sun Microsystems co-founder, William Joy — who was then chief scientist and corporate executive officer there — had attended the Forum to discuss nanotechnology and molecular computing.

In 1999, Joy had also co-chaired the President’s Information Technology Advisory Committee, overseeing a report acknowledging that DARPA had:

“… revised its priorities in the 90’s so that all information technology funding was judged in terms of its benefit to the warfighter.”

Throughout the 1990s, then, DARPA’s funding to Stanford, including Google, was explicitly about developing technologies that could augment the Pentagon’s military intelligence operations in war theatres.

The Joy report recommended more federal government funding from the Pentagon, NASA, and other agencies to the IT sector. Greg Papadopoulos, another of Bechtolsheim’s colleagues as then Sun Microsystems chief technology officer, also attended a Pentagon Highlands Forum meeting in September 2000.

In November, the Pentagon Highlands Forum hosted Sue Bostrom, who was vice president for the internet at Cisco, sitting on the company’s board alongside Google co-investors Bechtolsheim and Cheriton. The Forum also hosted Lawrence Zuriff, then a managing partner of Granite, which Bechtolsheim and Cheriton had sold to Cisco. Zuriff had previously been an SAIC contractor from 1993 to 1994, working with the Pentagon on national security issues, specifically for Marshall’s Office of Net Assessment. In 1994, both the SAIC and the ONA were, of course, involved in co-establishing the Pentagon Highlands Forum. Among Zuriff’s output during his SAIC tenure was a paper titled ‘Understanding Information War’, delivered at a SAIC-sponsored US Army Roundtable on the Revolution in Military Affairs.

After Google’s incorporation, the company received $25 million in equity funding in 1999 led by Sequoia Capital and Kleiner Perkins Caufield & Byers. According to Homeland Security Today, “A number of Sequoia-bankrolled start-ups have contracted with the Department of Defense, especially after 9/11 when Sequoia’s Mark Kvamme met with Defense Secretary Donald Rumsfeld to discuss the application of emerging technologies to warfighting and intelligence collection.” Similarly, Kleiner Perkins had developed “a close relationship” with In-Q-Tel, the CIA venture capitalist firm that funds start-ups “to advance ‘priority’ technologies of value” to the intelligence community.

John Doerr, who led the Kleiner Perkins investment in Google and obtained a board position, was a major early investor in Bechtolsheim’s Sun Microsystems at its launch. He and his wife Anne are the main funders behind Rice University’s Center for Engineering Leadership (RCEL), which in 2009 received $16 million from DARPA for its platform-aware-compilation-environment (PACE) ubiquitous computing R&D program. Doerr also has a close relationship with the Obama administration, which he advised shortly after it took power to ramp up Pentagon funding to the tech industry. In 2013, at the Fortune Brainstorm TECH conference, Doerr applauded “how the DoD’s DARPA funded GPS, CAD, most of the major computer science departments, and of course, the Internet.”

From inception, in other words, Google was incubated, nurtured and financed by interests that were directly affiliated or closely aligned with the US military intelligence community: many of whom were embedded in the Pentagon Highlands Forum.

Google captures the Pentagon

In 2003, Google began customizing its search engine under special contract with the CIA for its Intelink Management Office, “overseeing top-secret, secret and sensitive but unclassified intranets for CIA and other IC agencies,” according to Homeland Security Today. That year, CIA funding was also being “quietly” funneled through the National Science Foundation to projects that might help create “new capabilities to combat terrorism through advanced technology.”

The following year, Google bought the firm Keyhole, which had originally been funded by In-Q-Tel. Using Keyhole, Google began developing the advanced satellite mapping software behind Google Earth. Former DARPA director and Highlands Forum co-chair Anita Jones had been on the board of In-Q-Tel at this time, and remains so today.

Then in November 2005, In-Q-Tel issued notices to sell $2.2 million of Google stocks. Google’s relationship with US intelligence was further brought to light when an IT contractor told a closed Washington DC conference of intelligence professionals on a not-for-attribution basis that at least one US intelligence agency was working to “leverage Google’s [user] data monitoring” capability as part of an effort to acquire data of “national security intelligence interest.”

A photo on Flickr dated March 2007 reveals that Google research director and AI expert Peter Norvig attended a Pentagon Highlands Forum meeting that year in Carmel, California. Norvig’s intimate connection to the Forum as of that year is also corroborated by his role in guest editing the 2007 Forum reading list.

The photo below shows Norvig in conversation with Lewis Shepherd, who at that time was senior technology officer at the Defense Intelligence Agency, responsible for investigating, approving, and architecting “all new hardware/software systems and acquisitions for the Global Defense Intelligence IT Enterprise,” including “big data technologies.” Shepherd now works at Microsoft. Norvig was a computer research scientist at Stanford University in 1991 before joining Bechtolsheim’s Sun Microsystems as senior scientist until 1994, and going on to head up NASA’s computer science division.

Lewis Shepherd (left), then a senior technology officer at the Pentagon’s Defense Intelligence Agency, talking to Peter Norvig (right), renowned artificial intelligence expert and director of research at Google. This photo is from a Highlands Forum meeting in 2007.

Norvig shows up on O’Neill’s Google Plus profile as one of his close connections. Scoping the rest of O’Neill’s Google Plus connections illustrates that he is directly connected not just to a wide range of Google executives, but also to some of the biggest names in the US tech community.

Those connections include Michele Weslander Quaid, an ex-CIA contractor and former senior Pentagon intelligence official who is now Google’s chief technology officer where she is developing programs to “best fit government agencies’ needs”; Elizabeth Churchill, Google director of user experience; James Kuffner, a humanoid robotics expert who now heads up Google’s robotics division and who introduced the term ‘cloud robotics’; Mark Drapeau, director of innovation engagement for Microsoft’s public sector business; Lili Cheng, general manager of Microsoft’s Future Social Experiences (FUSE) Labs; Jon Udell, Microsoft ‘evangelist’; Cory Ondrejka, vice president of engineering at Facebook; to name just a few.

In 2010, Google signed a multi-billion dollar no-bid contract with the NSA’s sister agency, the National Geospatial-Intelligence Agency (NGA). The contract was to use Google Earth for visualization services for the NGA. Google had developed the software behind Google Earth by purchasing Keyhole from the CIA venture firm In-Q-Tel.

Then a year after, in 2011, another of O’Neill’s Google Plus connections, Michele Quaid — who had served in executive positions at the NGA, National Reconnaissance Office and the Office of the Director of National Intelligence — left her government role to become Google ‘innovation evangelist’ and the point-person for seeking government contracts. Quaid’s last role before her move to Google was as a senior representative of the Director of National Intelligence to the Intelligence, Surveillance, and Reconnaissance Task Force, and a senior advisor to the undersecretary of defense for intelligence’s director of Joint and Coalition Warfighter Support (J&CWS). Both roles involved information operations at their core. Before her Google move, in other words, Quaid worked closely with the Office of the Undersecretary of Defense for Intelligence, to which the Pentagon’s Highlands Forum is subordinate. Quaid has herself attended the Forum, though precisely when and how often I could not confirm.

In March 2012, then DARPA director Regina Dugan — who in that capacity was also co-chair of the Pentagon Highlands Forum — followed her colleague Quaid into Google to lead the company’s new Advanced Technology and Projects Group. During her Pentagon tenure, Dugan led on strategic cyber security and social media, among other initiatives. She was responsible for focusing “an increasing portion” of DARPA’s work “on the investigation of offensive capabilities to address military-specific needs,” securing $500 million of government funding for DARPA cyber research from 2012 to 2017.

Regina Dugan, former head of DARPA and Highlands Forum co-chair, now a senior Google executive — trying her best to look the part

By November 2014, Google’s chief AI and robotics expert James Kuffner was a delegate alongside O’Neill at the Highlands Island Forum 2014 in Singapore, to explore ‘Advancement in Robotics and Artificial Intelligence: Implications for Society, Security and Conflict.’ The event included 26 delegates from Austria, Israel, Japan, Singapore, Sweden, Britain and the US, from both industry and government. Kuffner’s association with the Pentagon, however, began much earlier. In 1997, Kuffner was a researcher during his Stanford PhD for a Pentagon-funded project on networked autonomous mobile robots, sponsored by DARPA and the US Navy.

Dr Nafeez Ahmed is an investigative journalist, bestselling author and international security scholar. A former Guardian writer, he writes the ‘System Shift’ column for VICE’s Motherboard, and is also a columnist for Middle East Eye. He is the winner of a 2015 Project Censored Award for Outstanding Investigative Journalism for his Guardian work.

Nafeez has also written for The Independent, Sydney Morning Herald, The Age, The Scotsman, Foreign Policy, The Atlantic, Quartz, Prospect, New Statesman, Le Monde diplomatique, New Internationalist, Counterpunch, Truthout, among others. He is the author of A User’s Guide to the Crisis of Civilization: And How to Save It (2010), and the scifi thriller novel ZERO POINT, among other books. His work on the root causes and covert operations linked to international terrorism officially contributed to the 9/11 Commission and the 7/7 Coroner’s Inquest.

Nafeez is 120% corroborated by Quartz:

A rich history of the government’s science funding

There was already a long history of collaboration between America’s best scientists and the intelligence community, from the creation of the atomic bomb and satellite technology to efforts to put a man on the moon.

In fact, the internet itself was created because of an intelligence effort: In the 1970s, the agency responsible for developing emerging technologies for military, intelligence, and national security purposes—the Defense Advanced Research Projects Agency (DARPA)—linked four supercomputers to handle massive data transfers. It handed the operations off to the National Science Foundation (NSF) a decade or so later, which proliferated the network across thousands of universities and, eventually, the public, thus creating the architecture and scaffolding of the World Wide Web.

Silicon Valley was no different. By the mid 1990s, the intelligence community was seeding funding to the most promising supercomputing efforts across academia, guiding the creation of efforts to make massive amounts of information useful for both the private sector as well as the intelligence community.

They funded these computer scientists through an unclassified, highly compartmentalized program that was managed for the CIA and the NSA by large military and intelligence contractors. It was called the Massive Digital Data Systems (MDDS) project.

The Massive Digital Data Systems (MDDS) project 

MDDS was introduced to several dozen leading computer scientists at Stanford, CalTech, MIT, Carnegie Mellon, Harvard, and others in a white paper that described what the CIA, NSA, DARPA, and other agencies hoped to achieve. The research would largely be funded and managed by unclassified science agencies like NSF, which would allow the architecture to be scaled up in the private sector if it managed to achieve what the intelligence community hoped for.

“Not only are activities becoming more complex, but changing demands require that the IC [Intelligence Community] process different types as well as larger volumes of data,” the intelligence community said in its 1993 MDDS white paper. “Consequently, the IC is taking a proactive role in stimulating research in the efficient management of massive databases and ensuring that IC requirements can be incorporated or adapted into commercial products. Because the challenges are not unique to any one agency, the Community Management Staff (CMS) has commissioned a Massive Digital Data Systems [MDDS] Working Group to address the needs and to identify and evaluate possible solutions.”

Over the next few years, the program’s stated aim was to provide more than a dozen grants of several million dollars each to advance this research concept. The grants were to be directed largely through the NSF so that the most promising, successful efforts could be captured as intellectual property and form the basis of companies attracting investments from Silicon Valley. This type of public-to-private innovation system helped launch powerful science and technology companies like Qualcomm, Symantec, Netscape, and others, and funded the pivotal research in areas like Doppler radar and fiber optics, which are central to large companies like AccuWeather, Verizon, and AT&T today. Today, the NSF provides nearly 90% of all federal funding for university-based computer-science research.

MIT is but a Pentagon lab

The CIA and NSA’s end goal

The research arms of the CIA and NSA hoped that the best computer-science minds in academia could identify what they called “birds of a feather”: Just as geese fly together in large V shapes, or flocks of sparrows make sudden movements together in harmony, they predicted that like-minded groups of humans would move together online. The intelligence community named their first unclassified briefing for scientists the “birds of a feather” briefing, and the “Birds of a Feather Session on the Intelligence Community Initiative in Massive Digital Data Systems” took place at the Fairmont Hotel in San Jose in the spring of 1995.

Their research aim was to track digital fingerprints inside the rapidly expanding global information network, which was then known as the World Wide Web. Could an entire world of digital information be organized so that the requests humans made inside such a network could be tracked and sorted? Could their queries be linked and ranked in order of importance? Could “birds of a feather” be identified inside this sea of information so that communities and groups could be tracked in an organized way?

By working with emerging commercial-data companies, their intent was to track like-minded groups of people across the internet and identify them from the digital fingerprints they left behind, much like forensic scientists use fingerprint smudges to identify criminals. Just as “birds of a feather flock together,” they predicted that potential terrorists would communicate with each other in this new global, connected world—and they could find them by identifying patterns in this massive amount of new information. Once these groups were identified, they could then follow their digital trails everywhere.
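
To make the “birds of a feather” idea concrete, here is a purely hypothetical sketch (my own illustration, not anything taken from the MDDS papers) of how like-minded users could be grouped from nothing more than the overlap in their search queries. The user IDs, query terms and threshold are all invented:

```python
# Hypothetical illustration: group users whose search queries overlap heavily.
# User IDs, query terms, and the threshold are invented for this sketch.
from itertools import combinations

query_logs = {
    "user_a": {"encrypted chat", "vpn", "proxy"},
    "user_b": {"vpn", "proxy", "tor"},
    "user_c": {"sourdough", "garden", "vpn"},
}

def jaccard(a, b):
    """Overlap between two sets of query terms (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b)

THRESHOLD = 0.5  # flag pairs whose query vocabularies overlap at least this much
flocks = [
    (u, v, round(jaccard(qu, qv), 2))
    for (u, qu), (v, qv) in combinations(query_logs.items(), 2)
    if jaccard(qu, qv) >= THRESHOLD
]
print(flocks)  # [('user_a', 'user_b', 0.5)]
```

A real system would of course work on billions of queries and far richer signals, but the grouping principle is the same.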

Sergey Brin and Larry Page, computer-science boy wonders 

In 1995, one of the first and most promising MDDS grants went to a computer-science research team at Stanford University with a decade-long history of working with NSF and DARPA grants. The primary objective of this grant was “query optimization of very complex queries that are described using the ‘query flocks’ approach.” A second grant—the DARPA-NSF grant most closely associated with Google’s origin—was part of a coordinated effort to build a massive digital library using the internet as its backbone. Both grants funded research by two graduate students who were making rapid advances in web-page ranking, as well as tracking (and making sense of) user queries: future Google cofounders Sergey Brin and Larry Page.

The research by Brin and Page under these grants became the heart of Google: people using search functions to find precisely what they wanted inside a very large data set. The intelligence community, however, saw a slightly different benefit in their research: Could the network be organized so efficiently that individual users could be uniquely identified and tracked?
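
The ranking side of that research is public: Brin and Page described the PageRank algorithm in their Stanford papers. Below is a minimal sketch of the published power-iteration idea on a toy three-page link graph (the page names are invented; this illustrates the algorithm, not Google’s production code):

```python
# Minimal PageRank power-iteration sketch over a toy link graph.
# Page names are invented; this illustrates the published algorithm only.
links = {
    "page_a": ["page_b", "page_c"],
    "page_b": ["page_c"],
    "page_c": ["page_a"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            share = rank[page] / len(outlinks)   # each page splits its rank
            for target in outlinks:              # evenly among the pages it links to
                new_rank[target] += damping * share
        rank = new_rank
    return rank

print(pagerank(links))  # page_c, with the most incoming rank, scores highest
```

The score a page accumulates depends on who links to it, which is the link-analysis signal described above.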

This process is perfectly suited for the purposes of counter-terrorism and homeland security efforts: Human beings and like-minded groups who might pose a threat to national security can be uniquely identified online before they do harm. This explains why the intelligence community found Brin’s and Page’s research efforts so appealing; prior to this time, the CIA largely used human intelligence efforts in the field to identify people and groups that might pose threats. The ability to track them virtually (in conjunction with efforts in the field) would change everything.

It was the beginning of what in just a few years’ time would become Google. The two intelligence-community managers charged with leading the program met regularly with Brin as his research progressed, and he was an author on several other research papers that resulted from this MDDS grant before he and Page left to form Google.

The grants allowed Brin and Page to do their work and contributed to their breakthroughs in web-page ranking and tracking user queries. Brin didn’t work for the intelligence community—or for anyone else. Google had not yet been incorporated. He was just a Stanford researcher taking advantage of the grant provided by the NSA and CIA through the unclassified MDDS program.

Left out of Google’s story

The MDDS research effort has never been part of Google’s origin story, even though the principal investigator for the MDDS grant specifically named Google as directly resulting from their research: “Its core technology, which allows it to find pages far more accurately than other search engines, was partially supported by this grant,” he wrote. In a published research paper that includes some of Brin’s pivotal work, the authors also reference the NSF grant that was created by the MDDS program.

Instead, every Google creation story only mentions just one federal grant: the NSF/DARPA “digital libraries” grant, which was designed to allow Stanford researchers to search the entire World Wide Web stored on the university’s servers at the time. “The development of the Google algorithms was carried on a variety of computers, mainly provided by the NSF-DARPA-NASA-funded Digital Library project at Stanford,” Stanford’s Infolab says of its origin, for example. NSF likewise only references the digital libraries grant, not the MDDS grant as well, in its own history of Google’s origin. In the famous research paper, “The Anatomy of a Large-Scale Hypertextual Web Search Engine,” which describes the creation of Google, Brin and Page thanked the NSF and DARPA for its digital library grant to Stanford. But the grant from the intelligence community’s MDDS program—specifically designed for the breakthrough that Google was built upon—has faded into obscurity.

Google has said in the past that it was not funded or created by the CIA. For instance, when stories circulated in 2006 that Google had received funding from the intelligence community for years to assist in counter-terrorism efforts, the company told Wired magazine founder John Battelle, “The statements related to Google are completely untrue.”

Did the CIA directly fund the work of Brin and Page, and therefore create Google? No. But were Brin and Page researching precisely what the NSA, the CIA, and the intelligence community hoped for, assisted by their grants? Absolutely.

To understand this significance, you have to consider what the intelligence community was trying to achieve as it seeded grants to the best computer-science minds in academia: The CIA and NSA funded an unclassified, compartmentalized program designed from its inception to spur the development of something that looks almost exactly like Google. Brin’s breakthrough research on page ranking by tracking user queries and linking them to the many searches conducted—essentially identifying “birds of a feather”—was largely the aim of the intelligence community’s MDDS program. And Google succeeded beyond their wildest dreams.

The intelligence community’s enduring legacy within Silicon Valley

Digital privacy concerns over the intersection between the intelligence community and commercial technology giants have grown in recent years. But most people still don’t understand the degree to which the intelligence community relies on the world’s biggest science and tech companies for its counter-terrorism and national-security work.

Civil-liberty advocacy groups have aired their privacy concerns for years, especially as they now relate to the Patriot Act. “Hastily passed 45 days after 9/11 in the name of national security, the Patriot Act was the first of many changes to surveillance laws that made it easier for the government to spy on ordinary Americans by expanding the authority to monitor phone and email communications, collect bank and credit reporting records, and track the activity of innocent Americans on the Internet,” says the ACLU. “While most Americans think it was created to catch terrorists, the Patriot Act actually turns regular citizens into suspects.”

When asked, the biggest technology and communications companies—from Verizon and AT&T to Google, Facebook, and Microsoft—say that they never deliberately and proactively offer up their vast databases on their customers to federal security and law enforcement agencies: They say that they only respond to subpoenas or requests that are filed properly under the terms of the Patriot Act.

But even a cursory glance through recent public records shows that there is a treadmill of constant requests that could undermine the intent behind this privacy promise. According to the data-request records that the companies make available to the public, in the most recent reporting period between 2016 and 2017, local, state and federal government authorities seeking information related to national security, counter-terrorism or criminal concerns issued more than 260,000 subpoenas, court orders, warrants, and other legal requests to Verizon, more than 250,000 such requests to AT&T, and nearly 24,000 subpoenas, search warrants, or court orders to Google. Direct national security or counter-terrorism requests are a small fraction of this overall group of requests, but the Patriot Act legal process has now become so routinized that the companies each have a group of employees who simply take care of the stream of requests.

In this way, the collaboration between the intelligence community and big, commercial science and tech companies has been wildly successful. When national security agencies need to identify and track people and groups, they know where to turn – and do so frequently. That was the goal in the beginning. It has succeeded perhaps more than anyone could have imagined at the time.

FFW to 2020

From DARPA to Google: How the Military Kickstarted AV Development

 27 Feb 2020


The Stanford Racing Team

by Arrow Mag, Feb 2020

Sebastian Thrun had been entertaining the idea of self-driving cars for many years. Born and raised in Germany, he was fascinated with the power and performance of German cars. Things changed in 1986, when he was 18: his best friend died in a car crash because the driver, another friend, was going too fast in his new Audi Quattro.

As a student at the University of Bonn, Thrun developed several autonomous robotic systems that earned him international recognition. At the time, Thrun was convinced that self-driving cars would soon make transportation safer, avoiding crashes like the one that took his friend’s life.

In 1998, he became an assistant professor and co-director of the Robot Learning Laboratory at Carnegie Mellon University. In July 2003, Thrun left Carnegie Mellon for Stanford University, soon after the first DARPA Grand Challenge was announced. Before accepting the new position, he asked Red Whittaker, the leader of the CMU robotics department, to join the team developing the vehicle for the DARPA race. Whittaker declined. After moving to California, Thrun joined the Stanford Racing Team.

On Oct. 8, 2005, the Stanford Racing Team won $2 million for being the first team to complete the 132-mile DARPA Grand Challenge course in California’s Mojave Desert. Their robot car, “Stanley,” finished in just under 6 hours and 54 minutes and averaged over 19 mph on the course.
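
A quick sanity check on those figures, treating the finishing time as roughly 6.9 hours:

```python
# Rough check of Stanley's reported average speed on the Grand Challenge course.
distance_miles = 132
time_hours = 6.9                              # "just under 6 hours and 54 minutes"
print(round(distance_miles / time_hours, 1))  # 19.1 -> consistent with "over 19 mph"
```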

Google’s Page wanted to develop self-driving cars

Two years after the third Grand Challenge, Google co-founder Larry Page called Thrun, wanting to turn the experience of the DARPA races into a product for the masses.

When Page first approached Thrun about building a self-driving car that people could use on the real roads, Thrun told him it couldn’t be done.

But Page had a vision, and he would not abandon his quest for an autonomous vehicle.

Thrun recalled that a short time later, Page came back to him and said, “OK, you say it can’t be done. You’re the expert. I trust you. So I can explain to Sergey [Brin] why it can’t be done, can you give me a technical reason why it can’t be done?”

Finally, Thrun accepted Page’s offer and, in 2009, started Project Chauffeur, the codename under which the Google self-driving car project began.

The Google 101,000-Mile Challenge

To develop the technology for Google’s self-driving car, Thrun called Chris Urmson and offered him the position of chief technical officer of the project.

To encourage the team to build a vehicle, and its systems, to drive on any public road, Page created two challenges, with big cash rewards for the entire team: a 1,000-mile challenge to show that Project Chauffeur’s car could drive in several situations, including highways and the streets of San Francisco, and another 100,000-mile challenge to show that driverless cars could be a reality in a few years.

By the middle of 2011, Project Chauffeur engineers completed the two challenges.

In 2016, the Google self-driving car project became Waymo, a “spinoff under Alphabet as a self-driving technology company with a mission to make it safe and easy for people and things to move around.”

Urmson led Google’s self-driving car project for nearly eight years. Under his leadership, Google vehicles accumulated 1.8 million miles of test driving.

In 2018, Waymo One, the first fully self-driving vehicle taxi service, began in Phoenix, Arizona.

From Waymo to Aurora

In 2016, after finishing development of the production-ready version of Waymo’s self-driving technology, Urmson left Google to start Aurora Innovation, a startup backed by Amazon, aiming to provide the full-stack solution for self-driving vehicles.

Urmson believes that in 20 years, we’ll see much of the transportation infrastructure move over to automation. – Arrow.com

TO BE CONTINUED

Here’s a peek into the next episode:

Facebook Hired a Former DARPA Head To Lead An Ambitious New Research Lab

Source: TIME | by VICTOR LUCKERSON

If you need another sign that Facebook’s world-dominating ambitions are just getting started, here’s one: the Menlo Park, Calif. company has hired a former DARPA chief to lead its new research lab.

Facebook CEO Mark Zuckerberg announced April 14 that Regina Dugan will guide Building 8, a new research group developing hardware projects that advance the company’s efforts in virtual reality, augmented reality, artificial intelligence and global connectivity.

Dugan served as the head of the Pentagon’s Defense Advanced Research Projects Agency from 2009 to 2012. Most recently, she led Google’s Advanced Technology and Projects Lab, a highly experimental arm of the company responsible for developing new hardware and software products on a strict two-year timetable.

Our work and existence, as media and people, is funded solely by our most generous readers, and we want to keep it this way.
We hardly made it before, but this summer something’s going on: our audience stats show bizarre patterns, we’re severely under our estimates, and the last savings are gone. We’re not your responsibility, but if you find enough benefits in this work…
Help SILVIEW.media survive and grow, please donate here, anything helps. Thank you!

! Articles can always be subject to later editing as a way of perfecting them

Valve, the company behind Half-Life and Counter-Strike, has just announced that the video games giant is ushering humanity into a Brave New World. How so? By merely including new technologies called brain-computer interfaces in its games.
Please read below a great brief report from The Organic Prepper, followed by a few of my own comments:

Brain-Computer Interfaces: Don’t Worry, It’s Just a “Game”

by Robert Wheeler

BCIs will work on our feelings by adjusting the game accordingly

The head of Valve, Gabe Newell, has stated that the future of video games will involve “Brain-computer interfaces.” Newell added that BCIs would soon create superior experiences to those we currently perceive through our eyes and ears. 

Newell said he envisions the gaming devices detecting a gamer’s emotions and then adjusting the settings to modify the player’s mood. For example, increasing the difficulty level when the player is getting bored.
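
Newell gives no implementation details, but the feedback loop he describes is easy to picture. The sketch below is purely hypothetical: the boredom signal, threshold and step size are invented, and a real headset would supply the reading that random.random() fakes here:

```python
# Purely hypothetical sketch of the loop Newell describes: estimate an emotion
# from headset data, then nudge the game settings in response. The signal,
# threshold, and step size are invented for this illustration.
import random

def read_boredom_score():
    """Stand-in for a real BCI reading; returns a boredom estimate in [0, 1]."""
    return random.random()

def adjust_difficulty(current, boredom, step=0.1):
    """Raise the difficulty when the player seems bored, otherwise leave it alone."""
    if boredom > 0.7:
        return min(1.0, current + step)
    return current

difficulty = 0.5
for _ in range(10):                  # one adjustment per sampling window
    difficulty = adjust_difficulty(difficulty, read_boredom_score())
print(round(difficulty, 2))
```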

Valve is currently developing its own BCIs and working on “modified VR head straps” that developers can use to experiment with signals coming from the brain. “If you’re a software developer in 2022 who doesn’t have one of these in your test lab, you’re making a silly mistake,” Newell said.

VR headsets will collect data by reading our brain signals

Valve is working with OpenBCI headsets. OpenBCI unveiled a headset design back in November that it calls Galea. It is designed to work alongside VR headsets like Valve’s Index.

“We’re working on an open-source project so that everybody can have high-resolution [brain signal] read technologies built into headsets, in a bunch of different modalities,” Newell added.

“Software developers for interactive experience[s] — you’ll be absolutely using one of these modified VR head straps to be doing that routinely — simply because there’s too much useful data,” said Newell.

The data collected by the head straps would consist of readings from the players’ brains and bodies. The data would essentially tell whether the player is excited, surprised, bored, sad, afraid, amused, or experiencing other emotions. The modified head strap will then use the information to improve “immersion and personalize what happens during games.”

The world will seem flat and colorless in comparison to the one created in your mind

Newell also discussed taking the brain-reading technology a step further and creating a situation to send signals to people’s minds. (Such as changing their feelings and delivering better visuals during games.)

“You’re used to experiencing the world through eyes,” Newell said, “but eyes were created by this low-cost bidder that didn’t care about failure rates and RMAs, and if it got broken, there was no way to repair anything effectively, which totally makes sense from an evolutionary perspective, but is not at all reflective of consumer preferences.”

“So the visual experience, the visual fidelity we’ll be able to create — the real world will stop being the metric that we apply to the best possible visual fidelity.

“Where it gets weird is when who you are becomes editable through a BCI.” ~ Gabe Newell

Typically, the average person accepts their feelings as a true reflection of how they feel. Newell claims that BCIs will allow these feelings to be edited digitally.

“One of the early applications I expect we’ll see is improved sleep — sleep will become an app that you run where you say, ‘Oh, I need this much sleep, I need this much REM,’” he said.

Newell also claims that another benefit could be the reduction or complete removal of unwanted feelings or brain conditions.

Doesn’t something good come from this technology?

Newell and Valve are working on something beyond merely the improvement of the video game experience. There is now a significant bleed-over between the research conducted by Newell’s team and the prosthetics and neuroscience industries.

Valve is trading research for expertise, contributing to projects developing synthetic body parts.

“This is what we’re contributing to this particular research project,” he said, “and because of that, we get access to leaders in the neuroscience field who teach us a lot about the neuroscience side.”

Are we equipped to experience things we have never experienced?

Newell briefly mentioned some potential negatives of the technology. For example, he explained how BCIs could cause people to experience physical pain, even pain beyond their physical body.

“You could make people think they [are] hurt by injuring their tool, which is a complicated topic in and of itself,” he said.

From the TVNZ article:

Game developers might harness that function to make a player feel the pain of the character they are playing as when they are injured — perhaps to a lesser degree.

Like any other form of technology, Newell says there’s a degree of trust in using it and that not everyone will feel comfortable with connecting their brain to a computer.

He says no one will be forced to do anything they don’t want to do, and that people will likely follow others if they have good experiences, likening BCI technology to cellular phones.

“People are going to decide for themselves if they want to do it. Nobody makes people use a phone,” Newell said.

“I’m not saying that everybody is going to love and insist that they have a brain-computer interface. I’m just saying each person is going to decide for themselves whether or not there’s an interesting combination of feature, functionality, and price.”

But Newell warned that BCIs come with one other significant risk. He says, “Nobody wants to say, ‘Remember Bob? Remember when Bob got hacked by the Russian malware? Yeah, that sucked. Is he still running naked through the forests?’”

Is this just another step in separating us from ourselves?

The truth is we will continue to be told to ignore the implications for this type of technology and the direction in which we are heading. Because, of course, they ARE developing prosthetics, and this is an advance in scientific discovery. Still, one step forward by an agenda and a plan created long ago only brings us that much closer to losing our ability to remember.  – The Organic Prepper

As for the Silview.media contribution to this report, I only have a few things for you to chew on, but I think they can keep your mind busy for a very long time:


1. What if this technology can be made to work both ways and adjust your feelings to the experience?

2. What if this technology can be upscaled to the Internet of All Things and your life experience in “intelligent cities”?

3. Please enter “DARPA” in our Search utility and see how that plays out with 1. & 2.

After that, I’m gonna drop the mic with this:

OpenBCI Launches New, Hackable Brain Computer Interface

By David Scheltema

Connor Russomanno and Joel Murphy show off their Editor’s Choice Blue Ribbon during World Maker Faire 2015

For several years, Connor Russomanno and Joel Murphy have been designing brain-computer interfaces (BCIs) as part of their company, OpenBCI. It’s a tricky proposition; subtle brain waves can be measured, but it’s difficult to read them and even more difficult to control them. So for its latest device, the team launched a crowdfunding campaign for the BCI Ganglion, a sub-$100 device to measure brain, muscle, and heart activity. (Tracking muscles in addition to electrical signals from the scalp increases accuracy.)

They also announced the Ultracortex Mark IV, a 3D printable headset designed to hold electrodes for electrical measurements by the Ganglion. Unlike existing devices that accomplish similar data acquisition, the Ganglion and Ultracortex Mark IV are open source (hardware and software), supported by an active user community, and lower in cost by thousands of dollars.

This means whether you want to record brainwaves for research purposes or create a brain-computer interface between five friends and a flying shark, it is possible and even affordable.

In one particularly far-out project, the TransAtlantic Biodata Communication hackathon, one person wired with OpenBCI was able to control a second person also wearing the device — even on opposite sides of the ocean.

OpenBCI’s Processing application showing brainwave activity (via OpenBCI)

But whether it’s wacky experiments, practical home projects, or academic research, the Ganglion offers a number of tools and sensors for various applications.

Specifications

  • 4-channel biosensors
  • 128, 256, 512 and 1024 sample rates
  • Used for EEG, EMG, or ECG
  • Wireless BLE connection with Simblee, an Arduino-compatible BLE radio module
  • SD card slot for local storage
  • Accelerometer
  • Connects wirelessly to the OpenBCI Processing sketch
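
For a sense of what talking to a board like this looks like in practice, here is a minimal streaming sketch assuming the third-party BrainFlow library (which supports the Ganglion but is not mentioned in the article) and a placeholder serial port; treat it as an assumption-laden example rather than official OpenBCI code:

```python
# Minimal streaming sketch assuming the third-party BrainFlow library and a
# Ganglion reachable through a BLE dongle on a placeholder serial port.
import time
from brainflow.board_shim import BoardShim, BrainFlowInputParams, BoardIds

params = BrainFlowInputParams()
params.serial_port = "/dev/ttyACM0"          # placeholder; depends on your setup

board = BoardShim(BoardIds.GANGLION_BOARD.value, params)
board.prepare_session()
board.start_stream()
time.sleep(5)                                # collect roughly 5 seconds of data
data = board.get_board_data()                # rows = channels, columns = samples
board.stop_stream()
board.release_session()

print(data.shape)
```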

The Ultracortex Mark IV is not ready at launch; the headset is currently in the concept stage of development. But not to worry, previous headsets from OpenBCI are compatible with the new Ganglion. Here are the design specifications the team is working on:

  • Simplified assembly
  • Higher node count (especially above the motor cortex & the visual cortex)
  • Increased comfort

How the Ganglion works

Interfacing the human brain with computers is all about monitoring electrical activity. The Ultracortex Mark IV holds electrodes against your head and they are wired to the Ganglion. The Ganglion monitors the electrical activity of neurons in the brain at each electrode — also known as brainwaves.

From a computing perspective, the brainwaves constitute a series of analog values, which the Ganglion samples and converts to digital values. This conversion is done using a specialized chip on the Ganglion known as an analog-to-digital converter (ADC). ADC chips are common in all sorts of electronics, not just BCI devices. If you have used an Arduino to read an analog sensor value, then you have used an ADC.
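
As a toy illustration of that conversion step (the 3.3 V reference and 12-bit resolution below are generic ADC examples, not the Ganglion’s actual specifications):

```python
# Toy illustration of what an ADC does: map a continuous voltage onto discrete
# integer codes. The 3.3 V reference and 12-bit depth are generic examples,
# not the Ganglion's actual specifications.
def adc_read(voltage, v_ref=3.3, bits=12):
    """Quantize an analog voltage into an integer code of `bits` resolution."""
    voltage = max(0.0, min(voltage, v_ref))   # clamp to the converter's input range
    full_scale = 2 ** bits - 1                # 4095 for a 12-bit converter
    return round(voltage / v_ref * full_scale)

print(adc_read(1.65))   # 2048 (mid-scale)
print(adc_read(0.0))    # 0
print(adc_read(3.3))    # 4095
```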

The Ganglion board mounted in the Mark IV headset. Exploding out of the Mark IV are the electrode nodes.

While the ADC chip OpenBCI used in the past was extremely powerful, it accounted for much of the cost of the device. The predecessor to the Ganglion, the OpenBCI 32-bit board, used a robust Texas Instruments ADS1299, which cost a whopping $36 per unit at quantity and $58 in low volume. While the ADS1299 chip is fantastic for sampling, it was far more advanced and expensive than most people need. When Russomanno and Murphy set out to lower the cost of their BCI device, the first thing they did was find a cheaper ADC. They were able to swap out the $36 chip for a much more affordable $6 ADC.

Cutting the cost of their BCI board by nearly $400 compared with its predecessor, the OpenBCI team is pushing the expectations for high-quality, low-cost science devices. Asked what defines a successful crowdfunding campaign apart from reaching a financial goal, Russomanno explains: “It is lowering the barrier to entry” and “getting the entire OpenBCI platform so it’s approachable by a passionate high schooler or undergraduate.”

The older OpenBCI 32-bit board attached to a Mark III headset

I hope the word “hackable” from the headline above stuck with you.




You may have heard of the famous Brain Mapping initiative by the US Government / Pentagon / DARPA. It’s been widely publicized as a version of the Human Genome Project, meant to bring countless health benefits. But both have lately proven to be falsely advertised.

2013

In 2018, a US journalist made a FOIA application regarding Antifa / BLM and received a surprise bonus, which went semi-viral and soon faded.
I’m digging it up again for a new autopsy, required in light of the latest revelations regarding DARPA, The Brain Initiative and others. If you’re new to this site, I can’t recommend enough using the search engine to find our articles on DARPA, biohacking and mRNA technology.

And here’s the original 2018 article, which makes much more sense once you have the knowledge I pointed at just before.

Washington State Fusion Center accidentally releases records on remote mind control


Written by Curtis Waltman for Muckrock Magazine, April 18, 2018

As part of a request for records on Antifa and white supremacist groups, WSFC inadvertently bundles in “EM effects on human body.zip”

When you send thousands of FOIA requests, you are bound to get some very weird responses from time to time. Recently, we here at MuckRock had one of our most bizarre gets yet – Washington State Fusion Center’s accidental release of records on the effects of remote mind control.

As part of my ongoing project looking at fusion centers’ investigations into Antifa and various white supremacist groups, I filed a request with the WSFC. I got back many standard documents in response, including emails, intelligence briefings and bulletins, reposts from other fusion centers – and then there was one file titled “EM effects on human body.zip.”

Hmmm. What could that be? What does EM stand for and what is it doing to the human body? So I opened it up and took a look:

When I first saw this on the Internet, I wasn’t much impressed either; it seemed orphaned. Now you have the context.

Hell yeah, dude.

EM stands for electromagnetic. What you are looking at here is “psycho-electronic” weapons that purportedly use electromagnetism to do a wide variety of horrible things to people, such as reading or writing your mind, causing intense pain, “rigor mortis,” or most heinous of all, itching.

Now to be clear, the presence of these records (which were not created by the fusion center, and are not government documents) should not be seen as evidence that DHS possesses these devices, or even that such devices actually exist. Which is kind of unfortunate because “microwave hearing” is a pretty cool line of technobabble to say out loud.

You know what’s even cooler? “Remote Brain Mapping.” It is insanely cool to say. Go ahead. Say it. Remote. Brain. Mapping.

Just check the detail on these slides too. The black helicopter shooting off its psychotronic weapons, mapping your brain, broadcasting your thoughts back to some fusion center. I wish their example of “ELF Brain stimulation” was a little clearer though.

It’s difficult to source exactly where these images come from, but it’s obviously not government material. One seems to come from a person named “Supratik Saha,” who is identified as a software engineer, the brain mapping slide has no sourcing, and the image of the body being assaulted by psychotronic weapons is sourced from raven1.net, who apparently didn’t renew their domain.

It’s entirely unclear how this ended up in this release. It could have been meant for another release, it could have been gathered for an upcoming WSFC report, or it could even be from the personal files of an intelligence officer that somehow got mixed up in the release. A call to the WSFC went unreturned as of press time, so until we hear back, their presence remains a mystery.

We’ll keep you updated once we hear back, and you can download the files yourself on the request page. – Muckrock Magazine

I don’t know why it’s so hard for the author to link mind control to the control of security threats, which is in the government’s job description just as it is on our personal agenda; but then again, microwave hearing is “technobabble” to him.

I find it much more startling that REMOTE brain mapping is a thing!




If you’re familiar with our reports, George Church is no stranger to you either. He’s a founding figure for the Human Genome Project, CRISPR and The BRAIN Initiative. But he’s totally not getting the attention he deserves, seeing that he’s just turned our world upside down. Not by himself, of course.

Remember when Fauci and Big Tech joined efforts to keep us in the dark with regard to the mRNA impact on our genetics and DNA?


We’ve shown that there’s an entire new field of science that does just that: it contradicts what Fauci said, using RNA to reprogram DNA.
But we’ve just reached a deeper level of the rabbit hole, one we didn’t even know was there. It’s been there for a while: since about 2020 minus “three years of stealth operations”. If you read carefully below, it will all make much more sense.

George M. Church biography as per Harvard website

Professor at Harvard & MIT, co-author of 580 papers, 143 patent publications & the book “Regenesis”; developed methods used for the first genome sequence (1994) & million-fold cost reductions since (via fluor-NGS & nanopores), plus barcoding, DNA assembly from chips, genome editing, writing & recoding; co-initiated BRAIN Initiative (2011) & Genome Projects (GP-Read-1984, GP-Write-2016, PGP-2005: world’s open-access personal precision medicine datasets); machine learning for protein engineering, tissue reprogramming, organoids, xeno-transplantation, in situ 3D DNA, RNA, protein imaging.


George Church is Professor of Genetics at Harvard Medical School and Director of PersonalGenomes.org, which provides the world’s only open-access information on human Genomic, Environmental & Trait data (GET). His 1984 Harvard PhD included the first methods for direct genome sequencing, molecular multiplexing & barcoding. These led to the first genome sequence (pathogen, Helicobacter pylori) in 1994. His innovations have contributed to nearly all “next generation” DNA sequencing methods and companies (CGI-BGI, Life, Illumina, Nanopore). This plus his lab’s work on chip-DNA-synthesis, gene editing and stem cell engineering resulted in founding additional application-based companies spanning fields of medical diagnostics (Knome/PierianDx, Alacris, AbVitro/Juno, Genos, Veritas Genetics) & synthetic biology / therapeutics (Joule, Gen9, Editas, Egenesis, enEvolv, WarpDrive). He has also pioneered new privacy, biosafety, ELSI, environmental & biosecurity policies. He is director of an IARPA BRAIN Project and NIH Center for Excellence in Genomic Science. His honors include election to NAS & NAE & Franklin Bower Laureate for Achievement in Science. He has coauthored 537 papers, 156 patent publications & one book (Regenesis).

THIS IS BGI
THIS IS ILLUMINA

PhD students from (* = main training programs for our group):
Harvard University: Biophysics*, BBS*, MCB, ChemBio*, SystemsBio*, Virology
MIT: HST*, Chemistry, EE/CS, Physics, Math
Boston University: Bioinformatics, Biomedical Engineering
Cambridge University, UK: Genetics

Director of Research Centers: DOE-Biotechnologies (1987), NIH-CEGS (2004), PGP (2005), Lipper Center for Computational Genetics (1998), Wyss Inst. Synthetic Biology (2009). Other centers: Regenesis Inst. (2017), SIAT Genome Engineering (2019), Space Genetics (2016), WICGR, Broad Inst. (1990), MIT Media Lab (2014)

Updated: 15-Jan-02021

The BRAIN initiative

He was part of a team of six who, in a 2012 scientific commentary, proposed a Brain Activity Map, later named the BRAIN Initiative (Brain Research through Advancing Innovative Neurotechnologies). They outlined specific experimental techniques that might be used to achieve what they termed a “functional connectome”, as well as new technologies that will have to be developed in the course of the project, including wireless, minimally invasive methods to detect and manipulate neuronal activity, either utilizing microelectronics or synthetic biology. In one such proposed method, enzymatically produced DNA would serve as a “ticker tape record” of neuronal activity. (Wikipedia)

SEE THE NAZI ORIGINS OF WYSS HERE

Wyss Institute Will Lead IARPA-Funded Brain Mapping Consortium

January 26, 2016

(BOSTON) — The Wyss Institute for Biologically Inspired Engineering at Harvard University today announced a cross-institutional consortium to map the brain’s neural circuits with unprecedented fidelity. The consortium is made possible by a $21 million contract from the Intelligence Advanced Research Projects Activity (IARPA) and aims to discover the brain’s learning rules and synaptic ‘circuit design’, further helping to advance neurally-derived machine learning algorithms.

The consortium will leverage the Wyss Institute’s FISSEQ (fluorescent in-situ sequencing) method to push forward neuronal connectomics, the science of identifying the neuronal cells that work together to bring about specific brain functions. FISSEQ was developed in 2014 by the Wyss Core Faculty member George Church and colleagues and, unlike traditional sequencing technologies, it provides a method to pinpoint the precise locations of specific RNA molecules in intact tissue. The consortium will harness this FISSEQ capability to accurately trace the complete set of neuronal cells and their connecting processes in intact brain tissue over long distances, which is currently difficult to do with other methods.

Awarded a competitive IARPA MICrONS contract, the consortium will further the overall goals of President Obama’s BRAIN initiative, which aims to improve the understanding of the human mind and uncover new ways to treat neuropathological disorders like Alzheimer’s disease, schizophrenia, autism and epilepsy. The consortium’s work will fundamentally innovate the technological framework used to decipher the principal circuits neurons use to communicate and fulfill specific brain functions. The learnings can be applied to enhance artificial intelligence in different areas of machine learning such as fraud detection, pattern and image recognition, and self-driving car decision making.

See how the Wyss-developed FISSEQ technology is able to capture the location of individual RNA molecules within cells, which will allow the reconstruction of neuronal networks in the 3-dimensional space of intact brain tissue. Credit: Wyss Institute at Harvard University

“Historically, the mapping of neuronal paths and circuits in the brain has required brain tissue to be sectioned and visualized by electron microscopy. Complete neurons and circuits are then reconstructed by aligning the individual electron microscope images; this process is costly and inaccurate due to the use of only one color (grey),” said Church, who is the Principal Investigator for the IARPA MICrONS consortium. “We are taking an entirely new approach to neuronal connectomics (immensely colorful barcodes) that should overcome this obstacle; and by integrating molecular and physiological information we are looking to render a high-definition map of neuronal circuits dedicated first to specific sensations, and in the future to behaviors and cognitive tasks.”

Church is Professor of Genetics at Harvard Medical School, and Professor of Health Sciences and Technology at Harvard and MIT.

To map neural connections, the consortium will genetically engineer mice so that each neuron is barcoded throughout its entire structure with a unique RNA sequence, a technique called BOINC (Barcoding of Individual Neuronal Connections) developed by Anthony Zador at Cold Spring Harbor Laboratory. Thus a complete map representing the precise location, shape and connections of all neurons can be generated.

The key to visualizing this complex map will be FISSEQ, which is able to sequence the total complement of barcodes and pinpoint their exact locations using a super-resolution microscope. Importantly, since FISSEQ analysis can be applied to intact brain tissue, the error-prone brain-sectioning procedure that is part of common mapping studies can be avoided and long neuronal processes can be more accurately traced in larger numbers and at a faster pace.
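As a rough illustration of the barcoding logic described above, here is a minimal sketch (not the actual BOINC or FISSEQ pipeline) in which each simulated neuron gets a unique random RNA barcode and sequenced barcode reads are mapped back to their neuron of origin. The barcode length, function names and error handling are illustrative assumptions only.

```python
import random

BASES = "ACGU"

def make_barcodes(n_neurons, length=30, seed=0):
    """Assign each neuron ID a unique random RNA barcode (toy model)."""
    rng = random.Random(seed)
    barcodes, seen = {}, set()
    for neuron_id in range(n_neurons):
        bc = "".join(rng.choice(BASES) for _ in range(length))
        while bc in seen:  # regenerate on the (very unlikely) collision
            bc = "".join(rng.choice(BASES) for _ in range(length))
        seen.add(bc)
        barcodes[neuron_id] = bc
    return barcodes

def assign_reads(reads, barcodes):
    """Map each sequenced barcode read back to its neuron of origin (or None)."""
    lookup = {bc: nid for nid, bc in barcodes.items()}
    return {read: lookup.get(read) for read in reads}

if __name__ == "__main__":
    codes = make_barcodes(5)
    sample_reads = [codes[2], codes[4], "A" * 30]  # last read matches no neuron
    print(assign_reads(sample_reads, codes))
```

In the real method the readout happens in place, so each barcode also carries a physical location in the tissue; the sketch only captures the identity-lookup step.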

In addition, the scientists will provide the barcoded mice with a sensory stimulus, such as a flash of light, to highlight and glean the circuits corresponding to that stimulus within the much more complex neuronal map. An improved understanding of how neuronal circuits are composed and how they function over longer distances will ultimately allow the team to build new models for machine learning.

The multi-disciplinary consortium spans 6 institutions. In addition to Church, the Wyss Institute’s effort will be led by Samuel Inverso, Ph.D., who is a Staff Software Engineer and Co-investigator of the project. Complementing the Wyss team are co-Principal Investigators Anthony Zador, Ph.D., Alexei Koulakov, Ph.D., and Jay Lee, Ph.D., at Cold Spring Harbor Laboratory. Adam Marblestone, Ph.D., and Liam Paninski, Ph.D., are co-Investigator at MIT and co-Principal Investigator at Columbia University, respectively. The Harvard-led consortium is partnering with another MICrONS team led by Tai Sing Lee, Ph.D., of Carnegie Mellon University as Principal Investigator under a separate multi-million contract, with Sandra Kuhlman, Ph.D., of Carnegie Mellon University and Alan Yuille, Ph.D., of Johns Hopkins University as co-Principal Investigators, to develop computational models of the neural circuits and a new generation of machine learning algorithms by studying the behaviors of a large population of neurons in behaving animals, as well as the circuitry of these neurons revealed by the innovative methods developed by the consortium.

“It is very exciting to see how technology developed at the Wyss Institute is now becoming instrumental in showing how specific brain functions are wired into the neuronal architecture. The methodology implemented by this research can change the trajectory of brain mapping worldwide,” said Wyss Institute Founding Director Donald Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and the Vascular Biology Program at Boston Children’s Hospital and Professor of Bioengineering at the Harvard John A. Paulson School of Engineering and Applied Sciences. – WYSS Institute

IARPA is CIA’s DARPA.
DARPA IS RUN BY THE PENTAGON AND IARPA BY THE CIA.
IARPA IS EVEN MORE SECRETIVE, DARING AND SOCIOPATHIC.

Machine Intelligence from Cortical Networks (MICrONS)

Intelligence Advanced Research Projects Activity (IARPA)

Brain Research through Advancing Innovative Neurotechnologies. (BRAIN)

Background
The science behind Obama’s BRAIN project. (BrainFacts, 15Apr-2013 | Jean-François Gariépy)
Wyss Institute Will Lead IARPA-Funded Brain Mapping Consortium (Wyss, 26-Jan-2016 |)
Project Aims to Reverse-engineer Brain Algorithms, Make Computers Learn Like Humans (Scientific Computing, 4-Feb-2016 | Byron Spice)
The U.S. Government Launches a $100-Million “Apollo Project of the Brain” (Scientific American, 8-Mar-2016 | Jordana Cepelewicz)

Grant Proposal
Tasks 2 & 3 PDF Harvard, Wyss, CSHL, MIT.
Task 1. CMU.


Molecular Tickertape
Related Projects:

Full Rosetta brains in situ
A. Activity (MICrONS = Ca imaging) (Alternative=Tickertape, see figure to right)
B. Behavior (MICrONS & Alt = traditional video)
C. Connectome (MICrONS & Alt = BOINC via Cas9-barcode)
D. Developmental Lineage (via Cas9-barcode)
E. Expression (RNA & Protein via FISSEQ)

Building brain components, circuits and organoids.
Busskamp V, Lewis NE, Guye P, Ng AHM, Shipman S, Byrne SS, Sanjana NE, Li Y, Weiss R, Church GM (2014)
Rapid neurogenesis through transcriptional activation in human stem cells. Molecular Systems Biology MSB 10:760:1-21

SOURCE

Flagship Pioneering’s Scientists Invent a New Category of Genome Engineering Technology: Gene Writing

Tessera Therapeutics emerges from three years of stealth operations to pioneer Gene Writing™ as a new genome engineering technology and category of genetic medicine

(PRNewsfoto/Flagship Pioneering)

NEWS PROVIDED BY Flagship Pioneering 

Jul 07, 2020, 08:00 ET


CAMBRIDGE, Mass., July 7, 2020 /PRNewswire/ — Flagship Pioneering today announced the unveiling of Tessera Therapeutics, Inc., a new company with the mission of curing disease by writing in the code of life. Tessera is pioneering Gene Writing™, a new biotechnology that writes therapeutic messages into the genome to treat diseases at their source.

Tessera’s Gene Writing platform is a potentially revolutionary breakthrough for genetic medicine that addresses key limitations of gene therapy and gene editing. Gene Writing technology can alter the genome by efficiently inserting genes and exons (parts of genes), introducing small insertions and deletions, or changing single or multiple DNA base pairs. The technology could enable cures for diseases that arise from errors in the genome, including monogenic disorders. It could also allow precise gene regulation in other diseases such as neurodegenerative diseases, autoimmune disorders, and metabolic diseases.

“While profound advancements in genetic medicine over the last two decades had therapeutic promise for many previously untreatable diseases, the intrinsic properties of existing gene therapy and editing have significant shortcomings that limit their benefits to patients,” says Noubar Afeyan, Ph.D., founder and CEO of Flagship Pioneering and Chairman of Tessera Therapeutics. “Our scientists have invented a new technology, called Gene Writing, that has the ability to write therapeutic messages into the genomes of somatic cells. We created Tessera to pioneer its applications for medicine. However, the breakthrough is broad and could be applied to many different genomes from humans to plants to microorganisms.”

A New Era of Genetic Medicine

Geoffrey von Maltzahn, Ph.D., an MIT-trained biological engineer; Jacob Rubens, Ph.D., an MIT-trained synthetic biologist; and other scientists at Flagship Labs, the enterprise’s innovation foundry, co-founded Tessera in 2018 to create a platform that could design, make, and launch Gene Writing medicines. A General Partner at Flagship Pioneering, von Maltzahn has co-founded numerous biotechnology companies, including Sana Biotechnology, Indigo Agriculture, Kaleido Biosciences, Seres Therapeutics, and Axcella Health.

“DNA codes for life. But sometimes our DNA is written improperly, driving an enormous variety of diseases,” says von Maltzahn, Tessera’s Chief Executive Officer. “We started Tessera Therapeutics with a simple question: ‘What if Nature evolved a better solution than CRISPR for inserting curative therapeutic messages into the genome?’ It turns out that engineered and synthetic mobile genetic elements offer the potential to go beyond the limitations of gene editing technologies and allow Gene Writing. Our outstanding team of scientists is focused on bringing the vast promise of this new technology category to patients.”

Mobile genetic elements, the inspiration for Gene Writing, are evolution’s greatest genomic architects. The first mobile genetic element was discovered by Barbara McClintock, who won the 1983 Nobel Prize for revealing the mobile nature of genes. Mobile genetic elements code for the machinery to move or copy themselves into a new location in the genome, and they have been selected over billions of years to autonomously and efficiently “write” their DNA into new genomic sites. Today, mobile genetic elements are among the most abundant and ubiquitous genes in nature.

Over the past two years, Tessera has been mining genomes to discover novel mobile genetic elements and engineering them to create Gene Writing technology.

Tessera’s Gene Writers write therapeutic messages into the genome using RNA or DNA templates. RNA-based Gene Writing uses an RNA template and Gene Writer protein to either write a new gene into the genome or guide the rewriting of a pre-existing genomic sequence to make a small substitution, insertion, or deletion. DNA-based Gene Writing uses a DNA template to write a new gene into the genome.
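Purely as a conceptual toy (a string model, not Tessera’s Gene Writing chemistry or any real mobile-element mechanism), the sketch below illustrates the general idea of template-guided insertion: locate a target site in a genome sequence and write a payload immediately after it. Every name and sequence here is a made-up placeholder.

```python
def write_gene(genome, target_site, payload):
    """Toy model of site-specific insertion: place the payload
    immediately after the first occurrence of the target site."""
    idx = genome.find(target_site)
    if idx == -1:
        raise ValueError("target site not found in genome")
    insert_at = idx + len(target_site)
    return genome[:insert_at] + payload + genome[insert_at:]

if __name__ == "__main__":
    toy_genome = "ATGCCGTTAGCAAGGTTACGT"          # placeholder sequence
    print(write_gene(toy_genome, target_site="TTAGCA", payload="GGGAAACCC"))
```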

By harnessing the biology of mobile genetic elements, Gene Writing holds the potential to overcome the limitations of current genetic medicine approaches by:

  • Efficiently writing small and large alterations to the genome of somatic cells with minimal reliance upon host DNA repair pathways, unlike nuclease-based gene editing technologies.
  • Permanently adding new DNA to dividing cells, unlike AAV-based gene therapy technologies.
  • Writing new DNA sequences into the genome by delivering only RNA.
  • Allowing repeated administration of treatments to patients in order to dose genetic medicines to effect, which is not possible with current gene therapies.

Tessera has licensed Flagship Pioneering’s intellectual property estate, which was begun in 2018 with seminal patent filings supporting both RNA and DNA Gene Writing technologies.

Tessera’s Scientific Advisory Board includes Luigi Naldini, David Schaffer, Andrew Scharenberg, Nancy Craig, George Church, Jonathan Weissman, and John Moran, who collectively have decades of experience in developing gene therapies and gene editing technologies, and also have commercial expertise from 4D, UniQure, Casebia, Cellectis, Magenta, and Editas. Tessera’s Board of Directors includes John Mendlein, Flagship Executive Partner and former CEO of multiple companies; Melissa Moore, Chair of Tessera’s Scientific Advisory Board, Chief Scientific Officer of Moderna, member of the National Academy of Sciences, and founding co-director of the RNA Therapeutics Institute; Geoffrey von Maltzahn; and Noubar Afeyan. The 30-person R&D team at Tessera has deep genetic medicine and startup expertise, including alumni from Editas, Intellia, Beam, Casebia, and Moderna.

About Tessera Therapeutics
Tessera Therapeutics is an early-stage life sciences company pioneering Gene Writing™, a new biotechnology designed to offer scientists and doctors the ability to write and rewrite small and large therapeutic messages into the genome, thereby curing diseases at their source. Gene Writing holds the potential to become a new category in genetic medicine, building upon recent breakthroughs in gene therapy and gene editing, while eliminating important limitations in their reach, utilization and efficacy. Tessera Therapeutics was founded by Flagship Pioneering, a life sciences innovation enterprise that conceives, resources, and develops first-in-class category companies to transform human health and sustainability.

About Flagship Pioneering
Flagship Pioneering conceives, creates, resources, and develops first-in-category life sciences companies to transform human health and sustainability. Since its launch in 2000, the firm has applied a unique hypothesis-driven innovation process to originate and foster more than 100 scientific ventures, resulting in over $34 billion in aggregate value. To date, Flagship is backed by more than $4.4 billion of aggregate capital commitments, of which over $1.9 billion has been deployed toward the founding and growth of its pioneering companies alongside more than $10 billion of follow-on investments from other institutions. The current Flagship ecosystem comprises 41 transformative companies, including Axcella Health (NASDAQ: AXLA), Denali Therapeutics (NASDAQ: DNLI), Evelo Biosciences (NASDAQ: EVLO), Foghorn Therapeutics, Indigo Ag, Kaleido Biosciences (NASDAQ: KLDO), Moderna (NASDAQ: MRNA), Rubius Therapeutics (NASDAQ: RUBY), Sana Biotechnology, Seres Therapeutics (NASDAQ: MCRB), and Syros Pharmaceuticals (NASDAQ: SYRS). – Flagship Pioneering


I’ve shown before that the 5G – Covid – vaccines connection is actually DATA.
Now we learn it’s energy too.
Of course this is not a novel idea, but the announcement made by Georgia Tech and others counts as official confirmation that they are pursuing this concept.

To get the effect above you also need this:

No more device batteries? Researchers at Georgia Institute of Technology’s ATHENA lab discuss an innovative way to tap into the over-capacity of 5G networks, turning them into “a wireless power grid” for powering Internet of Things (IoT) devices. The breakthrough leverages a Rotman lens-based rectifying antenna capable of millimeter-wave harvesting at 28 GHz. The innovation could help eliminate the world’s reliance on batteries for charging devices by providing an alternative using excess 5G capacity. – Georgia Tech, March 2021

We Could Really Have a Wireless Power Grid That Runs on 5G

This tech might make us say goodbye to batteries for good.

POPULAR MECHANICS, APR 30, 2021
A Georgia Tech ATHENA group member holds an inkjet-printed prototype of a mm-wave harvester; the researchers envision a future where IoT devices will be powered wirelessly over 5G networks. COURTESY OF CHRISTOPHER MOORE / GEORGIA TECH

  • Researchers at Georgia Tech have come up with a concept for a wireless power grid that runs on 5G’s mm-wave frequencies.
  • Because 5G base stations beam data through densely packed electromagnetic waves, the scientists have designed a device to capture that energy.
  • The star of the show is a specialized Rotman lens that can collect 5G’s electromagnetic energy from all directions.

If you’ve ever owned a Tile tracker—a square, white Bluetooth beacon that connects to your phone to help keep tabs on your wallet, keys, or whatever else you’re prone to losing—you’re familiar with low-power Internet-of-Things (IoT) devices.

Just like other small IoT devices, from voice assistants to tiny chemical sensors that can detect gas leaks, Tile trackers require a power source. It’s not realistic to hook these gadgets up to a wall outlet, and having to constantly change batteries is a waste of time that’s ultimately bad for the environment.

But what if you could wirelessly charge those devices with a power source that’s already all around you? Researchers at Georgia Tech have dreamed up this kind of “wireless power grid” with a small device that harvests the electromagnetic energy that 5G base stations routinely emit.

Just like the 3G and 4G cell phone towers that came before, 5G base stations radiate electromagnetic energy. At the moment, we’re only harnessing these precious bands of energy to transfer data (which helps you download your favorite Netflix series at lightning speeds).

With some crafty engineering, it’s possible to use 5G’s waves of energy as a form of wireless power, says Manos Tentzeris, Ph.D., a professor of flexible electronics at Georgia Tech. He leads the university’s ATHENA research group, where his team has fabricated a specialized Rotman lens “rectenna” that makes this energy collection possible.

If the idea takes off, this tiny device—which is really a small, high-tech sticker—can use the wireless power grid to charge up far more devices than just your Tile tracker. Your cell phone providers could start beaming out electricity to power all kinds of small electronics, from delivery drones to tracking tags for pallets in a “smart warehouse.” The possibilities are truly endless.

“If you’re talking about real-world implementation of all of these ambitious projects, such as IoT, smart cities, or digital twins … you need to have wireless sensors everywhere,” Tentzeris tells Pop Mech. “But currently, all of them need to have batteries.”

But Wait, How Does 5G Create Power?


Let’s start out with the basics: 5G technically is energy.

5G can seem like a black box to those of us who aren’t electrical engineers, but the premise hinges on something we can all understand: electromagnetic energy. Consider the visible spectrum, or all of the light you can see. It exists along the larger electromagnetic spectrum, but it’s really just a blip.

In the graphic below, you can see the visible spectrum is just between ultraviolet and infrared light, or between 400 and 700 nanometers. As energy increases along the electromagnetic spectrum, the waves become shorter and shorter—notice gamma rays are far more powerful, and have more densely packed waves than FM radio, for example. Human eyes can’t detect these waves of energy.

The electromagnetic spectrum. IMAGE CREDIT: PRINCIPLES OF STRUCTURAL CHEMISTRY

5G is also invisible and operates at a higher frequency than other communication standards we’re used to, like 3G or 4G. Those networks work at frequencies between about 1 to 6 gigahertz, while experts say 5G sits closer to the band between 24 and 90 gigahertz.

Because 5G waves function at a higher frequency, they’re more powerful, but also shorter in length. This is the primary reason why new infrastructure (like small 5G cells installed on utility poles) is required for 5G deployment: the waves have different characteristics. Shorter waves, for example, will see more interference from objects like trees and skyscrapers, and even droplets of rain or flakes of snow.
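To put rough numbers on that frequency/wavelength trade-off, the short sketch below applies the standard relation λ = c / f to a representative 4G frequency and to the 5G mm-wave bands mentioned above; the exact bands vary by operator, so the values are only illustrative.

```python
C = 299_792_458  # speed of light, m/s

def wavelength_mm(freq_ghz):
    """Free-space wavelength in millimetres for a frequency given in GHz."""
    return C / (freq_ghz * 1e9) * 1000

for label, f in [("4G (~2 GHz)", 2.0),
                 ("5G mm-wave (28 GHz)", 28.0),
                 ("5G upper mm-wave (90 GHz)", 90.0)]:
    print(f"{label}: wavelength ~{wavelength_mm(f):.1f} mm")
```

A 2 GHz signal has a wavelength of roughly 15 cm, while 28 GHz comes out near 11 mm, which is why these bands are called millimeter waves and why they are blocked far more easily by obstacles.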

But don’t think of a city’s constellation of 5G base stations as wasteful. Old standards, like 3G and 4G, are known for indiscriminately emitting power from massive service towers in all directions, beaming significant amounts of untapped energy. 5G base stations are much more efficient, says Jimmy Hester, Ph.D., a Georgia Tech alum who serves as senior lab advisor to the ATHENA group.

“Because they operate at high frequencies, [5G base stations] are much better able to focalize [power]. So there’s less waste in a sense,” Hester tells Pop Mech. “What we’re talking about is more of an intentional energization of the devices, themselves, by focalizing the beam towards the device in order to turn it on and power it.”

A ‘Tarantula’ Lens Takes Shape

The Rotman lens, pictured at the far right, can collect energy from multiple directions. IMAGE COURTESY OF GEORGIA TECH’S ATHENA GROUP

There’s a drawback to this efficient focalization: 5G base stations transmit energy in a limited field of view. Think of it like a beam of energy moving in one direction, rather than a circle of energy emanating from a tower. The researchers call it a “pencil beam.” How could a small device precisely snatch up energy from all of these scattered base stations, especially when you can’t see the direction in which the waves are traveling?

Enter the Rotman lens, the key technology behind the team’s breakthrough energy-harvesting device. You can see Rotman lenses at work in military applications, like radar surveillance systems meant to identify targets in all directions without having to actually move the antenna. This isn’t the prototypical lens you’re used to seeing in a pair of glasses or in a microscope. It’s a flexible lens with metal backing, the team explains in a new research paper published in Scientific Reports.

“THE LENS IS LIKE A TARANTULA…[IT] CAN LOOK IN SIX DIFFERENT DIRECTIONS.”

“The same way the lens in your camera collects all of the [light] waves from any direction, and combines it to one point…to create an image, that’s exactly how [this] lens works,” Aline Eid, a Ph.D. student and senior researcher at the ATHENA lab, tells Pop Mech. “The lens is like a tarantula … because a tarantula has six eyes, and our system can also look in six different directions.”

The Rotman lens increases the energy collecting device’s field of view from the “pencil beam” of about 20 degrees to more than 120 degrees, Eid says, making it easier to collect millimeter-wave energy in the 28-gigahertz band. So even if you slapped the sticker onto a moving drone, you could still reliably collect energy from 5G base stations all over a city.
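For a sense of what widening the field of view from roughly 20 degrees to more than 120 degrees buys, this small sketch computes the solid angle of a cone, Ω = 2π(1 − cos θ), for each full beamwidth. It is a purely geometric illustration, not a model of the published antenna pattern.

```python
import math

def cone_solid_angle(full_angle_deg):
    """Solid angle (steradians) of a cone with the given full opening angle."""
    half = math.radians(full_angle_deg / 2)
    return 2 * math.pi * (1 - math.cos(half))

narrow = cone_solid_angle(20)   # "pencil beam"
wide = cone_solid_angle(120)    # lens-assisted field of view
print(f"20-degree beam:  {narrow:.3f} sr ({narrow / (4 * math.pi):.2%} of a full sphere)")
print(f"120-degree view: {wide:.3f} sr ({wide / (4 * math.pi):.2%} of a full sphere)")
print(f"coverage ratio: ~{wide / narrow:.0f}x")
```

On this crude measure the wider field of view covers roughly 30 times more of the sky, which is the point of making the harvester direction agnostic.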

“If you stick these devices on a window, or if you stick these devices on a light pole, or in the middle of an orchard, you’re not going to know the map of the strongest-power base stations,” Tentzeris explains. “We had to make our harvesting devices direction agnostic.”

Your Cell Phone Plan, Reimagined

Researchers at Georgia Tech hold up their Rotman lens rectenna. COURTESY OF CHRISTOPHER MOORE / GEORGIA TECH

Tentzeris says he and his colleagues are looking for funding and eager to work with telecom companies. It makes sense: these companies could integrate the rectenna stickers around cities to augment the 5G networks they’re already building out. The end result could be a sort of new-age cell phone plan.

“In the beginning of the 2000s, companies moved from voice to data. Now, using this technology, they can add power to data/communication as well,” Tentzeris says.

Right now, the rectenna stickers can’t collect a huge amount of power—just about 6 microwatts of electricity, or enough to power some small IoT devices, from 180 meters away. But in lab tests, the device is still able to gather about 21 times more energy than similar devices in development.
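The roughly 6 microwatts at 180 meters figure can be loosely sanity-checked with the standard Friis free-space equation. The transmit EIRP and harvester gain below are assumed placeholders (the article does not state them), so treat the output as an order-of-magnitude illustration only.

```python
import math

def friis_received_power_w(eirp_w, gr, freq_hz, distance_m):
    """Friis free-space link budget: Pr = EIRP * Gr * (lambda / (4*pi*d))**2."""
    wavelength = 299_792_458 / freq_hz
    return eirp_w * gr * (wavelength / (4 * math.pi * distance_m)) ** 2

eirp = 10 ** (65 / 10) / 1000   # assumed ~65 dBm base-station EIRP, converted to watts
gr = 10 ** (19 / 10)            # assumed ~19 dBi gain for the lens-plus-rectenna sticker
pr = friis_received_power_w(eirp, gr, freq_hz=28e9, distance_m=180)
print(f"received RF power: {pr * 1e6:.1f} microwatts (before rectifier losses)")
```

With those assumed numbers the link budget lands in the single-microwatt range, which is at least consistent with the figure quoted above.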

Plus, accessibility is on the team’s side, since the system is fully printable. Tentzeris says it only costs a few cents to produce one unit through additive manufacturing. With that in mind, he says it’s possible to embed the rectenna sticker into a wearable or even stitch it into clothing.

“Scalability was very important, you’re talking about billions of devices,” Tentzeris says. “You could have a great prototype working in the lab, but when somebody asks, ‘Can everybody use it?’ you need to be able to say yes.” – POPULAR MECHANICS 2021

This is antiquated stuff by 2021 standards, but it gives you an idea. Initially, much of the nanotech was powered by the body’s own electricity, so it had very limited capabilities. 5G could power true robots.

ATHENA (Agile Technologies for High-performance Electromagnetic Novel Applications)

The ATHENA (Agile Technologies for High-performance Electromagnetic Novel Applications) group at Georgia Tech, led by Dr. Manos Tentzeris, explores advances and development of novel technologies for electromagnetic, wireless, RF and mm-wave applications in the telecom, defense, space, automotive and sensing areas.

This Manos guy is all ANTENNAS

In detail, the research activities of the 15-member group include Highly Integrated 3D RF Front-Ends for Convergent (Telecommunication, Computing and Entertainment) Applications, 3D Multilayer Packaging for RF and Wireless modules, Microwave MEMs, SOP-integrated antennas (ultrawideband, multiband, ultracompact) and antenna arrays using ceramic and conformal organic materials and Adaptive Numerical Electromagnetics (FDTD, MultiResolution Algorithms).

The group includes the RFID/Sensors subgroup, which focuses on the development of paper-based RFIDs and RFID-enabled “rugged” sensors with printed batteries and power-scavenging devices operating in a variety of frequency bands [13.56 MHz-60 GHz]. In addition, members of the group deal with Bio/RF applications (e.g. breast tumor detection), micromachining (e.g. elevated patch antennas) and the development of novel electromagnetic simulator technologies and their applications to the design and optimization of modern RF/Microwave systems.

The numerical activity of the group primarily includes the finite-difference time-domain (FDTD) and multiresolution time-domain (MRTD) simulation techniques. It also covers hybrid numerical simulators capable of modeling multiple physical effects, such as electromagnetics and mechanical motion in MEMS devices and the combined effect of thermal, semiconductor electron transport, and electromagnetics for RF modules containing solid state devices.

The group maintains a 32 processor Linux Beowulf cluster to run its optimized parallel electromagnetic codes. In addition, the group uses these codes to develop novel microwave devices and ultracompact multiband antennas in a number of substrates and utilizes multilayer technology to miniaturize the size and maximize performance. Examples of target applications include cellular telephony (3G/4G), WiFi, WiMAX, Zigbee and Bluetooth, RFID ISO/EPC_Gen2, LMDS, radar, space applications, millimeter-wave sensors and surveillance devices and emerging standards for frequencies from 800MHz to 100GHz.

The activities are sponsored by NSF, NASA, DARPA and a variety of US and international corporations. – ATHENA

SOURCE

MIT can charge implants with external wireless power from 125 feet away

By Digital Trends — Posted on June 6, 2018

Smart implants designed for monitoring conditions inside the body, delivering drug doses, or otherwise treating diseases are clearly the future of medicine. But, just like a satellite is a useless hunk of metal in space without the right communication channels, it’s important that we can talk to these implants. Such communication is essential, regardless of whether we want to relay information and power to these devices or receive data in return.

Fortunately, researchers from Massachusetts Institute of Technology (MIT) and Brigham and Women’s Hospital may have found a way to help. Scientists at these institutes have developed a new method to power and communicate with implants deep inside the human body.

“IVN (in-vivo networking) is a new system that can wirelessly power up and communicate with tiny devices implanted or injected in deep tissues,” Fadel Adib, an assistant professor in MIT’s Media Lab, told Digital Trends. “The implants are powered by radio frequency waves, which are safe for humans. In tests in animals, we showed that the waves can power devices located 10 centimeters deep in tissue, from a distance of one meter.”

These same demonstrations using pigs showed that it is possible to extend this one-meter range up to 38 meters (125 feet), provided that the sensors are located very close to the skin’s surface. These sensors can be extremely small, due to their lack of an onboard battery. This is different from current implants, such as pacemakers, which have to power themselves since external power sources are not yet available. For their demo, the scientists used a prototype sensor approximately the size of a single grain of rice. This could be further shrunk down in the future, they said.

“The incorporation of [this] system in ingestible or implantable devices could facilitate the delivery of drugs in different areas of the gastrointestinal tract,” Giovanni Traverso, an assistant professor at Brigham and Women’s Hospital and Harvard Medical School, told us. “Moreover, it could aid in sensing of a range of signals for diagnosis, and communicating those externally to facilitate the clinical management of chronic diseases.”

The IVN system is due to be shown off at the Association for Computing Machinery Special Interest Group on Data Communication (SIGCOMM) conference in August.

More info : https://www.media.mit.edu/projects/ivn-in-vivo-networking/overview/

Click to access IVN-paper.pdf

Buh-bye, Human race, you’ve just been assimilated by the Borg!


It’s a bit too late, but you can start freaking out

Initially I didn’t pay much attention to these reports, because the first ones were pretty vague and seemed unsubstantiated. They kind of were.
But then they started to become more and more detailed, coherent and very specific. My own research on #biohacking started to intersect with them more and more often, to the point where today they almost coincide.

To better understand where I’m coming from, your journey needs to start here:

Yes, they CAN vaccinate us through nasal test swabs AND target the brain (Biohacking P.1)

and here:

OBAMA, DARPA, GSK AND ROCKEFELLER’S $4.5B B.R.A.I.N. INITIATIVE – BETTER SIT WHEN YOU READ

After you read these, it’s much easier to dive into these new findings:

SOURCE
SOURCE
“cross the blood-brain barrier” as in “Yes, they CAN vaccinate us through nasal test swabs AND target the brain”

Profusa, Inc. Awarded $7.5M DARPA Grant to Develop Tissue-integrated Biosensors for Continuous Monitoring of Multiple Body Chemistries


NEWS PROVIDED BY Profusa, Inc. 

Jul 12, 2016, 08:30 ET


SOUTH SAN FRANCISCO, Calif., July 12, 2016 /PRNewswire/ — Profusa, Inc., a leading developer of tissue-integrated biosensors, today announced that it was awarded a $7.5 million grant from the Defense Advanced Research Projects Agency (DARPA) and the U.S. Army Research Office (ARO) to develop implantable biosensors for the simultaneous, continuous monitoring of multiple body chemistries. Aimed at providing real-time monitoring of a combat soldier’s health status to improve mission efficiency, the award supports further development of the company’s biosensor technology for real-time detection of the body’s chemical constituents. DARPA and ARO are agencies of the U.S. Department of Defense focused on developing emerging technologies for use by the military.

SOURCE

“Profusa’s vision is to replace a point-in-time chemistry panel that measures multiple biomarkers, such as oxygen, glucose, lactate, urea, and ions with a biosensor that provides a continuous stream of wireless data,” said Ben Hwang, Ph.D., Profusa’s chairman and chief executive officer. “DARPA’s mission is to make pivotal investments in breakthrough technologies for national security. We are gratified to be awarded this grant to accelerate the development of our novel tissue-integrating sensors for application to soldier health and peak performance.”

Tissue-integrating Biosensors for Multiple Biomarkers
Supported by DARPA, ARO and the National Institutes of Health, Profusa’s technology and unique bioengineering approach overcomes the largest hurdle in long-term use of biosensors in the body: the foreign body response. Placed just under the skin with a specially designed injector, each tiny biosensor is a flexible fiber, 2 mm to 5 mm long and 200-500 microns in diameter. Rather than being isolated from the body, Profusa’s biosensors work fully integrated within the body’s tissue — without any metal device or electronics — overcoming the effects of the foreign body response for more than one year.

Each biosensor is comprised of a bioengineered “smart hydrogel” (similar to contact lens material) forming a porous, tissue-integrating scaffold that induces capillary and cellular in-growth from surrounding tissue. A unique property of the smart gel is its ability to luminesce upon exposure to light in proportion to the concentration of a chemical such as oxygen, glucose or other biomarker.
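Oxygen-sensitive luminophores of this general kind are often described by the textbook Stern-Volmer relation, I0/I = 1 + Ksv*[O2], where more oxygen quenches more of the emitted light. The sketch below simply inverts that relation to recover a concentration from a measured intensity; the constants and function name are placeholders and this is a generic model, not Profusa’s actual calibration.

```python
def oxygen_from_luminescence(i_measured, i_zero, k_sv):
    """Invert the Stern-Volmer relation I0 / I = 1 + Ksv * [O2]
    to estimate an oxygen level from a measured luminescence intensity."""
    if i_measured <= 0:
        raise ValueError("measured intensity must be positive")
    return (i_zero / i_measured - 1) / k_sv

I0 = 1000.0   # placeholder: intensity with no oxygen present
KSV = 0.02    # placeholder: quenching constant, per mmHg of oxygen
print(oxygen_from_luminescence(i_measured=600.0, i_zero=I0, k_sv=KSV))  # ~33.3 mmHg
```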

“Long-lasting, implantable biosensors that provide continuous measurement of multiple body chemistries will enable monitoring of a soldier’s metabolic and dehydration status, ion panels, blood gases, and other key physiological biomarkers,” said Natalie Wisniewski, Ph.D., the principal investigator leading the grant work and Profusa’s co-founder and chief technology officer. “Our ongoing program with DARPA builds on Profusa’s tissue-integrating sensor that overcomes the foreign body response and serves as a technology platform for the detection of multiple analytes.”

Lumee Oxygen Sensing System™
Profusa’s first medical product, the Lumee Oxygen Sensing System, is a single-biomarker sensor designed to measure oxygen. In contrast to blood oxygen reported by other devices, the system incorporates the only technology that can monitor local tissue oxygen. When applied to the treatment of peripheral artery disease (PAD), it prompts the clinician to provide therapeutic action to ensure tissue oxygen levels persist throughout the treatment and healing process.

Pending CE Mark, the Lumee system is slated to be available in Europe in 2016 for use by vascular surgeons, wound-healing specialists and other licensed healthcare providers who may benefit in monitoring local tissue oxygen. PAD affects 202 million people worldwide, 27 million of whom live in Europe and North America, with an annual economic burden of more than $74 billion in the U.S. alone.

Profusa, Inc.
Profusa, Inc., based in South San Francisco, Calif., is leading the development of novel tissue-integrated sensors that empower an individual with the ability to monitor their unique body chemistry in unprecedented ways to transform the management of personal health and disease. Overcoming the body’s response to foreign material for long-term use, its technology promises to be the foundational platform of real-time biochemical detection through the development of tiny bioengineered sensors that become one with the body to detect and continuously transmit actionable, medical-grade data for personal and medical use. See http://www.profusa.com for more information.

The research is based upon work supported by DARPA, the Biological Technologies Office (BTO), and ARO grant [W911NF-16-1-0341]. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA, BTO, the ARO, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon.

SOURCE Profusa, Inc.

Related Links

http://www.profusa.com

SOURCE
SOURCE

I SAVED THE BEST FOR LAST

SOURCE
and then you wonder why…

So I can’t say with 100% certitude that what DARPA did and what people found are one and the same thing, but this is as close to 100% as you can get, and more than enough reason to freak out.

I will keep adding resources and details here, but my point is made.



DNA harvesting, mRNA technologies, mind-reading and more – this was the official race start signal at the Transhumanist Olympics, all the way back in 2013

The vision for the BRAIN Initiative is to combine these areas of research into a coherent, integrated science of cells, circuits, brain and behavior.

  • Generate a census of brain cell types
  • Create structural maps of the brain
  • Develop new, large-scale neural network recording capabilities
  • Develop a suite of tools for neural circuit manipulation
  • Link neuronal activity to behavior
  • Integrate theory, modeling, statistics and computation with neuroscience experiments
  • Delineate mechanisms underlying human brain imaging technologies
  • Create mechanisms to enable collection of human data for scientific research
  • Disseminate knowledge and training

Source: NIH

You mean THIS DARPA?
Yeah, this one…

How The BRAIN Initiative® Works

Given the ambitious scope of this pioneering endeavor, it was vital that planning be informed by a wide range of expertise and experience. Therefore, NIH established a high level working group of the Advisory Committee to the NIH Director (ACD) to help shape this new initiative.

This working group, co-chaired by Dr. Cornelia “Cori” Bargmann (The Rockefeller University) and Dr. William Newsome (Stanford University) sought broad input from the scientific community, patient advocates, and the general public. Their report, BRAIN 2025: A Scientific Vision, released in June 2014 and enthusiastically endorsed by the ACD, articulated the scientific goals of The BRAIN Initiative® and developed a multi-year scientific plan for achieving these goals, including timetables, milestones, and cost estimates.

Of course, a goal this audacious will require ideas from the best scientists and engineers across many diverse disciplines and sectors. Therefore, NIH is working in close collaboration with other government agencies, including the Defense Advanced Research Projects Agency (DARPA), National Science Foundation (NSF), the U.S. Food and Drug Administration (FDA) and Intelligence Advanced Research Projects Activity (IARPA). Private partners are also committed to ensuring success through investment in The BRAIN Initiative®.

Five years ago a project such as this would have been considered impossible. Five years from now will be too late. While the goals are profoundly ambitious, the time is right to inspire a new generation of neuroscientists to undertake the most groundbreaking approach ever contemplated to understanding how the brain works, and how disease occurs.
Source: NIH

The White House Office of the Press Secretary, For Immediate Release, April 02, 2013

Remarks by the President on the BRAIN Initiative and American Innovation

East Room  10:04 A.M. EDT 

THE PRESIDENT:  

Thank you so much.  (Applause.)  

Thank you, everybody.  Please have a seat.  Well, first of all, let me thank Dr. Collins not just for the introduction but for his incredible leadership at NIH.  Those of you who know Francis also know that he’s quite a gifted singer and musician.  So I was asking whether he was going to be willing to sing the introduction — (laughter) — and he declined. But his leadership has been extraordinary.  And I’m glad I’ve been promoted Scientist-in-Chief.  (Laughter.)

 Given my grades in physics, I’m not sure it’s deserving.  But I hold science in proper esteem, so maybe that gives me a little credit. Today I’ve invited some of the smartest people in the country, some of the most imaginative and effective researchers in the country — some very smart people to talk about the challenge that I issued in my State of the Union address:  to grow our economy, to create new jobs, to reignite a rising, thriving middle class by investing in one of our core strengths, and that’s American innovation.  Ideas are what power our economy.  It’s what sets us apart.  It’s what America has been all about.  We have been a nation of dreamers and risk-takers; people who see what nobody else sees sooner than anybody else sees it.  We do innovation better than anybody else — and that makes our economy stronger.  

When we invest in the best ideas before anybody else does, our businesses and our workers can make the best products and deliver the best services before anybody else.   And because of that incredible dynamism, we don’t just attract the best scientists or the best entrepreneurs — we also continually invest in their success.  We support labs and universities to help them learn and explore.  And we fund grants to help them turn a dream into a reality.  And we have a patent system to protect their inventions.  And we offer loans to help them turn those inventions into successful businesses.   

And the investments don’t always pay off.  But when they do, they change our lives in ways that we could never have imagined.  Computer chips and GPS technology, the Internet — all these things grew out of government investments in basic research.  And sometimes, in fact, some of the best products and services spin off completely from unintended research that nobody expected to have certain applications.  

Businesses then used that technology to create countless new jobs. 

So the founders of Google got their early support from the National Science Foundation.  The Apollo project that put a man on the moon also gave us eventually CAT scans.  And every dollar we spent to map the human genome has returned $140 to our economy — $1 of investment, $140 in return.

 Dr. Collins helped lead that genome effort, and that’s why we thought it was appropriate to have him here to announce the next great American project, and that’s what we’re calling the BRAIN Initiative.   

As humans, we can identify galaxies light years away, we can study particles smaller than an atom.  But we still haven’t unlocked the mystery of the three pounds of matter that sits between our ears.  (Laughter.)  But today, scientists possess the capability to study individual neurons and figure out the main functions of certain areas of the brain.  But a human brain contains almost 100 billion neurons making trillions of connections.  

So Dr. Collins says it’s like listening to the strings section and trying to figure out what the whole orchestra sounds like.  So as a result, we’re still unable to cure diseases like Alzheimer’s or autism, or fully reverse the effects of a stroke.  And the most powerful computer in the world isn’t nearly as intuitive as the one we’re born with. So there is this enormous mystery waiting to be unlocked, and the BRAIN Initiative will change that by giving scientists the tools they need to get a dynamic picture of the brain in action and better understand how we think and how we learn and how we remember.  And that knowledge could be — will be — transformative.   In the budget I will send to Congress next week, I will propose a significant investment by the National Institutes of Health, DARPA, and the National Science Foundation to help get this project off the ground.

 I’m directing my bioethics commission to make sure all of the research is being done in a responsible way.  And we’re also partnering with the private sector, including leading companies and foundations and research institutions, to tap the nation’s brightest minds to help us reach our goal. And of course, none of this will be easy.  If it was, we would already know everything there was about how the brain works, and presumably my life would be simpler here.  (Laughter.)  It could explain all kinds of things that go on in Washington.  (Laughter.)  We could prescribe something.  (Laughter.)  

So it won’t be easy.  But think about what we could do once we do crack this code.  Imagine if no family had to feel helpless watching a loved one disappear behind the mask of Parkinson’s or struggle in the grip of epilepsy.  Imagine if we could reverse traumatic brain injury or PTSD for our veterans who are coming home.  Imagine if someone with a prosthetic limb can now play the piano or throw a baseball as well as anybody else, because the wiring from the brain to that prosthetic is direct and triggered by what’s already happening in the patient’s mind.  What if computers could respond to our thoughts or our language barriers could come tumbling down.  Or if millions of Americans were suddenly finding new jobs in these fields — jobs we haven’t even dreamt up yet — because we chose to invest in this project. That’s the future we’re imagining.  That’s what we’re hoping for.  That’s why the BRAIN Initiative is so absolutely important.  And that’s why it’s so important that we think about basic research generally as a driver of growth and that we replace the across-the-board budget cuts that are threatening to set us back before we even get started.  

A few weeks ago, the directors of some of our national laboratories said that the sequester — these arbitrary, across-the-board cuts that have gone into place — are so severe, so poorly designed that they will hold back a generation of young scientists.  When our leading thinkers wonder if it still makes sense to encourage young people to get involved in science in the first place because they’re not sure whether the research funding and the grants will be there to cultivate an entire new generation of scientists, that’s something we should worry about.  We can’t afford to miss these opportunities while the rest of the world races ahead.  We have to seize them.  I don’t want the next job-creating discoveries to happen in China or India or Germany.  I want them to happen right here, in the United States of America.   And that’s part of what this BRAIN Initiative is about.  That’s why we’re pursuing other “grand challenges” like making solar energy as cheap as coal or making electric vehicles as affordable as the ones that run on gas.  They’re ambitious goals, but they’re achievable.  And we’re encouraging companies and research universities and other organizations to get involved and help us make progress. We have a chance to improve the lives of not just millions, but billions of people on this planet through the research that’s done in this BRAIN Initiative alone.  

But it’s going to require a serious effort, a sustained effort.  And it’s going to require us as a country to embody and embrace that spirit of discovery that is what made America, America. The year before I was born, an American company came out with one of the earliest mini-computers.  It was a revolutionary machine, didn’t require its own air conditioning system.  That was a big deal.  It took only one person to operate, but each computer was eight feet tall, weighed 1,200 pounds, and cost more than $100,000.  And today, most of the people in this room, including the person whose cell phone just rang — (laughter) — have a far more powerful computer in their pocket.  Computers have become so small, so universal, so ubiquitous, most of us can’t imagine life without them — certainly, my kids can’t.   And, as a consequence, millions of Americans work in fields that didn’t exist before their parents were born.  Watson, the computer that won “Jeopardy,” is now being used in hospitals across the country to diagnose diseases like cancer.  That’s how much progress has been made in my lifetime and in many of yours.  That’s how fast we can move when we make the investments.

But we can’t predict what that next big thing will be.  We don’t know what life will be like 20 years from now, or 50 years, or 100 years down the road.  What we do know is if we keep investing in the most prominent, promising solutions to our toughest problems, then things will get better. I don’t want our children or grandchildren to look back on this day and wish we had done more to keep America at the cutting edge.  I want them to look back and be proud that we took some risks, that we seized this opportunity.  That’s what the American story is about.  That’s who we are.  

That’s why this BRAIN Initiative is so important.  And if we keep taking bold steps like the one we’re talking about to learn about the brain, then I’m confident America will continue to lead the world in the next frontiers of human understanding.  And all of you are going to help us get there. 

So I’m very excited about this project.  Francis, let’s get to work.  God bless you and God bless the United States of America.  Thank you.  (Applause.)  

A LITTLE EARLIER, AT DARPA’S

DARPA Fold F(x) Program to Advance Synthetic Biomedical Polymers

by Global Biodefense Staff, January 21, 2014

The Defense Advanced Research Projects Agency (DARPA) is soliciting proposals for advancing “Folded Non-Natural Polymers with Biological Function” under a new Broad Agency Announcement for the Fold F(x) program.

While the biopharmaceutical industry has realized many outstanding protein and oligonucleotide reagents and medicines by screening large biopolymer libraries for desired function, significant technical gaps remain to rapidly address the full suite of existing and anticipated national security threats in DoD medicine (e.g., diagnostics and remediation strategies for chemical/biological warfare agents and infectious disease threats).

The objective of Fold F(x) is to develop processes enabling the rapid synthesis, screening, sequencing and scale-up of folded, non-natural, sequence-defined polymers with expanded functionality. The program will specifically address the development of non-natural affinity reagents that can bind and respond to a selected target, as well as catalytic systems that can either synthesize or degrade a desired target.

While non-natural folding polymers (e.g., foldamers) are known, broad utilization of these systems is currently limited because there is no available approach for rapidly developing and screening large non-natural polymer libraries. Fold F(x) will address this technical gap to create new molecular entities that will become future critical reagents in sensor and diagnostic applications, novel medicine leads against viral and bacterial threats, and new polymeric materials for future material science applications.

DARPA anticipates that successful efforts will include (1) novel synthetic approaches that yield large libraries (>10^9 members) of non-natural sequence-defined polymers; (2) flexible screening strategies that enable the selection of high affinity/specificity binders and high activity/selectivity catalysts from the non-natural libraries; (3) demonstration that the screening approach can rapidly (<4 days) yield affinity reagents or catalysts against targets of interest to the DoD; and (4) demonstration of scalability and transferability to the DoD scientific community.

DARPA seeks proposals that significantly advance the area of non-natural polymer synthesis, screening and sequencing for DoD-relevant threats. Proposals that simply provide evolutionary improvements in state-of-the-art technology will not be considered.

A Proposers’ Day Webinar for the Fold F(x) Program will be held on January 28, 2014. Further details are available under Solicitation Number: DARPA-BAA-14-13. White papers are due by February 6, 2014.

Source: FBO.gov

They deleted this from their website, but not from the Internet

FOLDED NON-NATURAL POLYMERS WITH BIOLOGICAL FUNCTION (FOLD F(X))

Health threats often evolve more quickly than health solutions. Despite ongoing research in the government and the biopharmaceutical industry to identify new therapies, the Department of Defense currently lacks the tools to address the full spectrum of chemical, biological, and disease threats that could impact the readiness of U.S. forces. DARPA created the Folded Non-Natural Polymers with Biological Function program (Fold F(x)) to give DoD medical researchers new tools to develop medicines, sensors, and diagnostics using new libraries of synthetic polymers.

The human body contains natural, folded polymers such as DNA, RNA, and proteins. These are made up of strings of specific biological molecules, or monomers, with the potential for massive variation in sequence, structure, and function. The body’s library of natural polymers is massive, but ultimately limited by the number of naturally present monomers. Through Fold F(x), DARPA is looking to expand the body’s biomolecular arsenal using non-natural, sequence-dictated polymers built from lab-created monomers.

Broad use of folded, non-natural polymers has been limited because no approach yet exists for rapidly developing large libraries of such sequence-dictated polymers. However, recent advances in the theory for predicting folds in polymer structure enable a more targeted search for polymers with specific attributes. Additionally, new, high-throughput analytical chemistry tools may enable researchers to efficiently screen massive subsets of polymers to essentially find the needle in the haystack to confront a given health threat. Finally, recently developed tools for determining polymer structure, function, and in vivo effects can further accelerate the characterization of promising non-natural polymers once they have been identified.

To achieve its objective, Fold F(x) seeks to develop the following capabilities: 1) processes that enable rapid, high-fidelity synthesis of monomers and polymer libraries at scale; 2) automated screening of polymers against a target; and 3) automated sequencing and characterization of successful polymers. The capabilities developed will need to be generalized and extendable so they can be applied to a broad range of potential applications.

If Fold F(x) is successful, synthetic polymers, produced at low cost in libraries containing trillions of combinations, would give scientists vastly more molecules to work with in the search for new health solutions and greatly increase the likelihood that a molecule can be found to combat a given health threat. Synthetic polymers would also offer other benefits over natural polymers including greater lifetime in the blood and less immunogenicity.

LATER…

DOES THIS REMIND YOU OF ANY PARTICULAR IMPLANT: SRI Biosciences DARPA Fold F(X) Synthetic Polymers Contract

by CBRNE CENTRAL STAFF, February 11, 2015, 11:33

SRI Biosciences, a division of SRI International, has been awarded a $10 million contract under a Defense Advanced Research Projects Agency (DARPA) program to reimagine how proteins are constructed and to develop novel medicines and diagnostics as countermeasures to chemical and biological threats.

The new contract is part of DARPA’s Folded Non-Natural Polymers with Biological Function program, known as Fold F(x). The initial goal of the program will be to develop biologically active non-natural polymers that are structurally similar to naturally occurring proteins, but without their limitations, such as sensitivity to heat denaturation or chemical degradation.

To develop the new polymers, SRI is combining its expertise in medicinal chemistry and biopolymer design with a breakthrough approach to screening vast numbers of compounds. The novel polymers are being made from entirely new types of monomer structures based on drug-like scaffolds with high functional group densities.

SRI’s compound screening innovation is based on its proprietary Fiber-Optic Array Scanning Technology (FASTcell™). Originally developed to identify circulating tumor cells in a blood sample, FASTcell can distinguish a single tumor cell among tens of millions of healthy ones in a few minutes.

With DARPA support, SRI is expanding this technology to screen 25 million compounds in just one minute.

“Our goal is to develop a method that can enable rapid, large-scale responses to a bioterrorism threat or an infectious disease epidemic,” said Peter Madrid, Ph.D., program director in SRI Biosciences’ Center for Chemical Biology and co-principal investigator and leader of the chemistry effort of the project. “We are looking for non-natural polymers to detect or neutralize identified chemical or biological threats. Once we find potent molecules, we will be able to produce them at mass scale.”

The overall goal of the Fold F(x) program is to expand on the utility of proteins and DNA, and to overcome their limitations by re-engineering their polymer backbones and side chain diversity—creating new molecules with improved functionality such as stability, potency and catalytic function in environments usually hostile for biopolymers.

The knowledge to design new functional molecules from first principles doesn’t exist yet. The alternative is to synthesize enormous libraries of non-natural polymers and screen for sequences that have a desired action. Finding a single effective compound, such as one that can block a virus, may require screening hundreds of millions of compounds.

“We are taking a full departure from how nature does things to come up with new ways of mimicking protein function in a highly tailored and controlled way,” said Nathan Collins, Ph.D., executive director of SRI Biosciences’ Discovery Sciences Section and principal investigator of SRI’s Fold F(x) project. “Our breakthrough has been to adapt SRI’s FASTcell technology to screen libraries of non-natural polymers. It’s very exciting to be doing such novel research.”

Initially the program will focus on screening massive numbers of non-natural polymers for potential uses against security threats.

As a proof of concept, the team will design, synthesize and screen chemically unique libraries of 100 million non-natural polymers for activity against a variety of agents, including toxins such as ricin and viruses such as the H1N1 strain of influenza.
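
A quick back-of-the-envelope calculation, using only the figures quoted above (25 million compounds per minute and the 100-million-polymer proof-of-concept library; the 500-million case is an extra round number of my own), shows why that screening throughput matters:

```python
# Back-of-the-envelope check of the throughput quoted above: 25 million
# compounds per minute (SRI's figure) against the 100-million-polymer
# proof-of-concept library; the 500-million case is just an extra round number.
RATE_PER_MINUTE = 25_000_000

for library_size in (100_000_000, 500_000_000):
    minutes = library_size / RATE_PER_MINUTE
    print(f"{library_size:,} compounds -> about {minutes:.0f} minutes "
          f"at ~{RATE_PER_MINUTE / 60:,.0f} compounds per second")
# 100,000,000 compounds -> about 4 minutes
# 500,000,000 compounds -> about 20 minutes
```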

As the program evolves it may progress to include a range of possibilities, such as how to synthesize molecules to fold such that they emit light, have enhanced levels of strength or elasticity, or store power.

Sources: SRI International, DARPA

Stargate Project

From Wikipedia, the free encyclopedia

Stargate Project was the 1991 code name for a secret U.S. Army unit established in 1978 at Fort Meade, Maryland, by the Defense Intelligence Agency (DIA) and SRI International (a California contractor) to investigate the potential for psychic phenomena in military and domestic intelligence applications. The Project, and its precursors and sister projects, originally went by various code names—GONDOLA WISH, GRILL FLAME, CENTER LANE, PROJECT CF, SUN STREAK, SCANATE—until 1991 when they were consolidated and rechristened as “Stargate Project”.

Stargate Project work primarily involved remote viewing, the purported ability to psychically “see” events, sites, or information from a great distance.[1] The project was overseen until 1987 by Lt. Frederick Holmes “Skip” Atwater, an aide and “psychic headhunter” to Maj. Gen. Albert Stubblebine, and later president of the Monroe Institute.[2] The unit was small-scale, comprising about 15 to 20 individuals, and was run out of “an old, leaky wooden barracks”.[3]

The Stargate Project was terminated and declassified in 1995 after a CIA report concluded that it was never useful in any intelligence operation. Information provided by the program was vague and included irrelevant and erroneous data, and there was reason to suspect that its project managers had changed the reports so they would fit background cues.[4] The program was featured in the 2004 book and 2009 film, both titled The Men Who Stare at Goats,[5][6][7][8] although neither mentions it by name.

THE LIST OF RESEARCH PROJECTS THEY FUNDED MIGHT BLOW YOUR BRAIN

FULL LIST HERE

THEIR REPORT BELOW SEEMS TO CONFIRM OUR EARLIER REPORT THAT MRNA IS A GATEWAY TO THE BRAIN AND TO BEHAVIOUR

SOURCE

READ: Yes, they CAN vaccinate us through nasal test swabs AND target the brain (Biohacking P.1)

Private Sector Partners

Key private sector partners have made important commitments to support the BRAIN Initiative, including:

  • The Allen Institute for Brain Science:  The Allen Institute, a nonprofit medical research organization, is a leader in large-scale brain research and public sharing of data and tools. In March 2012, the Allen Institute for Brain Science embarked upon a ten-year project to understand the neural code: how brain activity leads to perception, decision making, and ultimately action. The Allen Institute’s expansion, with a $300M investment from philanthropist Paul G. Allen in the first four years, was based on the recent unprecedented advances in technologies for recording the brain’s activity and mapping its interconnections.  More than $60M annually will be spent to support Allen Institute projects related to the BRAIN Initiative.
  • Howard Hughes Medical Institute:  HHMI is the Nation’s largest nongovernmental funder of basic biomedical research and has a long history of supporting basic neuroscience research.  HHMI’s Janelia Farm Research Campus in Virginia was opened in 2006 with the goal of developing new imaging technologies and understanding how information is stored and processed in neural networks. It will spend at least $30 million annually to support projects related to this initiative. 
  • Kavli Foundation:  The Kavli Foundation anticipates supporting activities that are related to this project with approximately $4 million per year over the next ten years.  This figure includes a portion of the expected annual income from the endowments of existing Kavli Institutes and endowment gifts to establish new Kavli Institutes over the coming decade. This figure also includes the Foundation’s continuing commitment to supporting project meetings and selected other activities.
  • Salk Institute for Biological Studies:  The Salk Institute, under its Dynamic Brain Initiative, will dedicate over $28 million to work across traditional boundaries of neuroscience, producing a sophisticated understanding of the brain, from individual genes to neuronal circuits to behavior. To truly understand how the brain operates in both healthy and diseased states, scientists will map out the brain’s neural networks and unravel how they interrelate. To stave off or reverse diseases such as Alzheimer’s and Parkinson’s, scientists will explore the changes that occur in the brain as we age, laying the groundwork for prevention and treatment of age-related neurological diseases.

Source: The White House

Kavli are just Rockefeller proxies and partners

“National Institutes of Health chief Francis Collins says the brain initiative builds on recent advances in attaching electronic implants to brain cells. That was demonstrated last year in dramatic scenes of fully paralyzed patients manipulating robot arms to sip coffee and grasp rubber balls. And through increased computer power, scientists are now better able to collect data from the 86 billion vastly interconnected cells within the 3-pound human brain.”

USA Today

White House pitches brain mapping project

April 2, 2013, 12:00 PM CEST, by Peter Alexander and Alastair Jamieson, NBC News, and Maggie Fox, Senior Writer

President Obama pitched a human brain research initiative on Tuesday that he likened to the Human Genome Project to map all the human DNA, and said it will not only help find cures for diseases such as Alzheimer’s and autism, but create jobs and drive economic growth…

It’s not clear just what the initiative will do. Obama and Collins said they’d appointed a “dream team” of experts to lay out the agenda — they should report back before the end of the summer. They are led by neurobiologists Cori Bargmann of Rockefeller University and William Newsome of Stanford University.

The public-private initiative, with money from groups such as the Howard Hughes Medical Institute and Microsoft co-founder Paul Allen’s brain mapping project, aims to find a way to take pictures of the brain in action in real time.

“We want to understand the brain to know how we reason, how we memorize, how we learn, how we move, how our emotions work. These abilities define us, yet we hardly understand any of it,” said Miyoung Chun, vice president of science programs at The Kavli Foundation, which is taking part in the initiative and which funds basic research in neuroscience and physics.

The project has some big money and some big science to build on. Allen pumped another $300 million into his institute’s brain mapping initiative a year ago, and has published freely available maps of the human and mouse brains. The Howard Hughes Medical Institute built a whole research campus devoted to brain science, called Janelia Farm, in Virginia.

Arati Prabhakar, director of the Defense Advanced Research Projects Agency (DARPA), pointed to a project that allowed a quadriplegic woman to control a robot arm with her thoughts alone.

“There is nothing like a project to inspire people to go to that next level,” Collins told a telephone briefing.

Not everybody is happy about a centralized, administration-led project. Michael Eisen, a biologist at the University of California at Berkeley, said earlier this year that grand projects in biology such as Project ENCODE for DNA analysis were emerging as the “greatest threat” to individual discovery-driven science.

“It’s one thing to fund neuroscience, another to have a centralized 10-year project to ‘solve the brain,'” Eisen wrote in a Twitter update in February.

“It’s great to see the president supporting basic neuroscience research. And the amount of money is enough to seed new initiatives, which is the way to start something,” said neuroscientist Cori Bargmann of The Rockefeller University in New York, BRAIN co-chair.

Who Will Pay for Obama’s Ambitious Brain Project?

By Stephanie Pappas, April 02, 2013, Science Direct

An MRI scan reveals the gross anatomical structure of the human brain. (Image credit: Courtesy FONAR Corporation)

The initial funding for a major new brain research initiative will come largely from the National Institutes of Health and the Defense Advanced Research Projects Agency (DARPA), with contributions from the National Science Foundation and private foundations, officials said today (April 2).

After President Obama announced the launch of the BRAIN Initiative this morning, the directors of the National Institutes of Health (NIH) and DARPA took public questions via the Internet about specific plans for the project and who will pay. The agencies expect about $100 million in 2014 to start the initiative.

BRAIN stands for Brain Research through Advancing Innovative Neurotechnologies. In its planning stages, the project was called the Brain Activity Map, because the goal is to understand how neural networks function. Currently, researchers can detect the activities of single brain cells; they can also measure brain activity on the macro level using technology such as functional magnetic resonance imaging. But the middle level — the actions of hundreds and thousands of neurons working together in circuits — remains largely mysterious.

“This initiative is an idea whose time has come,” NIH director Francis Collins said in the White House Q&A session. He called the human brain the “greatest scientific frontier you could think of.”

Funding the brain map

President Obama announced this morning that the Fiscal Year 2014 budget would include about $100 million in seed funding for the BRAIN Initiative. Collins broke those numbers down: The NIH will provide about $40 million, much of that from the Neuroscience Blueprint, an NIH collaboration with a rolling investment fund for nervous system research. Some NIH discretionary funds will also go toward the project, Collins said.

The National Science Foundation will provide about $20 million in funding, Collins said, and DARPA will contribute about $50 million. Private foundations, including the Howard Hughes Medical Institute, the Salk Institute for Biological Studies and the Kavli Institute, will also provide funds.

DARPA’s interest in the project stems largely from concerns about “wounded warriors,” said director Arati Prabhakar. The agency hopes the BRAIN Initiative will provide answers about how to treat post-traumatic stress disorder, brain injuries and other neurological problems for injured soldiers. The project may also inspire new computing processes as scientists learn how the brain works and use that as inspiration for artificial circuits, Prabhakar said.

Bumps ahead?

Federal funding for research has been flat in recent years, and the federal budget sequester has further squeezed agencies such as the NIH and NSF with 9 percent cuts across the board. The BRAIN Initiative is projected to last more than a decade, with no guarantee the fiscal situation will bounce back. Some neuroscience researchers, including Donald Stein of the Emory School of Medicine, have argued that funding is a “zero-sum game” and that the BRAIN Initiative will take resources from other worthy brain research causes. 

Collins acknowledged the budget challenge.

“One might well ask, ‘Is this the wrong time to be starting something new and innovative?'” he said.

But with the technology needed to measure large neural networks just coming into its own, delaying would be counterproductive, Collins argued.

“If you could see the opportunity for the next big advance … it would be very hard to say we’re going to hunker down for a while and wait until the budget gets better,” he said.

A $4.5 Billion Price Tag for the BRAIN Initiative?

By Emily Underwood, Jun. 5, 2014, 6:00 PM, Science Mag

The price of President Barack Obama’s BRAIN may have just skyrocketed. Last year, the White House unveiled a bold project to map the human brain in action, the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, and commanded several federal agencies to quickly develop plans to make it reality. To kick-start the project, the president allocated about $100 million this year to BRAIN, spread over the National Institutes of Health (NIH), the National Science Foundation, and the Defense Advanced Research Projects Agency.

Now, after more than a year of meetings and deliberations, an NIH-convened working group has fleshed out some of the goals and aspirations of BRAIN and tried to offer a more realistic appraisal of the funding needed for the agency’s share of the project: $4.5 billion over the course of a decade.

Neuroscientist Cornelia Bargmann, of Rockefeller University in New York City, who led the working group, sought to put that cost in perspective at a press conference today, saying it amounted to “about one six-pack of beer for each American over the entire 12 years of the program.”

NIH, which provides $40 million of BRAIN’s current funding, doesn’t have a plan in place for where to get the extra money called for in the new report, NIH Director Francis Collins told reporters. “It won’t be fast, it won’t be easy, and it won’t be cheap,” he says. Regardless, Collins, who commissioned the new report to guide his agency’s role in the initiative, embraced the plan wholeheartedly:

86 billion neurons take note: I’ve accepted a scientific vision for #BRAINI that will transform neuroscience: http://t.co/12xluad54U #NIH

— Francis S. Collins (@NIHDirector) June 5, 2014

The report lays out a 10- to 12-year plan for investing $300 million to $500 million per year to develop new tools to monitor and map brain activity and structure, beginning in fiscal year 2016. It suggests focusing on tool development for the first 5 to 6 years, then ramping up funding as new techniques come online. A key goal is to produce cheaper, more accessible tools that all researchers can use without needing special training, so that the overall cost of doing neuroscience research goes down over time, Bargmann says.
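
A rough arithmetic check ties the numbers in this report together. The only figure below not taken from the article is the approximate US population (about 316 million around 2014), which is my own assumption:

```python
# Rough arithmetic check of the figures in the report above. Only the US
# population (~316 million, circa 2014) is my own assumption; the other
# numbers come straight from the article.
NIH_TOTAL = 4.5e9               # $4.5 billion NIH share over the program
YEARLY_LOW, YEARLY_HIGH = 300e6, 500e6
YEARS_LOW, YEARS_HIGH = 10, 12
US_POPULATION = 316e6           # assumed, not stated in the article

print(f"Yearly plan implies ${YEARLY_LOW * YEARS_LOW / 1e9:.1f}B to "
      f"${YEARLY_HIGH * YEARS_HIGH / 1e9:.1f}B in total")
print(f"$4.5B works out to about ${NIH_TOTAL / US_POPULATION:.2f} per American,")
print(f"or roughly ${NIH_TOTAL / US_POPULATION / 12:.2f} per American per year")
# -> $3.0B to $6.0B total; about $14.24 per American, ~$1.19 per year,
#    which is indeed in 'one six-pack over 12 years' territory.
```

So the $300 million to $500 million annual plan brackets the $4.5 billion figure, and Bargmann's six-pack comparison is in the right ballpark.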

The panel acknowledges the uncertainty of their cost estimate. “While we did not conduct a detailed cost analysis, we considered the scope of the questions to be addressed by the initiative, and the cost of programs that have developed in related areas over recent years. Thus our budget estimates, while provisional, are informed by the costs of real neuroscience at this technological level,” the group writes.

The first round of requests for NIH grant applications already went out last fall, and awardees will be announced in September, according to Collins. Additional opportunities to apply for NIH funding will open up by fall, based on this new, more detailed report, he says. Researchers planning to apply “may now consider that [the report] is a blueprint of where we want to go,” Collins added.

*Correction, 10 June, 12:17 p.m.: This article has been corrected to reflect that the $4.5 billion proposed price tag for the BRAIN initiative refers only to NIH’s portion of the project, not all funding. – Science Mag.

Advisory Committee to the Director, Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative Working Group

The National Institutes of Health (NIH) convened a BRAIN Working Group of the Advisory Committee to the Director, NIH, to develop a rigorous plan for achieving this scientific vision. This report presents the findings and recommendations of the working group, including the scientific background and rationale for The BRAIN Initiative® as a whole and for each of seven major goals articulated in the report. In addition, we include specific deliverables, timelines, and cost estimates for these goals as requested by the NIH Director. Read more in the BRAIN 2025 Report.

As the NIH BRAIN Initiative rapidly approached its halfway point, the ACD BRAIN Initiative Working Group 2.0 was asked to assess BRAIN’s progress and advances within the context of the original BRAIN 2025 report, identify key opportunities to apply new and emerging tools to revolutionize our understanding of brain circuits, and designate valuable areas of continued technology development. Alongside, the BRAIN Neuroethics Subgroup was tasked with considering the ethical implications of ongoing research and forecasting what the future of BRAIN advancements might entail, crafting a neuroethics “roadmap” for the Initiative. Read more in the BRAIN 2.0 companion reports (BRAIN Initiative 2.0 report and Neuroethics report).


Brain-to-brain communication demo receives DARPA funding

JADE BOYD – JANUARY 25, 2021

Wireless linkage of brains may soon go to human testing

Wireless communication directly between brains is one step closer to reality thanks to $8 million in Department of Defense follow-up funding for Rice University neuroengineers.

The Defense Advanced Research Projects Agency (DARPA), which funded the team’s proof-of-principle research toward a wireless brain link in 2018, has asked for a preclinical demonstration of the technology that could set the stage for human tests as early as 2022.

“We started this in a very exploratory phase,” said Rice’s Jacob Robinson, lead investigator on the MOANA Project, which ultimately hopes to create a dual-function, wireless headset capable of both “reading” and “writing” brain activity to help restore lost sensory function, all without the need for surgery.

MOANA, which is short for “magnetic, optical and acoustic neural access,” will use light to decode neural activity in one brain and magnetic fields to encode that activity in another brain, all in less than one-twentieth of a second.

“We spent the last year trying to see if the physics works, if we could actually transmit enough information through a skull to detect and stimulate activity in brain cells grown in a dish,” said Robinson, an associate professor of electrical and computer engineering and core faculty member of the Rice Neuroengineering Initiative.

Jacob Robinson (Photo by Tommy LaVergne/Rice University)

“What we’ve shown is that there is promise,” he said. “With the little bit of light that we are able to collect through the skull, we were able to reconstruct the activity of cells that were grown in the lab. Similarly, we showed we could stimulate lab-grown cells in a very precise way with magnetic fields and magnetic nanoparticles.”

Robinson, who’s orchestrating the efforts of 16 research groups from four states, said the second round of DARPA funding will allow the team to “develop this further into a system and to demonstrate that this system can work in a real brain, beginning with rodents.”

If the demonstrations are successful, he said the team could begin working with human patients within two years.

“Most immediately, we’re thinking about ways we can help patients who are blind,” Robinson said. “In individuals who have lost the ability to see, scientists have shown that stimulating parts of the brain associated with vision can give those patients a sense of vision, even though their eyes no longer work.”

The MOANA team includes 15 co-investigators from Rice, Baylor College of Medicine, the Jan and Dan Duncan Neurological Research Institute at Texas Children’s Hospital, Duke University, Columbia University, the Massachusetts Institute of Technology and Yale’s John B. Pierce Laboratory.

The project is funded through DARPA’s Next-Generation Nonsurgical Neurotechnology (N3) program. – RICE University

The BRAIN Initiative was never concluded. We’re living it now.

Silview.media

SPOOKY FIBERS IN MASKS AND TEST SWABS? WAIT ’TIL YOU READ THE SCIENCE!

Our work and existence, as media and as people, are funded solely by our most generous readers, and we want to keep it that way.
We barely made it before, but this summer something strange is going on: our audience stats show bizarre patterns, we are running far below our estimates, and our last savings are gone. We’re not your responsibility, but if you find enough benefit in this work…
Help SILVIEW.media survive and grow: please donate here, anything helps. Thank you!

! Articles may always be subject to later editing as a way of perfecting them.

We gave up our profit share from masks; if you want to help us, please use the donation button!
We think frequent mask use, even short-term use, can be bad for you, but if you have no way around them, at least send a message of consciousness.
Get it here!