We need to speed up our little awakening because we’re still light-years behind the reality. This dwarfs Afghanistan and Covid is but a chapter in its playbook. This connects all the trigger-words: 5G, Covid, Vaccines, Graphene, The Great Reset, Blockchain, The Fourth Industrial Revolution and beyond.
A wide variety of internet-connected “smart” devices now promise consumers and businesses improved performance, convenience, efficiency, and fun. Within this broader Internet of Things (IoT) lies a growing industry of devices that monitor the human body, collect health and other personal information, and transmit that data over the internet. We refer to these emerging technologies and the data they collect as the Internet of Bodies (IoB) (see, for example, Neal, 2014; Lee, 2018), a term first applied to law and policy in 2016 by law and engineering professor Andrea M. Matwyshyn (Atlantic Council, 2017; Matwyshyn, 2016; Matwyshyn, 2018; Matwyshyn, 2019).

IoB devices come in many forms. Some are already in wide use, such as wristwatch fitness monitors or pacemakers that transmit data about a patient’s heart directly to a cardiologist. Other products that are under development or newly on the market may be less familiar, such as ingestible products that collect and send information on a person’s gut, microchip implants, brain stimulation devices, and internet-connected toilets. These devices have intimate access to the body and collect vast quantities of personal biometric data. IoB device makers promise to deliver substantial health and other benefits but also pose serious risks, including risks of hacking, privacy infringements, or malfunction. Some devices, such as a reliable artificial pancreas for diabetics, could revolutionize the treatment of disease, while others could merely inflate health-care costs with little positive effect on outcomes. Access to huge torrents of live-streaming biometric data might trigger breakthroughs in medical knowledge or behavioral understanding. It might widen health outcome disparities if only people with financial means have access to these benefits. Or it might enable a surveillance state of unprecedented intrusion and consequence.

There is no universally accepted definition of the IoB. For the purposes of this report, we refer to the IoB, or the IoB ecosystem, as IoB devices (defined next, with further explanation in the passages that follow) together with the software they contain and the data they collect.
An IoB device is defined as a device that
• contains software or computing capabilities, and
• can communicate with an internet-connected device or network,
and satisfies one or both of the following:
• collects person-generated health or biometric data
• can alter the human body’s function.

The software or computing capabilities in an IoB device may be as simple as a few lines of code used to configure a radio frequency identification (RFID) microchip implant, or as complex as a computer that processes artificial intelligence (AI) and machine learning algorithms. A connection to the internet through cellular or Wi-Fi networks is required but need not be a direct connection; for example, a device may be connected via Bluetooth to a smartphone or USB device that communicates with an internet-connected computer. Person-generated health data (PGHD) refers to health, clinical, or wellness data collected by technologies to be recorded or analyzed by the user or another person. Biometric or behavioral data refers to measurements of unique physical or behavioral properties about a person. Finally, an alteration to the body’s function refers to an augmentation or modification of how the user’s body performs, such as the cognitive enhancement and memory improvement provided by a brain-computer interface, or the ability to record whatever the user sees through an intraocular lens with a camera.

IoB devices generally, but not always, require a physical connection to the body (e.g., they are worn, ingested, implanted, or otherwise attached to or embedded in the body, temporarily or permanently). Many IoB devices are medical devices regulated by the U.S. Food and Drug Administration (FDA). Figure 1 depicts examples of technologies in the IoB ecosystem that are either already available on the U.S. market or are under development. Devices that are not connected to the internet, such as ordinary heart monitors or medical ID bracelets, are not included in the definition of IoB. Nor are implanted magnets (a niche consumer product used by those in the so-called bodyhacker community described in the next section) that are not connected to smartphone applications (apps): although they change the body’s functionality by allowing the user to sense electromagnetic vibrations, they do not contain software. Trends in IoB technologies and additional examples are discussed further in the next section.

Some IoB devices may fall in and out of our definition at different times. For example, a Wi-Fi-connected smartphone on its own would not be part of the IoB; however, once a health app is installed that requires connection to the body to track user information, such as heart rate or number of steps taken, the phone would be considered IoB. Our definition is meant to capture rapidly evolving technologies that have the potential to bring about the various risks and benefits discussed in this report. We focused on analyzing existing and emerging IoB technologies that appear to have the potential to improve health and medical outcomes, efficiency, and human function or performance, but that could also endanger users’ legal, ethical, and privacy rights or present personal or national security risks. For this research, we conducted an extensive literature review and interviewed security experts, technology developers, and IoB advocates to understand anticipated risks and benefits.
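Read as a checklist, the two-part definition above lends itself to a compact illustration. Below is a minimal sketch (not from the RAND report); the device attributes, helper function, and example devices are hypothetical and purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class Device:
    """Hypothetical description of a candidate device."""
    has_software: bool          # contains software or computing capabilities
    can_reach_internet: bool    # directly, or indirectly via Bluetooth/USB to a connected host
    collects_pghd: bool         # collects person-generated health or biometric data
    alters_body_function: bool  # e.g., cognitive enhancement via a brain-computer interface

def is_iob_device(d: Device) -> bool:
    """Apply the definition: both required capabilities, plus at least one body-related criterion."""
    required = d.has_software and d.can_reach_internet
    body_criteria = d.collects_pghd or d.alters_body_function
    return required and body_criteria

# A fitness wristwatch syncing heart-rate data through a phone qualifies;
# an unconnected medical ID bracelet does not.
wristwatch = Device(has_software=True, can_reach_internet=True,
                    collects_pghd=True, alters_body_function=False)
id_bracelet = Device(has_software=False, can_reach_internet=False,
                     collects_pghd=False, alters_body_function=False)
assert is_iob_device(wristwatch) and not is_iob_device(id_bracelet)
```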
We had valuable discussions with experts at BDYHAX 2019, an annual convention for bodyhackers, in February 2019, and DEFCON 27, one of the world’s largest hacker conferences, in August 2019. In this report, we discuss trends in the technology landscape and outline the benefits and risks to the user and other stakeholders. We present the current state of governance that applies to IoB devices and the data they collect and conclude by offering recommendations for improved regulation to best balance those risks and rewards.
…
Operation Warp Speed logo
Transhumanism, Bodyhacking, Biohacking, and More
The IoB is related to several movements outside of formal health care focused on integrating human bodies with technology. Next, we summarize some of these concepts, though there is much overlap and interchangeability among them.

Transhumanism is a worldview and political movement advocating for the transcendence of humanity beyond current human capabilities. Transhumanists want to use technology, such as artificial organs and other techniques, to halt aging and achieve “radical life extension” (Vita-More, 2018). Transhumanists may also seek to resist disease, enhance their intelligence, or thwart fatigue through diet, exercise, supplements, relaxation techniques, or nootropics (substances that may improve cognitive function).

Bodyhackers, biohackers, and cyborgs, who enjoy experimenting with body enhancement, often refer to themselves as grinders. They may or may not identify as transhumanists. These terms are often interchanged in common usage, but some do distinguish between them (Trammell, 2015). Bodyhacking generally refers to modifying the body to enhance one’s physical or cognitive abilities. Some bodyhacking is purely aesthetic: hackers have implanted horns in their heads and LED lights under their skin. Other hacks, such as implanting RFID microchips in one’s hand, are meant to enhance function, allowing users to unlock doors, ride public transportation, store emergency contact information, or make purchases with the sweep of an arm (Baenen, 2017; Savage, 2018). One bodyhacker removed the RFID microchip from her car’s key fob and had it implanted in her arm (Linder, 2019). A few bodyhackers have implanted a device that is a combined wireless router and hard drive that can be used as a node in a wireless mesh network (Oberhaus, 2019). Some bodyhacking is medical in nature, including 3D-printed prosthetics and do-it-yourself artificial pancreases. Still others use the term for any method of improving health, including bodybuilding, diet, or exercise.

Biohacking generally denotes techniques that modify the biological systems of humans or other living organisms. This ranges from bodybuilding and nootropics to developing cures for diseases via self-experimentation to human genetic manipulation through CRISPR-Cas9 techniques (Samuel, 2019; Griffin, 2018).

Cyborgs, or cybernetic organisms, are people who have used machines to enhance intelligence or the senses. Neil Harbisson, a colorblind man who can “hear” color through an antenna implanted in his head that plays a tune for different colors or wavelengths of light, is acknowledged as the first person to be legally recognized by a government as a cyborg, by being allowed to have his passport picture include his implant (Donahue, 2017). Because IoB is a wide-ranging field that intersects with do-it-yourself body modification, consumer products, and medical care, understanding its benefits and risks is critical.
The Internet of Bodies is here. This is how it could change our lives
04 Jun 2020, by Xiao Liu, Fellow at the Centre for the Fourth Industrial Revolution, World Economic Forum
We’re entering the era of the “Internet of Bodies”: collecting our physical data via a range of devices that can be implanted, swallowed or worn.
The result is a huge amount of health-related data that could improve human wellbeing around the world, and prove crucial in fighting the COVID-19 pandemic.
But a number of risks and challenges must be addressed to realize the potential of this technology, from privacy issues to practical hurdles.
In the special wards of Shanghai’s Public Health Clinical Center, nurses use smart thermometers to check the temperatures of COVID-19 patients. Each person’s temperature is recorded with a sensor, reducing the risk of infection through contact, and the data is sent to an observation dashboard. An abnormal result triggers an alert to medical staff, who can then intervene promptly. The gathered data also allows medics to analyse trends over time.
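The workflow described above reduces to a simple record-and-alert loop. The following is a minimal sketch under stated assumptions: it is not VivaLNK's or the clinic's actual software, and the threshold, identifiers, and notification hook are invented for illustration.

```python
from datetime import datetime, timezone

FEVER_THRESHOLD_C = 37.5  # illustrative cutoff only, not a clinical recommendation

dashboard = []  # stands in for the observation dashboard (a database or queue in practice)

def notify_staff(reading: dict) -> None:
    # Placeholder for a pager, SMS, or nurse-station alert in a real deployment.
    print(f"ALERT: {reading['patient']} at {reading['temp_c']}°C ({reading['time']})")

def record_temperature(patient_id: str, temp_c: float) -> dict:
    """Store a sensor reading and flag abnormal values for prompt staff intervention."""
    reading = {
        "patient": patient_id,
        "temp_c": temp_c,
        "time": datetime.now(timezone.utc).isoformat(),
        "alert": temp_c >= FEVER_THRESHOLD_C,
    }
    dashboard.append(reading)   # retained so medics can analyse trends over time
    if reading["alert"]:
        notify_staff(reading)
    return reading

record_temperature("ward-3-bed-12", 36.8)  # normal, no alert
record_temperature("ward-3-bed-14", 38.4)  # abnormal, triggers an alert
```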
The smart thermometers are designed by VivaLNK, a Silicon Valley-based startup, and are a powerful example of the many digital products and services that are revolutionizing healthcare. After the Internet of Things, which transformed the way we live, travel and work by connecting everyday objects to the Internet, it’s now time for the Internet of Bodies. This means collecting our physical data via devices that can be implanted, swallowed or simply worn, generating huge amounts of health-related information.
Some of these solutions, such as fitness trackers, are an extension of the Internet of Things. But because the Internet of Bodies centres on the human body and health, it also raises its own specific set of opportunities and challenges, from privacy issues to legal and ethical questions.
Image: McKinsey & Company
Connecting our bodies
As futuristic as the Internet of Bodies may seem, many people are already connected to it through wearable devices. The smartwatch segment alone grew into a $13 billion market by 2018, and is projected to increase another 32% to $18 billion by 2021. Smart toothbrushes and even hairbrushes can also let people track patterns in their personal care and behaviour.
For health professionals, the Internet of Bodies opens the gate to a new era of effective monitoring and treatment.
In 2017, the U.S. Food and Drug Administration approved the first use of digital pills in the United States. Digital pills contain tiny, ingestible sensors, as well as medicine. Once swallowed, the sensor is activated in the patient’s stomach and transmits data to their smartphone or other devices.
In 2018, Kaiser Permanente, a healthcare provider in California, started a virtual rehab program for patients recovering from heart attacks. The patients shared their data with their care providers through a smartwatch, allowing for better monitoring and a closer, more continuous relationship between patient and doctor. Thanks to this innovation, the completion rate of the rehab program rose from less than 50% to 87%, accompanied by a fall in the readmission rate and programme cost.
The deluge of data collected through such technologies is advancing our understanding of how human behaviour, lifestyle and environmental conditions affect our health. It has also expanded the notion of healthcare beyond the hospital or surgery and into everyday life. This could prove crucial in fighting the coronavirus pandemic. Keeping track of symptoms could help us stop the spread of infection, and quickly detect new cases. Researchers are investigating whether data gathered from smartwatches and similar devices can be used as viral infection alerts by tracking the user’s heart rate and breathing.
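To make the last idea concrete, here is a toy sketch of the kind of signal such research looks for: days on which resting heart rate deviates sharply from a person's own recent baseline. The window size and z-score cutoff are arbitrary assumptions, not a validated infection-detection method.

```python
from statistics import mean, stdev

def flag_anomalous_days(resting_hr: list, window: int = 14, z_cutoff: float = 2.5) -> list:
    """Return indices of days whose resting heart rate is unusually high
    relative to the preceding `window` days of the same person's data."""
    flagged = []
    for i in range(window, len(resting_hr)):
        baseline = resting_hr[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (resting_hr[i] - mu) / sigma > z_cutoff:
            flagged.append(i)
    return flagged

# Two weeks of ordinary readings followed by one elevated day (beats per minute).
series = [58, 59, 57, 60, 58, 59, 61, 58, 57, 59, 60, 58, 59, 58, 71]
print(flag_anomalous_days(series))  # -> [14], the elevated final day
```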
At the same time, this complex and evolving technology raises new regulatory challenges.
What counts as health information?
In most countries, strict regulations exist around personal health information such as medical records and blood or tissue samples. However, these conventional regulations often fail to cover the new kind of health data generated through the Internet of Bodies, and the entities gathering and processing this data.
In the United States, the 1996 Health Insurance Portability and Accountability Act (HIPAA), which is the major law for health data regulation, applies only to medical providers, health insurers, and their business associates. Its definition of “personal health information” covers only the data held by these entities. This definition is turning out to be inadequate for the era of the Internet of Bodies. Tech companies are now also offering health-related products and services, and gathering data. Margaret Riley, a professor of health law at the University of Virginia, pointed out to me in an interview that HIPAA does not cover the masses of data from consumer wearables, for example.
Another problem is that the current regulations only look at whether the data is sensitive in itself, not whether it can be used to generate sensitive information. For example, the result of a blood test in a hospital will generally be classified as sensitive data, because it reveals private information about your personal health. But today, all sorts of seemingly non-sensitive data can also be used to draw inferences about your health, through data analytics. Glenn Cohen, a professor at Harvard Law School, told me in an interview that even data that is not about health at all, such as grocery shopping lists, can be used for such inferences. As a result, conventional regulations may fail to cover data that is sensitive and private, simply because it did not look sensitive before it was processed.
Data risks
Identifying and protecting sensitive data matters, because it can directly affect how we are treated by institutions and other people. With big data analytics, countless day-to-day actions and decisions can ultimately feed into our health profile, which may be created and maintained not just by traditional healthcare providers, but also by tech companies or other entities. Without appropriate laws and regulations, it could also be sold. At the same time, data from the Internet of Bodies can be used to make predictions and inferences that could affect a person’s or group’s access to resources such as healthcare, insurance and employment.
James Dempsey, director of the Berkeley Center for Law and Technology, told me in an interview that this could lead to unfair treatment. He warned of potential discrimination and bias when such data is used for decisions in insurance and employment. The affected people may not even be aware of this.
One solution would be to update the regulations. Sandra Wachter and Brent Mittelstadt, two scholars at the Oxford Internet Institute, suggest that data protection law should focus more on how and why data is processed, and not just on its raw state. They argue for a so-called “right to reasonable inferences”, meaning the right to have your data used only for reasonable, socially acceptable inferences. This would involve setting standards on whether and when inferring certain information from a person’s data, including the state of their present or future health, is socially acceptable or overly invasive.
Practical problems
Apart from the concerns over privacy and sensitivity, there are also a number of practical problems in dealing with the sheer volume of data generated by the Internet of Bodies. The lack of standards around security and data processing makes it difficult to combine data from diverse sources and use it to advance research. Different countries and institutions are trying to jointly overcome this problem. The Institute of Electrical and Electronics Engineers (IEEE) and its Standards Association have been working since 2016 with the US Food and Drug Administration (FDA), the National Institutes of Health, universities and businesses, among other stakeholders, to address the security and interoperability issues of connected health.
As the Internet of Bodies spreads into every aspect of our existence, we are facing a range of new challenges. But we also have an unprecedented chance to improve our health and well-being, and save countless lives. During the COVID-19 crisis, using this opportunity and finding solutions to the challenges is a more urgent task than ever. This relies on government agencies and legislative bodies working with the private sector and civil society to create a robust governance framework, and to include inferences in the realm of data protection. Devising technological and regulatory standards for interoperability and security would also be crucial to unleashing the power of the newly available data. The key is to collaborate across borders and sectors to fully realize the enormous benefits of this rapidly advancing technology.
Governance of IoB devices is managed through a patchwork of state and federal agencies, nonprofit organizations, and consumer advocacy groups
The primary entities responsible for governance of IoB devices are the FDA and the U.S. Department of Commerce.
Although the FDA is making strides in cybersecurity of medical devices, many IoB devices, especially those available for consumer use, do not fall under FDA jurisdiction.
Federal and state officials have begun to address cybersecurity risks associated with IoB that are beyond FDA oversight, but there are few laws that mandate cybersecurity best practices.
As with IoB devices, there is no single entity that provides oversight to IoB data
Protection of medical information is regulated at the federal level, in part, by HIPAA.
The Federal Trade Commission (FTC) helps ensure data security and consumer privacy through legal actions brought by the Bureau of Consumer Protection.
Data brokers are largely unregulated, but some legal experts are calling for policies to protect consumers.
As the United States has no federal data privacy law, states have introduced a patchwork of laws and regulations that apply to residents’ personal data, some of which includes IoB-related information.
The lack of consistency in IoB laws among states and between the state and federal level potentially enables regulatory gaps and enforcement challenges.
Recommendations
The U.S. Commerce Department can put foreign IoB companies on its “Entity List,” preventing them from doing business with Americans, if those foreign companies are implicated in human rights violations.
As 5G, Wi-Fi 6, and satellite internet standards are rolled out, the federal government should be prepared for issues by funding studies and working with experts to develop security regulations.
It will be important to consider how to incentivize quicker phase-out of the legacy medical devices with poor cybersecurity that are already in wide use.
IoB developers must be more attentive to cybersecurity by integrating cybersecurity and privacy considerations from the beginning of product development.
Device makers should test software for vulnerabilities often and devise methods for users to patch software.
Congress should consider establishing federal data transparency and protection standards for data that are collected from the IoB.
The FTC could play a larger role to ensure that marketing claims about improved well-being or specific health treatment are backed by appropriate evidence.
JAMMU and Kashmir is almost always in the news for one reason or another. Apart from the obvious political headlines, J&K was also in the news because of COVID-19. As the world struggled with the COVID-19 pandemic, J&K faced a peculiar situation due to its poor health infrastructure. Nonetheless, all sections of society did a commendable job in keeping COVID under control and preventing the loss of life as much as possible. The Doctors Association Kashmir, along with the administration, did as much as possible through their efforts. For that we are all thankful to them. However, it is about time that we upgrade our healthcare system and introduce to it the new technologies of the current world.
We’ve all heard of the Internet of Things, a network of products ranging from refrigerators to cars to industrial control systems that are connected to the internet. The Internet of Bodies (IoB), an outgrowth of the Internet of Things (IoT), helps the healthcare system and individuals live with greater ease by managing the human body through technology. The Internet of Bodies connects the human body to a network of internet-run devices.
IoB devices can be used independently or by our healthcare heroes (doctors) to monitor, report on and enhance the health of the human body. IoB technologies are broadly classified into three categories, or in some cases three generations: Body Internal, Body External and Body Embedded. In the Body Internal model, the individual or patient interacts with the technology environment (the internet or the healthcare system) through a device installed inside the body. The Body External model covers devices worn outside the body, such as Apple Watches and smart bands from various OEMs that track blood pressure, heart rate and other metrics, which can later be used for health tracking and monitoring. The last category is Body Embedded, in which devices are embedded under the skin by healthcare professionals in a range of health situations.
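As a rough illustration only (the three-way classification is the author's; the example device assignments here are hypothetical, not an authoritative taxonomy), the categories can be written as a simple enumeration:

```python
from enum import Enum

class IoBCategory(Enum):
    BODY_EXTERNAL = "worn outside the body, e.g. smartwatches and fitness bands"
    BODY_INTERNAL = "placed inside the body, e.g. swallowed digital pills"
    BODY_EMBEDDED = "embedded under the skin by professionals, e.g. implanted chips"

# Illustrative mapping of example devices to categories.
examples = {
    "smartwatch": IoBCategory.BODY_EXTERNAL,
    "ingestible sensor pill": IoBCategory.BODY_INTERNAL,
    "RFID implant": IoBCategory.BODY_EMBEDDED,
}
for device, category in examples.items():
    print(f"{device}: {category.name} ({category.value})")
```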
The Internet of Bodies is a small part, or even the offspring, of the Internet of Things. Much like the IoT, it faces the challenge of data and information breaches; we have already witnessed many large distributed denial-of-service (DDoS) and other cyber-attacks on IoT devices to exploit data and gather information. The effects are even more severe in the case of the Internet of Bodies, as the human body itself is involved in this schema.
The risk of these threats has come to dominate the discussion about the IoB, and it has become a great concern for medical technology companies. Most existing IoB companies rely only on end-user license agreements and privacy policies to retain rights in software and to create rights to monitor, aggregate and share users’ body data. They need to strengthen their security models and implement strong security measures to avoid any misfortune. To that end, the Government of India is already examining the Personal Data Protection Bill, 2019.
The internet has not managed to change our lifestyles in the way the Internet of Things will!
Views expressed in the article are the author’s own and do not necessarily represent the editorial stance of Kashmir Observer
The author is presently Manager IT & Ops In HK Group
Social media, sensor feeds, and scientific studies generate large amounts of valuable data. However, understanding the relationships among this data can be challenging. Graph analytics has emerged as an approach by which analysts can efficiently examine the structure of the large networks produced from these data sources and draw conclusions from the observed patterns. By understanding the complex relationships both within and between data sources, a more complete picture of the analysis problem can be understood. With lessons learned from innovations in the expanding realm of deep neural networks, the Hierarchical Identify Verify Exploit (HIVE) program seeks to advance the arena of graph analytics.
The HIVE program is looking to build a graph analytics processor that can process streaming graphs 1000X faster and at much lower power than current processing technology. If successful, the program will enable graph analytics techniques powerful enough to solve tough challenges in cyber security, infrastructure monitoring and other areas of national interest. Graph analytic processing that currently requires racks of servers could become practical in tactical situations to support front-line decision making. What’s more, these advanced graph analytics servers could have the power to analyze the billion- and trillion-edge graphs that will be generated by the Internet of Things, ever-expanding social networks, and future sensor networks.
In parallel with the hardware development of a HIVE processor, DARPA is working with MIT Lincoln Laboratory and Amazon Web Services (AWS) to host the HIVE Graph Challenge with the goal of developing a trillion-edge dataset. This freely available dataset will spur innovative software and hardware solutions in the broader graph analysis community that will contribute to the HIVE program.
The overall objective is to accelerate innovation in graph analytics to open new pathways for meeting the challenge of understanding an ever-increasing torrent of data. The HIVE program features two primary challenges:
The first is a static graph problem focused on subgraph isomorphism. This task is to advance the ability to search a large graph in order to identify a particular subsection of that graph (a toy-scale illustration follows after this list).
The second is a dynamic graph problem focused on trying to find optimal clusters of data within the graph.
Both challenges will include a small graph problem in the billions of nodes and a large graph problem in the trillions of nodes.
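At toy scale, the first challenge can be illustrated with NetworkX's VF2 matcher, which enumerates where a small pattern graph appears inside a larger one. This is only a single-machine sketch of the problem statement, nothing like the trillion-edge, hardware-accelerated workload HIVE targets.

```python
import networkx as nx
from networkx.algorithms import isomorphism

# A small "background" graph and a triangle pattern to search for.
G = nx.Graph([(1, 2), (2, 3), (3, 1), (3, 4), (4, 5), (5, 3)])
pattern = nx.complete_graph(3)  # a triangle

matcher = isomorphism.GraphMatcher(G, pattern)
# Deduplicate the symmetric matches down to distinct node sets.
matches = {frozenset(mapping) for mapping in matcher.subgraph_isomorphisms_iter()}
print(matches)  # the node sets of G that form triangles: {1, 2, 3} and {3, 4, 5}
```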
The Transhuman Code authors discuss digital IDs and a centralized AI-controlled society (2018). More info
To be continued? Our work and existence, as media and people, is funded solely by our most generous supporters. But we’re not really covering our costs so far, and we’re in dire need of upgrading our equipment, especially for video production. Help SILVIEW.media survive and grow, please donate here; anything helps. Thank you!
! Articles may always be subject to later editing as a way of perfecting them
Graphene is the new asbestos. Plus injectable and mandatory. The rest of the graphene oxide story is here, if you need more background; this post is a result of that investigation.
NOTE: A clarification solicited by some readers: yes, we knew of GRAPHENE COATING on masks in May, as seen below, which is horrible enough, even more so since not many followed Canada’s example in banning it. What this article brings that is new is confirmation of GRAPHENE OXIDE, which is not very different in properties and health impact, but seems to be specific to these mRNA jabs, and so we complete the new revelations on graphene oxide and vaccines from La Quinta Columna.
In December 2019, a novel coronavirus (SARS-CoV-2) was first detected in Wuhan, in China’s Hubei province. On 11 March 2020, the World Health Organization (WHO) acknowledged and characterized the condition as a pandemic owing to the rapid spread of the virus across the globe infecting millions of individuals. Scientists are fighting tirelessly to find out ways to curb the spread of the virus and eradicate it.
SARS-CoV-2 is regarded as highly contagious and spreads rapidly through person-to-person contact. When an infected person sneezes or coughs, their respiratory droplets can easily infect a healthy individual. Besides enforcing social distancing, common citizens are encouraged to wear face masks to prevent droplets from getting through the air and infecting others.
Despite the efficiency of the N95 respirator, a respiratory protective device that filters out 95% of particles (≥0.3 μm), such respirators and surgical face masks are single-use, expensive, and often ill-fitting, which significantly reduces their effectiveness. Nanoscience researchers have envisioned a new respirator face mask that would be highly efficient, recyclable, customizable, reusable, and have antimicrobial and antiviral properties.
Nanotechnology in the Production of Surgical Masks
Nanoparticles are extensively used for their novel properties in various fields of science and technology.
In the current pandemic situation, scientists have adopted this technology to produce the most efficient masks. Researchers have used a novel electrospinning technology in the production of nanofiber membranes. These nanofiber membranes are designed with adjustable properties, such as fiber diameter, porosity ratio, and many other microstructural factors, which can be tuned to produce high-quality face masks. Researchers in Egypt have developed face masks using nanotechnology with the help of the following components:
Polylactic acid
This transparent polymeric material is derived from starch and carbohydrate. It has high elasticity and is biodegradable. Researchers found that electrospun polylactic acid membranes possess high prospects for the production of filters efficient in the isolation of environmental pollutants, such as atmospheric aerosol and submicron particulates dispersed in the air.
Despite its various biomedical applications (implant prostheses, catheters, tissue scaffolds, etc.), these polylactic membranes are brittle. Therefore, applying frequent pressure during their usage could produce cracks that would make them permeable to viral particles. However, this mechanical drawback can be fixed using other supportive nanoparticles that could impart mechanical strength, antimicrobial and antiviral properties, which are important in making face masks effective in the current pandemic situation.
Copper oxide nanoparticles
These nanoparticles have many biomedical applications, for example, infection control, as they can inhibit the growth of microorganisms (fungi, bacteria) and viruses. It has also been reported that SARS-CoV-2 has lower stability on the metallic copper surface than other materials, such as plastic or stainless steel. Therefore, the integration of copper oxide nanoparticles in a nanofibrous polymeric filtration system would significantly prevent microbial adherence onto the membrane.
Graphene oxide nanoparticles
These nanoparticles possess exceptional properties, such as high toughness, superior electrical conductivity, biocompatibility, and antiviral and antibacterial activity. Such nanoparticles could be utilized in the production of masks.
Cellulose acetate
This is a semi-synthetic polymer derived from cellulose. It is used in ultrafiltration because of its biocompatibility, high selectivity, and low cost. It is also used in protective clothing, tissue engineering, and nanocomposite applications.
With the help of the aforesaid components, researchers in Egypt have designed a novel respirator filter mask against SARS-CoV-2. This mask is based on a disposable filter piece composed of non-woven nanofibers comprising multilayers of (a) copper oxide nanoparticles, graphene oxide nanoparticles, and polylactic acid, or (b) copper oxide nanoparticles, graphene oxide nanoparticles, and cellulose acetate, produced with electrospinning technology and high-power ultrasonication. These face masks are reusable, i.e., washable in water, and can be sterilized using an ultraviolet lamp (λ = 250 nm).
SOURCE. Working to get confirmation from these guys too: SOURCE
Graphene-coated face masks: COVID-19 miracle or another health risk?
As a COVID-19 and medical device researcher, I understand the importance of face masks to prevent the spread of the coronavirus. So I am intrigued that some mask manufacturers have begun adding graphene coatings to their face masks to inactivate the virus. Many viruses, fungi and bacteria are incapacitated by graphene in laboratory studies, including feline coronavirus.
Because SARS CoV-2, the coronavirus that causes COVID-19, can survive on the outer surface of a face mask for days, people who touch the mask and then rub their eyes, nose, or mouth may risk getting COVID-19. So these manufacturers seem to be reasoning that graphene coatings on their reusable and disposable face masks will add some anti-virus protection. But in March, the Quebec provincial government removed these masks from schools and daycare centers after Health Canada, Canada’s national public health agency, warned that inhaling the graphene could lead to asbestos-like lung damage.
Is this move warranted by the facts, or an over-reaction? To answer that question, it can help to know more about what graphene is, how it kills microbes, including the SARS-COV-2 virus, and what scientists know so far about the potential health impacts of breathing in graphene.
How does graphene damage viruses, bacteria and human cells?
Graphene is a thin but strong and conductive two-dimensional sheet of carbon atoms. There are several ways it can help prevent the spread of microbes:
Microscopic graphene particles have sharp edges that mechanically damage viruses and cells as they pass by them.
Graphene is negatively charged, with highly mobile electrons that electrostatically trap and inactivate some viruses and cells.
Dr Joe Schwarcz explains why Canada banned graphene masks. Doesn’t say why other countries didn’t. When two governments have opposing views on a poison, one is criminally wrong and someone has to pay.
Why graphene may be linked to lung injury
Researchers have been studying the potential negative impacts of inhaling microscopic graphene on mammals. In one 2016 experiment, mice with graphene placed in their lungs experienced localized lung tissue damage, inflammation, formation of granulomas (where the body tries to wall off the graphene), and persistent lung injury, similar to what occurs when humans inhale asbestos. A different study from 2013 found that when human cells were bound to graphene, the cells were damaged.
In order to mimic human lungs, scientists have developed biological models designed to simulate the impact of high concentration aerosolized graphene—graphene in the form of a fine spray or suspension in air—on industrial workers. One such study published in March 2020 found that a lifetime of industrial exposure to graphene induced inflammation and weakened the simulated lungs’ protective barrier.
It’s important to note that these models are not perfect options for studying the dramatically lower levels of graphene inhaled from a face mask, but researchers have used them in the past to learn more about these sorts of exposures. A study from 2016 found that a small portion of aerosolized graphene nanoparticles could move down simulated mouth and nose passages and penetrate into the lungs. A 2018 study found that brief exposure to a lower amount of aerosolized graphene did not notably damage lung cells in a model.
From my perspective as a researcher, this trio of findings suggests that a little bit of graphene in the lungs is likely OK, but a lot is dangerous.
Although it might seem obvious to compare inhaling graphene to the well-known harms of breathing in asbestos, the two substances behave differently in one key way. The body’s natural system for disposing of foreign particles cannot remove asbestos, which is why long-term exposure to asbestos can lead to the cancer mesothelioma. But in studies using mouse models to measure the impact of high dose lung exposure to graphene, the body’s natural disposal system does remove the graphene, although it occurs very slowly over 30 to 90 days.
The findings of these studies shed light on the possible health impacts of breathing in microscopic graphene in either small or large doses. However, these models don’t reflect the full complexity of human experiences. So the strength of the evidence about either the benefit of wearing a graphene mask, or the harm of inhaling microscopic graphene as a result of wearing it, is very weak.
No obvious benefit but theoretical risk
Graphene is an intriguing scientific advance that may speed up the demise of COVID-19 virus particles on a face mask. In exchange for this unknown level of added protection, there is a theoretical risk that breathing through a graphene-coated mask will liberate graphene particles that make it through the other filter layers on the mask and penetrate into the lung. If inhaled, the body may not remove these particles rapidly enough to prevent lung damage.
The health department in Quebec is erring on the side of caution. Children are at very low risk of COVID-19 mortality or hospitalization, although they may infect others, so the theoretical risk from graphene exposure is too great. However, adults at high immediate risk of harm from contracting COVID-19 may choose to accept a small theoretical risk of long-term lung damage from graphene in exchange for these potential benefits.
The Development, Concepts and Doctrine Centre (DCDC) has worked in partnership with the German Bundeswehr Office for Defence Planning to understand the future implications of human augmentation (HA), setting the foundation for more detailed Defence research and development.
The project incorporates research from German, Swedish, Finnish and UK Defence specialists to understand how emerging technologies such as genetic engineering, bioinformatics and the possibility of brain-computer interfaces could affect the future of society, security and Defence. The ethical, moral and legal challenges are complex and must be thoroughly considered, but HA could signal the coming of a new era of strategic advantage with possible implications across the force development spectrum.
HA technologies provide a broad range of opportunities for today and the future. There are mature technologies that could be integrated today with manageable policy considerations, such as personalised nutrition, wearables and exoskeletons. Other technologies, such as genetic engineering and brain-computer interfaces, promise greater potential in the future. The ethical, moral and legal implications of HA are hard to foresee, but early and regular engagement with these issues lies at the heart of success.
HA will become increasingly relevant in the future because it is the binding agent between the unique skills of humans and machines. The winners of future wars will not be those with the most advanced technology, but those who can most effectively integrate the unique skills of both human and machine.
The growing significance of human-machine teaming is already widely acknowledged but this has so far been discussed from a technology-centric perspective. This HA project represents the missing part of the puzzle.
Disclaimer
The content of this publication does not represent the official policy or strategy of the UK government or that of the UK’s Ministry of Defence (MOD).
Furthermore, the analysis and findings do not represent the official policy or strategy of the countries contributing to the project.
It does, however, represent the view of the Development, Concepts and Doctrine Centre (DCDC), a department within the UK MOD, and the Bundeswehr Office for Defence Planning (BODP), a department within the German Federal Ministry of Defence. It is based on combining current knowledge and wisdom from subject matter experts with assessments of potential progress in technologies 30 years out, supporting deliberations and deductions about future humans and society. Published 13 May 2021 – UK DEFENCE WEBSITE
That disclaimer is a load of bollocks that means nothing, really, but covers the Ministry from some legal liabilities, just in case. You can totally ignore it. – Silview.media
The US Department of Defense has something similar going on, but it doesn’t target the general population in its presentations. However, if you input “DARPA” in our search utility, you will find that the DoD has been going in the same direction for decades.
At least the US has the decency to pretend these are for military use only. I know they are all meant to be used on the general population, but I don’t know of any other open admission of civilian use before.
Does this guy shock you that much now, or does he fall in line like the perfect Tetris piece that he is, “another brick in the wall”?
Now remember mRNA therapies are “information therapies” and these injections are the perfect tools for achieving the above goals.
Anyone remember the plebs ever being consulted on their future evolution, or are they just SUBJECTED to it, like slaves to selective breeding?!
You read this because some of my readers are generous enough to help us survive, and at least as hungry for truth as we are, basically the best readers I could hope for. Such as Corinne, who we should thank for pulling my sleeve about this one! If you’re on Gab (which you should), follow her, she has tons of great info to share every day!
DEVELOPING STORY, TO BE CONTINUED, SO BE BACK HERE SOON
For years, the Pentagon tried to convince the public that it was working on your dream secretary. Can you believe that? Funny how much those plans looked just like today’s Google and Facebook. But it’s not just the looks, it’s also the money, the timeline and the personal connections. Funnier still how the funding scheme was often similar to the one used for Wuhan, with proxy organizations used as middlemen.
It’s a memory aid! A robotic assistant! An epidemic detector! An all-seeing, ultra-intrusive spying program!
The Pentagon is about to embark on a stunningly ambitious research project designed to gather every conceivable bit of information about a person’s life, index all the information and make it searchable.
What national security experts and civil libertarians want to know is, why would the Defense Department want to do such a thing?
The embryonic LifeLog program would dump everything an individual does into a giant database: every e-mail sent or received, every picture taken, every Web page surfed, every phone call made, every TV show watched, every magazine read.
All of this — and more — would combine with information gleaned from a variety of sources: a GPS transmitter to keep tabs on where that person went, audio-visual sensors to capture what he or she sees or says, and biomedical monitors to keep track of the individual’s health.
This gigantic amalgamation of personal information could then be used to “trace the ‘threads’ of an individual’s life,” to see exactly how a relationship or events developed, according to a briefing from the Defense Advanced Research Projects Agency, LifeLog’s sponsor.
Someone with access to the database could “retrieve a specific thread of past transactions, or recall an experience from a few seconds ago or from many years earlier … by using a search-engine interface.”
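A heavily simplified, hypothetical sketch of that kind of searchable personal event store follows; the class names and naive keyword search are my own stand-ins and do not reflect DARPA's actual design.

```python
from dataclasses import dataclass, field

@dataclass
class LifeEvent:
    timestamp: str   # ISO 8601 string for simplicity
    kind: str        # "email", "call", "web", "gps", "tv", ...
    text: str        # free-text description or content

@dataclass
class LifeLogStore:
    events: list = field(default_factory=list)

    def add(self, event: LifeEvent) -> None:
        self.events.append(event)

    def search(self, query: str) -> list:
        """Naive keyword search standing in for a real search-engine interface."""
        q = query.lower()
        return [e for e in self.events if q in e.text.lower()]

store = LifeLogStore()
store.add(LifeEvent("2003-06-02T09:15", "email", "Re: travel agent booking for the DC trip"))
store.add(LifeEvent("2003-06-02T10:40", "call", "Called mother for ten minutes"))
for hit in store.search("travel agent"):
    print(hit.timestamp, hit.kind, hit.text)
```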
On the surface, the project seems like the latest in a long line of DARPA’s “blue sky” research efforts, most of which never make it out of the lab. But DARPA is currently asking businesses and universities for research proposals to begin moving LifeLog forward. And some people, such as Steven Aftergood, a defense analyst with the Federation of American Scientists, are worried.
With its controversial Total Information Awareness database project, DARPA already is planning to track all of an individual’s “transactional data” — like what we buy and who gets our e-mail.
While the parameters of the project have not yet been determined, Aftergood said he believes LifeLog could go far beyond TIA’s scope, adding physical information (like how we feel) and media data (like what we read) to this transactional data.
“LifeLog has the potential to become something like ‘TIA cubed,'” he said.
In the private sector, a number of LifeLog-like efforts already are underway to digitally archive one’s life — to create a “surrogate memory,” as minicomputer pioneer Gordon Bell calls it.
Bell, now with Microsoft, scans all his letters and memos, records his conversations, saves all the Web pages he’s visited and e-mails he’s received and puts them into an electronic storehouse dubbed MyLifeBits.
DARPA’s LifeLog would take this concept several steps further by tracking where people go and what they see.
That makes the project similar to the work of University of Toronto professor Steve Mann. Since his teen years in the 1970s, Mann, a self-styled “cyborg,” has worn a camera and an array of sensors to record his existence. He claims he’s convinced 20 to 30 of his current and former students to do the same. It’s all part of an experiment into “existential technology” and “the metaphysics of free will.”
DARPA isn’t quite so philosophical about LifeLog. But the agency does see some potential battlefield uses for the program.
“The technology could allow the military to develop computerized assistants for war fighters and commanders that can be more effective because they can easily access the user’s past experiences,” DARPA spokeswoman Jan Walker speculated in an e-mail.
It also could allow the military to develop more efficient computerized training systems, she said: Computers could remember how each student learns and interacts with the training system, then tailor the lessons accordingly.
John Pike, director of defense think tank GlobalSecurity.org, said he finds the explanations “hard to believe.”
“It looks like an outgrowth of Total Information Awareness and other DARPA homeland security surveillance programs,” he added in an e-mail.
Sure, LifeLog could be used to train robotic assistants. But it also could become a way to profile suspected terrorists, said Cory Doctorow, with the Electronic Frontier Foundation. In other words, Osama bin Laden’s agent takes a walk around the block at 10 each morning, buys a bagel and a newspaper at the corner store and then calls his mother. You do the same things — so maybe you’re an al Qaeda member, too!
“The more that an individual’s characteristic behavior patterns — ‘routines, relationships and habits’ — can be represented in digital form, the easier it would become to distinguish among different individuals, or to monitor one,” Aftergood, the Federation of American Scientists analyst, wrote in an e-mail.
In its LifeLog report, DARPA makes some nods to privacy protection, like when it suggests that “properly anonymized access to LifeLog data might support medical research and the early detection of an emerging epidemic.”
But before these grand plans get underway, LifeLog will start small. Right now, DARPA is asking industry and academics to submit proposals for 18-month research efforts, with a possible 24-month extension. (DARPA is not sure yet how much money it will sink into the program.)
The researchers will be the centerpiece of their own study.
Like a game show, winning this DARPA prize eventually will earn the lucky scientists a trip for three to Washington, D.C. Except on this excursion, every participating scientist’s e-mail to the travel agent, every padded bar bill and every mad lunge for a cab will be monitored, categorized and later dissected.
Bending a bit to privacy concerns, the Pentagon changes some of the experiments to be conducted for LifeLog, its effort to record every tidbit of information and encounter in daily life. No video recording of unsuspecting people, for example.
Monday is the deadline for researchers to submit bids to build the Pentagon’s so-called LifeLog project, an experiment to create an all-encompassing über-diary.
But while teams of academics and entrepreneurs are jostling for the 18- to 24-month grants to work on the program, the Defense Department has changed the parameters of the project to respond to a tide of privacy concerns.
Lifelog is the Defense Advanced Research Projects Agency’s effort to gather every conceivable element of a person’s life, dump it all into a database, and spin the information into narrative threads that trace relationships, events and experiences.
It’s an attempt, some say, to make a kind of surrogate, digitized memory.
“My father was a stroke victim, and he lost the ability to record short-term memories,” said Howard Shrobe, an MIT computer scientist who’s leading a team of professors and researchers in a LifeLog bid. “If you ever saw the movie Memento, he had that. So I’m interested in seeing how memory works after seeing a broken one. LifeLog is a chance to do that.”
Researchers who receive LifeLog grants will be required to test the system on themselves. Cameras will record everything they do during a trip to Washington, D.C., and global-positioning satellite locators will track where they go. Biomedical sensors will monitor their health. All the e-mail they send, all the magazines they read, all the credit card payments they make will be indexed and made searchable.
By capturing experiences, Darpa claims that LifeLog could help develop more realistic computerized training programs and robotic assistants for battlefield commanders.
Defense analysts and civil libertarians, on the other hand, worry that the program is another piece in an ongoing Pentagon effort to keep tabs on American citizens. LifeLog could become the ultimate profiling tool, they fear.
A firestorm of criticism ignited after LifeLog first became public in May. Some potential bidders for the LifeLog contract dropped out as a result.
“I’m interested in LifeLog, but I’m going to shy away from it,” said Les Vogel, a computer science researcher in Maui, Hawaii. “Who wants to get in the middle of something that gets that much bad press?”
New York Times columnist William Safire noted that while LifeLog researchers might be comfortable recording their lives, the people that the LifeLoggers are “looking at, listening to, sniffing or conspiring with to blow up the world” might not be so thrilled about turning over some of their private interchanges to the Pentagon.
In response, Darpa changed the LifeLog proposal request. Now: “LifeLog researchers shall not capture imagery or audio of any person without that person’s a priori express permission. In fact, it is desired that capture of imagery or audio of any person other than the user be avoided even if a priori permission is granted.”
Steven Aftergood, with the Federation of American Scientists, sees the alterations as evidence that Darpa proposals must receive a thorough public vetting.
“Darpa doesn’t spontaneously modify their programs in this way,” he said. “It requires public criticism. Give them credit, however, for acknowledging public concerns.”
While the Pentagon’s project to record and catalog a person’s life scares privacy advocates, researchers see it as a step in the process of getting computers to think like humans.
To Pentagon researchers, capturing and categorizing every aspect of a person’s life is only the beginning.
LifeLog — the controversial Defense Department initiative to track everything about an individual — is just one step in a larger effort, according to a top Pentagon research director. Personalized digital assistants that can guess our desires should come first. And then, just maybe, we’ll see computers that can think for themselves.
Computer scientists have dreamed for decades of building machines with minds of their own. But these hopes have been overwhelmed again and again by the messy, dizzying complexities of the real world.
In recent months, the Defense Advanced Research Projects Agency has launched a series of seemingly disparate programs — all designed, the agency says, to help computers deal with the complexities of life, so they finally can begin to think.
“Our ultimate goal is to build a new generation of computer systems that are substantially more robust, secure, helpful, long-lasting and adaptive to their users and tasks. These systems will need to reason, learn and respond intelligently to things they’ve never encountered before,” said Ron Brachman, the recently installed chief of Darpa’s Information Processing Technology Office, or IPTO. A former senior executive at AT&T Labs, Brachman was elected president of the American Association for Artificial Intelligence last year.
LifeLog is the best-known of these projects. The controversial program intends to record everything about a person — what he sees, where he goes, how he feels — and dump it into a database. Once captured, the information is supposed to be spun into narrative threads that trace relationships, events and experiences.
For years, researchers have been able to get programs to make sense of limited, tightly proscribed situations. Navigating outside of the lab has been much more difficult. Until recently, even getting a robot to walk across the room on its own was a tricky task.
“LifeLog is about forcing computers into the real world,” said leading artificial intelligence researcher Doug Lenat, who’s bidding on the project.
What LifeLog is not, Brachman asserts, is a program to track terrorists. By capturing so much information about an individual, and by combing relationships and traits out of that data, LifeLog appears to some civil libertarians to be an almost limitless tool for profiling potential enemies of the state. Concerns over the Terrorism Information Awareness database effort have only heightened sensitivities.
“These technologies developed by the military have obvious, easy paths to Homeland Security deployments,” said Lee Tien, with the Electronic Frontier Foundation.
Brachman said it is “up to military leaders to decide how to use our technology in support of their mission,” but he repeatedly insisted that IPTO has “absolutely no interest or intention of using any of our technology for profiling.”
What Brachman does want to do is create a computerized assistant that can learn about the habits and wishes of its human boss. And the first step toward this goal is for machines to start seeing, and remembering, life like people do.
Human beings don’t dump their experiences into some formless database or tag them with a couple of keywords. They divide their lives into discrete installments — “college,” “my first date,” “last Thursday.” Researchers call this “episodic memory.”
LifeLog is about trying to install episodic memory into computers, Brachman said. It’s about getting machines to start “remembering experiences in the commonsensical way we do — a vacation in Bermuda, a taxi ride to the airport.”
IPTO recently handed out $29 million in research grants to create a Perceptive Assistant that Learns, or PAL, that can draw on these episodes and improve itself in the process. If people keep missing conferences during rush hour, PAL should learn to schedule meetings when traffic isn’t as thick. If PAL’s boss keeps sending angry notes to spammers, the software secretary eventually should just start flaming on its own.
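A toy sketch of that adaptation loop: record each meeting as an episode, and shift the suggested slot once too many rush-hour meetings are missed. The rules, thresholds and fallback hour are invented for illustration and are not Darpa's PAL design.

```python
episodes = []  # each episode is a (scheduled_hour, attended) pair

def record_meeting(hour: int, attended: bool) -> None:
    episodes.append((hour, attended))

def suggest_meeting_hour(default_hour: int = 9, fallback_hour: int = 11) -> int:
    """Avoid hours at which past meetings were repeatedly missed."""
    missed = {}
    for hour, attended in episodes:
        if not attended:
            missed[hour] = missed.get(hour, 0) + 1
    return fallback_hour if missed.get(default_hour, 0) >= 2 else default_hour

record_meeting(9, False)   # missed: stuck in rush-hour traffic
record_meeting(9, False)   # missed again
record_meeting(11, True)   # attended
print(suggest_meeting_hour())  # -> 11, a slot outside rush hour
```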
In the 1980s, artificial intelligence researchers promised to create programs that could do just that. Darpa even promoted a thinking “pilot’s associate — a kind of R2D2,” said Alex Roland, author of The Race for Machine Intelligence: Darpa, DoD, and the Strategic Computing Initiative.
But the field “fell on its face,” according to University of Washington computer scientist Henry Kautz. Instead of trying to teach computers how to reason on their own, “we said, ‘Well, if we just keep adding more rules, we could cover every case imaginable.'”
It’s an impossible task, of course. Every circumstance is different, and there will never be enough stipulations to cover them all.
A few computer programs, with enough training from their human masters, can make some assumptions about new situations on their own, however. Amazon.com’s system for recommending books and music is one of these.
But these efforts are limited, too. Everyone’s received downright kooky suggestions from that Amazon program.
Overcoming these limitations requires a combination of logical approaches. That’s a goal behind IPTO’s new call for research into computers that can handle real-world reasoning.
It’s one of several problems Brachman said are “absolutely imperative” to solve as quickly as possible.
Although computer systems are getting more complicated every day, this complexity “may be actually reversing the information revolution,” he noted in a recent presentation. “Systems have grown more rigid, more fragile and increasingly open to attack.”
What’s needed, he asserts, is a computer network that can teach itself new capabilities, without having to be reprogrammed every time. Computers should be able to adapt to how their users like to work, spot when they’re being attacked and develop responses to these assaults. Think of it like the body’s immune system — or like a battlefield general.
But to act more like a person, a computer has to soak up its own experiences, like a human being does. It has to create a catalog of its existence. A LifeLog, if you will.
THE PENTAGON CANCELED its so-called LifeLog project, an ambitious effort to build a database tracking a person’s entire existence.
Run by Darpa, the Defense Department’s research arm, LifeLog aimed to gather in a single place just about everything an individual says, sees or does: the phone calls made, the TV shows watched, the magazines read, the plane tickets bought, the e-mail sent and received. Out of this seemingly endless ocean of information, computer scientists would plot distinctive routes in the data, mapping relationships, memories, events and experiences.
LifeLog’s backers said the all-encompassing diary could have turned into a near-perfect digital memory, giving its users computerized assistants with an almost flawless recall of what they had done in the past. But civil libertarians immediately pounced on the project when it debuted last spring, arguing that LifeLog could become the ultimate tool for profiling potential enemies of the state.
Researchers close to the project say they’re not sure why it was dropped late last month. Darpa hasn’t provided an explanation for LifeLog’s quiet cancellation. “A change in priorities” is the only rationale agency spokeswoman Jan Walker gave to Wired News.
However, related Darpa efforts concerning software secretaries and mechanical brains are still moving ahead as planned.
LifeLog is the latest in a series of controversial programs that have been canceled by Darpa in recent months. The Terrorism Information Awareness, or TIA, data-mining initiative was eliminated by Congress — although many analysts believe its research continues on the classified side of the Pentagon’s ledger. The Policy Analysis Market (or FutureMap), which provided a stock market of sorts for people to bet on terror strikes, was almost immediately withdrawn after its details came to light in July.
“I’ve always thought (LifeLog) would be the third program (after TIA and FutureMap) that could raise eyebrows if they didn’t make it clear how privacy concerns would be met,” said Peter Harsha, director of government affairs for the Computing Research Association.
“Darpa’s pretty gun-shy now,” added Lee Tien, with the Electronic Frontier Foundation, which has been critical of many agency efforts. “After TIA, they discovered they weren’t ready to deal with the firestorm of criticism.”
That’s too bad, artificial-intelligence researchers say. LifeLog would have addressed one of the key issues in developing computers that can think: how to take the unstructured mess of life and recall it as discrete episodes — a trip to Washington, a sushi dinner, construction of a house.
“Obviously we’re quite disappointed,” said Howard Shrobe, who led a team from the Massachusetts Institute of Technology Artificial Intelligence Laboratory which spent weeks preparing a bid for a LifeLog contract. “We were very interested in the research focus of the program … how to help a person capture and organize his or her experience. This is a theme with great importance to both AI and cognitive science.”
To Tien, the project’s cancellation means “it’s just not tenable for Darpa to say anymore, ‘We’re just doing the technology, we have no responsibility for how it’s used.'”
Private-sector research in this area is proceeding. At Microsoft, for example, minicomputer pioneer Gordon Bell’s program, MyLifeBits, continues to develop ways to sort and store memories.
David Karger, Shrobe’s colleague at MIT, thinks such efforts will still go on at Darpa, too.
“I am sure that such research will continue to be funded under some other title,” wrote Karger in an e-mail. “I can’t imagine Darpa ‘dropping out’ of such a key research area.”
MEANWHILE…
Google: seeded by the Pentagon
By Dr. Nafeez Ahmed
In 1994 — the same year the Highlands Forum was founded under the stewardship of the Office of the Secretary of Defense, the ONA, and DARPA — two young PhD students at Stanford University, Sergey Brin and Larry Page, made their breakthrough on the first automated web crawling and page ranking application. That application remains the core component of what eventually became Google’s search service. Brin and Page had performed their work with funding from the Digital Library Initiative (DLI), a multi-agency programme of the National Science Foundation (NSF), NASA and DARPA.
Throughout the development of the search engine, Sergey Brin reported regularly and directly to two people who were not Stanford faculty at all: Dr. Bhavani Thuraisingham and Dr. Rick Steinheiser. Both were representatives of a sensitive US intelligence community research programme on information security and data-mining.
Thuraisingham is currently the Louis A. Beecherl distinguished professor and executive director of the Cyber Security Research Institute at the University of Texas, Dallas, and a sought-after expert on data-mining, data management and information security issues. But in the 1990s, she worked for the MITRE Corp., a leading US defense contractor, where she managed the Massive Digital Data Systems initiative, a project sponsored by the NSA, CIA, and the Director of Central Intelligence, to foster innovative research in information technology.
“We funded Stanford University through the computer scientist Jeffrey Ullman, who had several promising graduate students working on many exciting areas,” Prof. Thuraisingham told me. “One of them was Sergey Brin, the founder of Google. The intelligence community’s MDDS program essentially provided Brin seed-funding, which was supplemented by many other sources, including the private sector.”
This sort of funding is certainly not unusual, and the fact that Sergey Brin received it as a graduate student at Stanford appears to have been incidental. The Pentagon was all over computer science research at this time. But it illustrates how deeply entrenched the culture of Silicon Valley is in the values of the US intelligence community.
In an extraordinary document hosted by the website of the University of Texas, Thuraisingham recounts that from 1993 to 1999, “the Intelligence Community [IC] started a program called Massive Digital Data Systems (MDDS) that I was managing for the Intelligence Community when I was at the MITRE Corporation.” The program funded 15 research efforts at various universities, including Stanford. Its goal was developing “data management technologies to manage several terabytes to petabytes of data,” including for “query processing, transaction management, metadata management, storage management, and data integration.”
At the time, Thuraisingham was chief scientist for data and information management at MITRE, where she led team research and development efforts for the NSA, CIA, US Air Force Research Laboratory, as well as the US Navy’s Space and Naval Warfare Systems Command (SPAWAR) and Communications and Electronic Command (CECOM). She went on to teach courses for US government officials and defense contractors on data-mining in counter-terrorism.
In her University of Texas article, she attaches a copy of an abstract of the US intelligence community’s MDDS program that had been presented to the “Annual Intelligence Community Symposium” in 1995. The abstract reveals that the primary sponsors of the MDDS programme were three agencies: the NSA, the CIA’s Office of Research & Development, and the intelligence community’s Community Management Staff (CMS), which operates under the Director of Central Intelligence. Administrators of the program, which provided funding of around 3–4 million dollars per year for 3–4 years, were identified as Hal Curran (NSA), Robert Kluttz (CMS), Dr. Claudia Pierce (NSA), Dr. Rick Steinheiser (ORD — standing for the CIA’s Office of Research and Development), and Dr. Thuraisingham herself.
Thuraisingham goes on in her article to reiterate that this joint CIA-NSA program partly funded Sergey Brin to develop the core of Google, through a grant to Stanford managed by Brin’s supervisor Prof. Jeffrey D. Ullman:
“In fact, the Google founder Mr. Sergey Brin was partly funded by this program while he was a PhD student at Stanford. He together with his advisor Prof. Jeffrey Ullman and my colleague at MITRE, Dr. Chris Clifton [Mitre’s chief scientist in IT], developed the Query Flocks System which produced solutions for mining large amounts of data stored in databases. I remember visiting Stanford with Dr. Rick Steinheiser from the Intelligence Community and Mr. Brin would rush in on roller blades, give his presentation and rush out. In fact the last time we met in September 1998, Mr. Brin demonstrated to us his search engine which became Google soon after.”
Brin and Page officially incorporated Google as a company in September 1998, the very month they last reported to Thuraisingham and Steinheiser. ‘Query Flocks’ was also part of Google’s patented ‘PageRank’ search system, which Brin developed at Stanford under the CIA-NSA-MDDS programme, as well as with funding from the NSF, IBM and Hitachi. That year, MITRE’s Dr. Chris Clifton, who worked under Thuraisingham to develop the ‘Query Flocks’ system, co-authored a paper with Brin’s supervisor, Prof. Ullman, and the CIA’s Rick Steinheiser. Titled ‘Knowledge Discovery in Text,’ the paper was presented at an academic conference.
“The MDDS funding that supported Brin was significant as far as seed-funding goes, but it was probably outweighed by the other funding streams,” said Thuraisingham. “The duration of Brin’s funding was around two years or so. In that period, I and my colleagues from the MDDS would visit Stanford to see Brin and monitor his progress every three months or so. We didn’t supervise exactly, but we did want to check progress, point out potential problems and suggest ideas. In those briefings, Brin did present to us on the query flocks research, and also demonstrated to us versions of the Google search engine.”
Brin thus reported to Thuraisingham and Steinheiser regularly about his work developing Google.
==
UPDATE 2.05PM GMT [2nd Feb 2015]:
Since publication of this article, Prof. Thuraisingham has amended her article referenced above. The amended version includes a new modified statement, followed by a copy of the original version of her account of the MDDS. In this amended version, Thuraisingham rejects the idea that CIA funded Google, and says instead:
“In fact Prof. Jeffrey Ullman (at Stanford) and my colleague at MITRE Dr. Chris Clifton together with some others developed the Query Flocks System, as part of MDDS, which produced solutions for mining large amounts of data stored in databases. Also, Mr. Sergey Brin, the cofounder of Google, was part of Prof. Ullman’s research group at that time. I remember visiting Stanford with Dr. Rick Steinheiser from the Intelligence Community periodically and Mr. Brin would rush in on roller blades, give his presentation and rush out. During our last visit to Stanford in September 1998, Mr. Brin demonstrated to us his search engine which I believe became Google soon after…
There are also several inaccuracies in Dr. Ahmed’s article (dated January 22, 2015). For example, the MDDS program was not a ‘sensitive’ program as stated by Dr. Ahmed; it was an Unclassified program that funded universities in the US. Furthermore, Sergey Brin never reported to me or to Dr. Rick Steinheiser; he only gave presentations to us during our visits to the Department of Computer Science at Stanford during the 1990s. Also, MDDS never funded Google; it funded Stanford University.”
Here, there is no substantive factual difference between Thuraisingham’s two accounts, other than her assertion that her earlier statement associating Sergey Brin with the development of ‘query flocks’ was mistaken. Notably, this acknowledgement derives not from her own knowledge, but from this very article quoting a comment from a Google spokesperson.
However, the bizarre attempt to disassociate Google from the MDDS program misses the mark.

Firstly, the MDDS never funded Google because, during the development of the core components of the Google search engine, there was no company incorporated with that name. The grant was instead provided to Stanford University through Prof. Ullman, through whom some MDDS funding was used to support Brin, who was co-developing Google at the time.

Secondly, Thuraisingham adds that Brin never “reported” to her or the CIA’s Steinheiser, but admits he “gave presentations to us during our visits to the Department of Computer Science at Stanford during the 1990s.” It is unclear, though, what the distinction is between reporting and delivering a detailed presentation — either way, Thuraisingham confirms that she and the CIA had taken a keen interest in Brin’s development of Google.

Thirdly, Thuraisingham describes the MDDS program as “unclassified,” but this does not contradict its “sensitive” nature. As someone who has worked for decades as an intelligence contractor and advisor, Thuraisingham is surely aware that there are many ways of categorizing intelligence, including ‘sensitive but unclassified.’ A number of former US intelligence officials I spoke to said that the almost total lack of public information on the CIA and NSA’s MDDS initiative suggests that although the program was not classified, its contents were likely considered sensitive, which would explain efforts to minimise transparency about the program and the way it fed back into developing tools for the US intelligence community.

Fourthly, and finally, it is important to point out that the MDDS abstract which Thuraisingham includes in her University of Texas document states clearly not only that the Director of Central Intelligence’s CMS, CIA and NSA were the overseers of the MDDS initiative, but that the intended customers of the project were “DoD, IC, and other government organizations”: the Pentagon, the US intelligence community, and other relevant US government agencies.
In other words, the provision of MDDS funding to Brin through Ullman, under the oversight of Thuraisingham and Steinheiser, was fundamentally because they recognized the potential utility of Brin’s work developing Google to the Pentagon, intelligence community, and the federal government at large.
==
The MDDS programme is actually referenced in several papers co-authored by Brin and Page while at Stanford, specifically highlighting its role in financially sponsoring Brin in the development of Google. In their 1998 paper published in the Bulletin of the IEEE Computer Society Technical Committee on Data Engineering, they describe the automation of methods to extract information from the web via “Dual Iterative Pattern Relation Extraction,” the development of “a global ranking of Web pages called PageRank,” and the use of PageRank “to develop a novel search engine called Google.” Through an opening footnote, Sergey Brin confirms he was “Partially supported by the Community Management Staff’s Massive Digital Data Systems Program, NSF grant IRI-96–31952” — confirming that Brin’s work developing Google was indeed partly funded by the CIA-NSA-MDDS program.
This NSF grant, identified alongside the MDDS in Brin’s footnote, is the one whose project report lists Brin among the students supported (without mentioning the MDDS); it was different from the NSF grant to Larry Page that included funding from DARPA and NASA. The project report, authored by Brin’s supervisor Prof. Ullman, goes on to say under the section ‘Indications of Success’ that “there are some new stories of startups based on NSF-supported research.” Under ‘Project Impact,’ the report remarks: “Finally, the google project has also gone commercial as Google.com.”
Thuraisingham’s account, including her new amended version, therefore demonstrates that the CIA-NSA-MDDS program was not only partly funding Brin throughout his work with Larry Page developing Google, but that senior US intelligence representatives including a CIA official oversaw the evolution of Google in this pre-launch phase, all the way until the company was ready to be officially founded. Google, then, had been enabled with a “significant” amount of seed-funding and oversight from the Pentagon: namely, the CIA, NSA, and DARPA.
The DoD could not be reached for comment.
When I asked Prof. Ullman to confirm whether or not Brin was partly funded under the intelligence community’s MDDS program, and whether Ullman was aware that Brin was regularly briefing the CIA’s Rick Steinheiser on his progress in developing the Google search engine, Ullman’s responses were evasive: “May I know whom you represent and why you are interested in these issues? Who are your ‘sources’?” He also denied that Brin played a significant role in developing the ‘query flocks’ system, although it is clear from Brin’s papers that he did draw on that work in co-developing the PageRank system with Page.
When I asked Ullman whether he was denying the US intelligence community’s role in supporting Brin during the development of Google, he said: “I am not going to dignify this nonsense with a denial. If you won’t explain what your theory is, and what point you are trying to make, I am not going to help you in the slightest.”
The MDDS abstract published online at the University of Texas confirms that the rationale for the CIA-NSA project was to “provide seed money to develop data management technologies which are of high-risk and high-pay-off,” including techniques for “querying, browsing, and filtering; transaction processing; accesses methods and indexing; metadata management and data modelling; and integrating heterogeneous databases; as well as developing appropriate architectures.” The ultimate vision of the program was to “provide for the seamless access and fusion of massive amounts of data, information and knowledge in a heterogeneous, real-time environment” for use by the Pentagon, intelligence community and potentially across government.
These revelations corroborate the claims of Robert Steele, former senior CIA officer and a founding civilian deputy director of the Marine Corps Intelligence Activity, whom I interviewed for The Guardian last year on open source intelligence. Citing sources at the CIA, Steele had said in 2006 that Steinheiser, an old colleague of his, was the CIA’s main liaison at Google and had arranged early funding for the pioneering IT firm. At the time, Wired founder John Battelle managed to get this official denial from a Google spokesperson in response to Steele’s assertions:
“The statements related to Google are completely untrue.”
This time round, despite multiple requests and conversations, a Google spokesperson declined to comment.
UPDATE: As of 5.41PM GMT [22nd Jan 2015], Google’s director of corporate communication got in touch and asked me to include the following statement:
“Sergey Brin was not part of the Query Flocks Program at Stanford, nor were any of his projects funded by US Intelligence bodies.”
This is what I wrote back:
My response to that statement would be as follows: Brin himself in his own paper acknowledges funding from the Community Management Staff of the Massive Digital Data Systems (MDDS) initiative, which was supplied through the NSF. The MDDS was an intelligence community program set up by the CIA and NSA. I also have it on record, as noted in the piece, from Prof. Thuraisingham of the University of Texas that she managed the MDDS program on behalf of the US intelligence community, and that she and the CIA’s Rick Steinheiser met Brin every three months or so for two years to be briefed on his progress developing Google and PageRank. Whether Brin worked on query flocks or not is neither here nor there.
In that context, you might want to consider the following questions:
1) Does Google deny that Brin’s work was part-funded by the MDDS via an NSF grant?
2) Does Google deny that Brin reported regularly to Thuraisingham and Steinheiser from around 1996 to 1998 until September that year when he presented the Google search engine to them?
LESSER-KNOWN FACT: AROUND THE SAME TIME (2004), SERGEY BRIN JOINED THE WORLD ECONOMIC FORUM’S YOUTH ORGANIZATION, THE “YOUNG GLOBAL LEADERS”
Total Information Awareness
A call for papers for the MDDS was sent out via email list on November 3rd 1993 from senior US intelligence official David Charvonia, director of the research and development coordination office of the intelligence community’s CMS. The reaction from Tatu Ylonen (celebrated inventor of the widely used secure shell [SSH] data protection protocol) to his colleagues on the email list is telling: “Crypto relevance? Makes you think whether you should protect your data.” The email also confirms that defense contractor and Highlands Forum partner, SAIC, was managing the MDDS submission process, with abstracts to be sent to Jackie Booth of the CIA’s Office of Research and Development via a SAIC email address.
By 1997, Thuraisingham reveals, shortly before Google became incorporated and while she was still overseeing the development of its search engine software at Stanford, her thoughts turned to the national security applications of the MDDS program. In the acknowledgements to her book, Web Data Mining and Applications in Business Intelligence and Counter-Terrorism (2003), Thuraisingham writes that she and “Dr. Rick Steinheiser of the CIA, began discussions with Defense Advanced Research Projects Agency on applying data-mining for counter-terrorism,” an idea that resulted directly from the MDDS program which partly funded Google. “These discussions eventually developed into the current EELD (Evidence Extraction and Link Detection) program at DARPA.”
So the very same senior CIA official and CIA-NSA contractor involved in providing the seed-funding for Google were simultaneously contemplating the role of data-mining for counter-terrorism purposes, and were developing ideas for tools actually advanced by DARPA.
Today, as illustrated by her recent op-ed in the New York Times, Thuraisingham remains a staunch advocate of data-mining for counter-terrorism purposes, but also insists that these methods must be developed by government in cooperation with civil liberties lawyers and privacy advocates to ensure that robust procedures are in place to prevent potential abuse. She points out, damningly, that with the quantity of information being collected, there is a high risk of false positives.
In 1993, when the MDDS program was launched and managed by MITRE Corp. on behalf of the US intelligence community, University of Virginia computer scientist Dr. Anita K. Jones — a MITRE trustee — landed the job of DARPA director and head of research and engineering across the Pentagon. She had been on the board of MITRE since 1988. From 1987 to 1993, Jones simultaneously served on SAIC’s board of directors. As the new head of DARPA from 1993 to 1997, she also co-chaired the Pentagon’s Highlands Forum during the period of Google’s pre-launch development at Stanford under the MDDS.
Thus, when Thuraisingham and Steinheiser were talking to DARPA about the counter-terrorism applications of MDDS research, Jones was DARPA director and Highlands Forum co-chair. That year, Jones left DARPA to return to her post at the University of Virginia. The following year, she joined the board of the National Science Foundation, which of course had also just funded Brin and Page, and also returned to the board of SAIC. When she left DoD, Senator Chuck Robb paid Jones the following tribute: “She brought the technology and operational military communities together to design detailed plans to sustain US dominance on the battlefield into the next century.”
Dr. Anita Jones, head of DARPA from 1993–1997 and co-chair of the Pentagon Highlands Forum from 1995–1997, during which time officials in charge of the CIA-NSA-MDDS program were funding Google and in communication with DARPA about data-mining for counterterrorism.
On the board of the National Science Foundation from 1992 to 1998 (including a stint as chairman from 1996) was Richard N. Zare. This was the period in which the NSF sponsored Sergey Brin and Larry Page in association with DARPA. In June 1994, Prof. Zare, a chemist at Stanford, participated with Prof. Jeffrey Ullman (who supervised Sergey Brin’s research), on a panel sponsored by Stanford and the National Research Council discussing the need for scientists to show how their work “ties to national needs.” The panel brought together scientists and policymakers, including “Washington insiders.”
DARPA’s EELD program, inspired by the work of Thuraisingham and Steinheiser under Jones’ watch, was rapidly adapted and integrated with a suite of tools to conduct comprehensive surveillance under the Bush administration.
According to DARPA official Ted Senator, who led the EELD program for the agency’s short-lived Information Awareness Office, EELD was among a range of “promising techniques” being prepared for integration “into the prototype TIA system.” TIA stood for Total Information Awareness, and was the main global electronic eavesdropping and data-mining program deployed by the Bush administration after 9/11. TIA had been set up by Iran-Contra conspirator Admiral John Poindexter, who was appointed in 2002 by Bush to lead DARPA’s new Information Awareness Office.
The Xerox Palo Alto Research Center (PARC) was another contractor among 26 companies (also including SAIC) that received million-dollar contracts from DARPA (the specific amounts remained classified) under Poindexter to push forward the TIA surveillance program from 2002 onwards. The research included “behaviour-based profiling,” “automated detection, identification and tracking” of terrorist activity, among other data-analyzing projects. At this time, PARC’s director and chief scientist was John Seely Brown. Both Brown and Poindexter were Pentagon Highlands Forum participants — Brown on a regular basis until recently.
TIA was purportedly shut down in 2003 due to public opposition after the program was exposed in the media, but the following year Poindexter participated in a Pentagon Highlands Group session in Singapore, alongside defense and security officials from around the world. Meanwhile, Ted Senator continued to manage the EELD program among other data-mining and analysis projects at DARPA until 2006, when he left to become a vice president at SAIC. He is now a SAIC/Leidos technical fellow.
Google, DARPA and the money trail
Long before the appearance of Sergey Brin and Larry Page, Stanford University’s computer science department had a close working relationship with US military intelligence. A letter dated November 5th 1984 from the office of renowned artificial intelligence (AI) expert, Prof Edward Feigenbaum, addressed to Rick Steinheiser, gives the latter directions to Stanford’s Heuristic Programming Project, addressing Steinheiser as a member of the “AI Steering Committee.” A list of attendees at a contractor conference around that time, sponsored by the Pentagon’s Office of Naval Research (ONR), includes Steinheiser as a delegate under the designation “OPNAV Op-115” — which refers to the Office of the Chief of Naval Operations’ program on operational readiness, which played a major role in advancing digital systems for the military.
From the 1970s, Prof. Feigenbaum and his colleagues had been running Stanford’s Heuristic Programming Project under contract with DARPA, continuing through to the 1990s. Feigenbaum alone had received over $7 million from DARPA for his work in this period, along with other funding from the NSF, NASA, and ONR.
Brin’s supervisor at Stanford, Prof. Jeffrey Ullman, was in 1996 part of a joint funding project of DARPA’s Intelligent Integration of Information program. That year, Ullman co-chaired DARPA-sponsored meetings on data exchange between multiple systems.
In September 1998, the same month that Sergey Brin briefed US intelligence representatives Steinheiser and Thuraisingham, tech entrepreneurs Andreas Bechtolsheim and David Cheriton invested $100,000 each in Google. Both investors were connected to DARPA.
As a Stanford PhD student in electrical engineering in the 1980s, Bechtolsheim developed his pioneering SUN workstation project with funding from DARPA and the Stanford computer science department — this research was the foundation of Bechtolsheim’s establishment of Sun Microsystems, which he co-founded with William Joy.
As for Bechtolsheim’s co-investor in Google, David Cheriton, the latter is a long-time Stanford computer science professor who has an even more entrenched relationship with DARPA. His bio at the University of Alberta, which in November 2014 awarded him an honorary science doctorate, says that Cheriton’s “research has received the support of the US Defense Advanced Research Projects Agency (DARPA) for over 20 years.”
In the meantime, Bechtolsheim left Sun Microsystems in 1995, co-founding Granite Systems with his fellow Google investor Cheriton as a partner. They sold Granite to Cisco Systems in 1996, retaining significant ownership of Granite, and becoming senior Cisco executives.
An email obtained from the Enron Corpus (a database of 600,000 emails acquired by the Federal Energy Regulatory Commission and later released to the public) from Richard O’Neill, inviting Enron executives to participate in the Highlands Forum, shows that Cisco and Granite executives are intimately connected to the Pentagon. The email reveals that in May 2000, Bechtolsheim’s partner and Sun Microsystems co-founder, William Joy — who was then chief scientist and corporate executive officer there — had attended the Forum to discuss nanotechnology and molecular computing.
In 1999, Joy had also co-chaired the President’s Information Technology Advisory Committee, overseeing a report acknowledging that DARPA had:
“… revised its priorities in the 90’s so that all information technology funding was judged in terms of its benefit to the warfighter.”
Throughout the 1990s, then, DARPA’s funding to Stanford, including Google, was explicitly about developing technologies that could augment the Pentagon’s military intelligence operations in war theatres.
The Joy report recommended more federal government funding from the Pentagon, NASA, and other agencies to the IT sector. Greg Papadopoulos, another of Bechtolsheim’s colleagues and then Sun Microsystems chief technology officer, also attended a Pentagon Highlands Forum meeting in September 2000.
In November, the Pentagon Highlands Forum hosted Sue Bostrom, who was vice president for the internet at Cisco, sitting on the company’s board alongside Google co-investors Bechtolsheim and Cheriton. The Forum also hosted Lawrence Zuriff, then a managing partner of Granite, which Bechtolsheim and Cheriton had sold to Cisco. Zuriff had previously been an SAIC contractor from 1993 to 1994, working with the Pentagon on national security issues, specifically for Marshall’s Office of Net Assessment. In 1994, both the SAIC and the ONA were, of course, involved in co-establishing the Pentagon Highlands Forum. Among Zuriff’s output during his SAIC tenure was a paper titled ‘Understanding Information War’, delivered at a SAIC-sponsored US Army Roundtable on the Revolution in Military Affairs.
After Google’s incorporation, the company received $25 million in equity funding in 1999 led by Sequoia Capital and Kleiner Perkins Caufield & Byers. According to Homeland Security Today, “A number of Sequoia-bankrolled start-ups have contracted with the Department of Defense, especially after 9/11 when Sequoia’s Mark Kvamme met with Defense Secretary Donald Rumsfeld to discuss the application of emerging technologies to warfighting and intelligence collection.” Similarly, Kleiner Perkins had developed “a close relationship” with In-Q-Tel, the CIA’s venture capital firm that funds start-ups “to advance ‘priority’ technologies of value” to the intelligence community.
John Doerr, who led the Kleiner Perkins investment in Google, obtaining a board position, was a major early investor in Bechtolsheim’s Sun Microsystems at its launch. He and his wife Anne are the main funders behind Rice University’s Center for Engineering Leadership (RCEL), which in 2009 received $16 million from DARPA for its platform-aware-compilation-environment (PACE) ubiquitous computing R&D program. Doerr also has a close relationship with the Obama administration, which he advised, shortly after it took office, to ramp up Pentagon funding to the tech industry. In 2013, at the Fortune Brainstorm TECH conference, Doerr applauded “how the DoD’s DARPA funded GPS, CAD, most of the major computer science departments, and of course, the Internet.”
From inception, in other words, Google was incubated, nurtured and financed by interests that were directly affiliated or closely aligned with the US military intelligence community: many of whom were embedded in the Pentagon Highlands Forum.
Google captures the Pentagon
In 2003, Google began customizing its search engine under special contract with the CIA for its Intelink Management Office, “overseeing top-secret, secret and sensitive but unclassified intranets for CIA and other IC agencies,” according to Homeland Security Today. That year, CIA funding was also being “quietly” funneled through the National Science Foundation to projects that might help create “new capabilities to combat terrorism through advanced technology.”
The following year, Google bought the firm Keyhole, which had originally been funded by In-Q-Tel. Using Keyhole, Google began developing the advanced satellite mapping software behind Google Earth. Former DARPA director and Highlands Forum co-chair Anita Jones had been on the board of In-Q-Tel at this time, and remains so today.
Then in November 2005, In-Q-Tel issued notices to sell $2.2 million of Google stocks. Google’s relationship with US intelligence was further brought to light when an IT contractor told a closed Washington DC conference of intelligence professionals on a not-for-attribution basis that at least one US intelligence agency was working to “leverage Google’s [user] data monitoring” capability as part of an effort to acquire data of “national security intelligence interest.”
A photo on Flickr dated March 2007 reveals that Google research director and AI expert Peter Norvig attended a Pentagon Highlands Forum meeting that year in Carmel, California. Norvig’s intimate connection to the Forum as of that year is also corroborated by his role in guest editing the 2007 Forum reading list.
The photo below shows Norvig in conversation with Lewis Shepherd, who at that time was senior technology officer at the Defense Intelligence Agency, responsible for investigating, approving, and architecting “all new hardware/software systems and acquisitions for the Global Defense Intelligence IT Enterprise,” including “big data technologies.” Shepherd now works at Microsoft. Norvig was a computer research scientist at Stanford University in 1991 before joining Bechtolsheim’s Sun Microsystems as senior scientist until 1994, and going on to head up NASA’s computer science division.
Lewis Shepherd (left), then a senior technology officer at the Pentagon’s Defense Intelligence Agency, talking to Peter Norvig (right), renowned artificial intelligence expert and director of research at Google. This photo is from a Highlands Forum meeting in 2007.
Norvig shows up on O’Neill’s Google Plus profile as one of his close connections. Scoping the rest of O’Neill’s Google Plus connections illustrates that he is directly connected not just to a wide range of Google executives, but also to some of the biggest names in the US tech community.
Those connections include Michele Weslander Quaid, an ex-CIA contractor and former senior Pentagon intelligence official who is now Google’s chief technology officer where she is developing programs to “best fit government agencies’ needs”; Elizabeth Churchill, Google director of user experience; James Kuffner, a humanoid robotics expert who now heads up Google’s robotics division and who introduced the term ‘cloud robotics’; Mark Drapeau, director of innovation engagement for Microsoft’s public sector business; Lili Cheng, general manager of Microsoft’s Future Social Experiences (FUSE) Labs; Jon Udell, Microsoft ‘evangelist’; Cory Ondrejka, vice president of engineering at Facebook; to name just a few.
In 2010, Google signed a multi-billion dollar no-bid contract with the NSA’s sister agency, the National Geospatial-Intelligence Agency (NGA). The contract was to use Google Earth for visualization services for the NGA. Google had developed the software behind Google Earth by purchasing Keyhole from the CIA venture firm In-Q-Tel.
Then, a year later, in 2011, another of O’Neill’s Google Plus connections, Michele Quaid — who had served in executive positions at the NGA, National Reconnaissance Office and the Office of the Director of National Intelligence — left her government role to become Google ‘innovation evangelist’ and the point-person for seeking government contracts. Quaid’s last role before her move to Google was as a senior representative of the Director of National Intelligence to the Intelligence, Surveillance, and Reconnaissance Task Force, and a senior advisor to the undersecretary of defense for intelligence’s director of Joint and Coalition Warfighter Support (J&CWS). Both roles involved information operations at their core. Before her Google move, in other words, Quaid worked closely with the Office of the Undersecretary of Defense for Intelligence, to which the Pentagon’s Highlands Forum is subordinate. Quaid has herself attended the Forum, though precisely when and how often I could not confirm.
In March 2012, then DARPA director Regina Dugan — who in that capacity was also co-chair of the Pentagon Highlands Forum — followed her colleague Quaid into Google to lead the company’s new Advanced Technology and Projects Group. During her Pentagon tenure, Dugan led on strategic cyber security and social media, among other initiatives. She was responsible for focusing “an increasing portion” of DARPA’s work “on the investigation of offensive capabilities to address military-specific needs,” securing $500 million of government funding for DARPA cyber research from 2012 to 2017.
Regina Dugan, former head of DARPA and Highlands Forum co-chair, now a senior Google executive — trying her best to look the part
By November 2014, Google’s chief AI and robotics expert James Kuffner was a delegate alongside O’Neill at the Highlands Island Forum 2014 in Singapore, to explore ‘Advancement in Robotics and Artificial Intelligence: Implications for Society, Security and Conflict.’ The event included 26 delegates from Austria, Israel, Japan, Singapore, Sweden, Britain and the US, from both industry and government. Kuffner’s association with the Pentagon, however, began much earlier. In 1997, Kuffner was a researcher during his Stanford PhD for a Pentagon-funded project on networked autonomous mobile robots, sponsored by DARPA and the US Navy.
Dr Nafeez Ahmed is an investigative journalist, bestselling author and international security scholar. A former Guardian writer, he writes the ‘System Shift’ column for VICE’s Motherboard, and is also a columnist for Middle East Eye. He is the winner of a 2015 Project Censored Award for Outstanding Investigative Journalism for his Guardian work.
Nafeez has also written for The Independent, Sydney Morning Herald, The Age, The Scotsman, Foreign Policy, The Atlantic, Quartz, Prospect, New Statesman, Le Monde diplomatique, New Internationalist, Counterpunch, Truthout, among others. He is the author of A User’s Guide to the Crisis of Civilization: And How to Save It (2010), and the scifi thriller novel ZERO POINT, among other books. His work on the root causes and covert operations linked to international terrorism officially contributed to the 9/11 Commission and the 7/7 Coroner’s Inquest.
A rich history of the government’s science funding
There was already a long history of collaboration between America’s best scientists and the intelligence community, from the creation of the atomic bomb and satellite technology to efforts to put a man on the moon.
In fact, the internet itself was created because of an intelligence effort: In the 1970s, the agency responsible for developing emerging technologies for military, intelligence, and national security purposes—the Defense Advanced Research Projects Agency (DARPA)—linked four supercomputers to handle massive data transfers. It handed the operations off to the National Science Foundation (NSF) a decade or so later, which proliferated the network across thousands of universities and, eventually, the public, thus creating the architecture and scaffolding of the World Wide Web.
Silicon Valley was no different. By the mid 1990s, the intelligence community was seeding funding to the most promising supercomputing efforts across academia, guiding the creation of efforts to make massive amounts of information useful for both the private sector as well as the intelligence community.
They funded these computer scientists through an unclassified, highly compartmentalized program that was managed for the CIA and the NSA by large military and intelligence contractors. It was called the Massive Digital Data Systems (MDDS) project.
The Massive Digital Data Systems (MDDS) project
MDDS was introduced to several dozen leading computer scientists at Stanford, CalTech, MIT, Carnegie Mellon, Harvard, and others in a white paper that described what the CIA, NSA, DARPA, and other agencies hoped to achieve. The research would largely be funded and managed by unclassified science agencies like NSF, which would allow the architecture to be scaled up in the private sector if it managed to achieve what the intelligence community hoped for.
“Not only are activities becoming more complex, but changing demands require that the IC [Intelligence Community] process different types as well as larger volumes of data,” the intelligence community said in its 1993 MDDS white paper. “Consequently, the IC is taking a proactive role in stimulating research in the efficient management of massive databases and ensuring that IC requirements can be incorporated or adapted into commercial products. Because the challenges are not unique to any one agency, the Community Management Staff (CMS) has commissioned a Massive Digital Data Systems [MDDS] Working Group to address the needs and to identify and evaluate possible solutions.”
Over the next few years, the program’s stated aim was to provide more than a dozen grants of several million dollars each to advance this research concept. The grants were to be directed largely through the NSF so that the most promising, successful efforts could be captured as intellectual property and form the basis of companies attracting investments from Silicon Valley. This type of public-to-private innovation system helped launch powerful science and technology companies like Qualcomm, Symantec, Netscape, and others, and funded the pivotal research in areas like Doppler radar and fiber optics, which are central to large companies like AccuWeather, Verizon, and AT&T today. Today, the NSF provides nearly 90% of all federal funding for university-based computer-science research.
MIT is but a Pentagon lab
The CIA and NSA’s end goal
The research arms of the CIA and NSA hoped that the best computer-science minds in academia could identify what they called “birds of a feather”: just as geese fly together in large V shapes, or flocks of sparrows make sudden movements together in harmony, they predicted that like-minded groups of humans would move together online. The intelligence community named their first unclassified briefing for scientists the “birds of a feather” briefing, and the “Birds of a Feather Session on the Intelligence Community Initiative in Massive Digital Data Systems” took place at the Fairmont Hotel in San Jose in the spring of 1995.
Their research aim was to track digital fingerprints inside the rapidly expanding global information network, which was then known as the World Wide Web. Could an entire world of digital information be organized so that the requests humans made inside such a network could be tracked and sorted? Could their queries be linked and ranked in order of importance? Could “birds of a feather” be identified inside this sea of information so that communities and groups could be tracked in an organized way?
By working with emerging commercial-data companies, their intent was to track like-minded groups of people across the internet and identify them from the digital fingerprints they left behind, much like forensic scientists use fingerprint smudges to identify criminals. Just as “birds of a feather flock together,” they predicted that potential terrorists would communicate with each other in this new global, connected world—and they could find them by identifying patterns in this massive amount of new information. Once these groups were identified, they could then follow their digital trails everywhere.
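As a rough illustration of the “birds of a feather” idea, the following Python sketch groups users whose search queries overlap. The users, queries, similarity measure and threshold are all invented here for illustration; nothing below is drawn from the MDDS programme or any actual intelligence tool.

from itertools import combinations

# Invented example data: each user's set of search queries.
user_queries = {
    "user_a": {"model rocketry", "propellant", "launch permits"},
    "user_b": {"propellant", "launch permits", "telemetry"},
    "user_c": {"sourdough starter", "bread flour"},
}

def jaccard(a, b):
    # Overlap between two query sets: shared terms divided by all terms.
    return len(a & b) / len(a | b)

# Pair up users whose queries overlap beyond an arbitrary threshold.
flocks = [
    (u, v) for u, v in combinations(user_queries, 2)
    if jaccard(user_queries[u], user_queries[v]) > 0.3
]

print(flocks)  # [('user_a', 'user_b')]: the two "birds of a feather"

Real systems would work over vastly larger data and far richer signals; the sketch only shows the basic clustering intuition the passage describes.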
Sergey Brin and Larry Page, computer-science boy wonders
In 1995, one of the first and most promising MDDS grants went to a computer-science research team at Stanford University with a decade-long history of working with NSF and DARPA grants. The primary objective of this grant was “query optimization of very complex queries that are described using the ‘query flocks’ approach.” A second grant—the DARPA-NSF grant most closely associated with Google’s origin—was part of a coordinated effort to build a massive digital library using the internet as its backbone. Both grants funded research by two graduate students who were making rapid advances in web-page ranking, as well as tracking (and making sense of) user queries: future Google cofounders Sergey Brin and Larry Page.
The research by Brin and Page under these grants became the heart of Google: people using search functions to find precisely what they wanted inside a very large data set. The intelligence community, however, saw a slightly different benefit in their research: Could the network be organized so efficiently that individual users could be uniquely identified and tracked?
This process is perfectly suited for the purposes of counter-terrorism and homeland security efforts: Human beings and like-minded groups who might pose a threat to national security can be uniquely identified online before they do harm. This explains why the intelligence community found Brin’s and Page’s research efforts so appealing; prior to this time, the CIA largely used human intelligence efforts in the field to identify people and groups that might pose threats. The ability to track them virtually (in conjunction with efforts in the field) would change everything.
It was the beginning of what in just a few years’ time would become Google. The two intelligence-community managers charged with leading the program met regularly with Brin as his research progressed, and he was an author on several other research papers that resulted from this MDDS grant before he and Page left to form Google.
The grants allowed Brin and Page to do their work and contributed to their breakthroughs in web-page ranking and tracking user queries. Brin didn’t work for the intelligence community—or for anyone else. Google had not yet been incorporated. He was just a Stanford researcher taking advantage of the grant provided by the NSA and CIA through the unclassified MDDS program.
Left out of Google’s story
The MDDS research effort has never been part of Google’s origin story, even though the principal investigator for the MDDS grant specifically named Google as directly resulting from their research: “Its core technology, which allows it to find pages far more accurately than other search engines, was partially supported by this grant,” he wrote. In a published research paper that includes some of Brin’s pivotal work, the authors also reference the NSF grant that was created by the MDDS program.
Instead, every Google creation story mentions just one federal grant: the NSF/DARPA “digital libraries” grant, which was designed to allow Stanford researchers to search the entire World Wide Web stored on the university’s servers at the time. “The development of the Google algorithms was carried on a variety of computers, mainly provided by the NSF-DARPA-NASA-funded Digital Library project at Stanford,” Stanford’s Infolab says of its origin, for example. NSF likewise only references the digital libraries grant, not the MDDS grant as well, in its own history of Google’s origin. In the famous research paper, “The Anatomy of a Large-Scale Hypertextual Web Search Engine,” which describes the creation of Google, Brin and Page thanked the NSF and DARPA for their digital library grant to Stanford. But the grant from the intelligence community’s MDDS program—specifically designed for the breakthrough that Google was built upon—has faded into obscurity.
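For readers unfamiliar with what PageRank actually computes, here is a minimal power-iteration sketch in Python of the idea described in that paper: a page ranks highly if highly ranked pages link to it. The three-page link graph is invented for illustration, and this toy code is not the production algorithm Brin and Page built.

# Invented toy web graph: each page lists the pages it links to.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}           # start with equal scores
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            share = damping * rank[page] / len(outlinks)  # split a page's score among its links
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

print(pagerank(links))  # "C" ends up with the highest score in this toy graph

The system described in the Anatomy paper combined this link-based score with other signals such as anchor text; the sketch is only meant to show the core ranking idea.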
Google has said in the past that it was not funded or created by the CIA. For instance, when stories circulated in 2006 that Google had received funding from the intelligence community for years to assist in counter-terrorism efforts, the company told Wired magazine founder John Battelle, “The statements related to Google are completely untrue.”
Did the CIA directly fund the work of Brin and Page, and therefore create Google? No. But were Brin and Page researching precisely what the NSA, the CIA, and the intelligence community hoped for, assisted by their grants? Absolutely.
To understand this significance, you have to consider what the intelligence community was trying to achieve as it seeded grants to the best computer-science minds in academia: The CIA and NSA funded an unclassified, compartmentalized program designed from its inception to spur the development of something that looks almost exactly like Google. Brin’s breakthrough research on page ranking by tracking user queries and linking them to the many searches conducted—essentially identifying “birds of a feather”—was largely the aim of the intelligence community’s MDDS program. And Google succeeded beyond their wildest dreams.
The intelligence community’s enduring legacy within Silicon Valley
Digital privacy concerns over the intersection between the intelligence community and commercial technology giants have grown in recent years. But most people still don’t understand the degree to which the intelligence community relies on the world’s biggest science and tech companies for its counter-terrorism and national-security work.
Civil-liberty advocacy groups have aired their privacy concerns for years, especially as they now relate to the Patriot Act. “Hastily passed 45 days after 9/11 in the name of national security, the Patriot Act was the first of many changes to surveillance laws that made it easier for the government to spy on ordinary Americans by expanding the authority to monitor phone and email communications, collect bank and credit reporting records, and track the activity of innocent Americans on the Internet,” says the ACLU. “While most Americans think it was created to catch terrorists, the Patriot Act actually turns regular citizens into suspects.”
When asked, the biggest technology and communications companies—from Verizon and AT&T to Google, Facebook, and Microsoft—say that they never deliberately and proactively offer up their vast databases on their customers to federal security and law enforcement agencies: They say that they only respond to subpoenas or requests that are filed properly under the terms of the Patriot Act.
But even a cursory glance through recent public records shows that there is a treadmill of constant requests that could undermine the intent behind this privacy promise. According to the data-request records that the companies make available to the public, in the most recent reporting period between 2016 and 2017, local, state and federal government authorities seeking information related to national security, counter-terrorism or criminal concerns issued more than 260,000 subpoenas, court orders, warrants, and other legal requests to Verizon, more than 250,000 such requests to AT&T, and nearly 24,000 subpoenas, search warrants, or court orders to Google. Direct national security or counter-terrorism requests are a small fraction of this overall group of requests, but the Patriot Act legal process has now become so routinized that the companies each have a group of employees who simply take care of the stream of requests.
In this way, the collaboration between the intelligence community and big, commercial science and tech companies has been wildly successful. When national security agencies need to identify and track people and groups, they know where to turn – and do so frequently. That was the goal in the beginning. It has succeeded perhaps more than anyone could have imagined at the time.
Sebastian Thrun had been entertaining the idea of self-driving cars for many years. Born and raised in Germany, he was fascinated with the power and performance of German cars. Things changed in 1986, when he was 18 and his best friend died in a car crash because the driver, another friend, was going too fast in his new Audi Quattro.
As a student at the University of Bonn, Thrun developed several autonomous robotic systems that earned him international recognition. At the time, Thrun was convinced that self-driving cars would soon make transportation safer, avoiding crashes like the one that took his friend’s life.
In 1998, he became an assistant professor and co-director of the Robot Learning Laboratory at Carnegie Mellon University. In July 2003, Thrun left Carnegie Mellon for Stanford University, soon after the first DARPA Grand Challenge was announced. Before accepting the new position, he asked Red Whittaker, the leader of the CMU robotics department, to join the team developing the vehicle for the DARPA race. Whittaker declined. After moving to California, Thrun joined the Stanford Racing Team.
On Oct. 8, 2005, the Stanford Racing Team won $2 million for being the first team to complete the 132-mile DARPA Grand Challenge course in California’s Mojave Desert. Their robot car, “Stanley,” finished in just under 6 hours and 54 minutes and averaged over 19 mph on the course.
Google’s Page wanted to develop self-driving cars
Two years after the third Grand Challenge, Google co-founder Larry Page called Thrun, wanting to turn the experience of the DARPA races into a product for the masses.
When Page first approached Thrun about building a self-driving car that people could use on the real roads, Thrun told him it couldn’t be done.
But Page had a vision, and he would not abandon his quest for an autonomous vehicle.
Thrun recalled that a short time later, Page came back to him and said, “OK, you say it can’t be done. You’re the expert. I trust you. So I can explain to Sergey [Brin] why it can’t be done, can you give me a technical reason why it can’t be done?”
Finally, Thrun accepted Page’s offer and, in 2009, started Project Chauffeur, which began as the Google self-driving car project.
The Google 101,000-Mile Challenge
To develop the technology for Google’s self-driving car, Thrun called Chris Urmson and offered him the position of chief technical officer of the project.
To encourage the team to build a vehicle, and its systems, to drive on any public road, Page created two challenges, with big cash rewards for the entire team: a 1,000-mile challenge to show that Project Chauffeur’s car could drive in several situations, including highways and the streets of San Francisco, and another 100,000-mile challenge to show that driverless cars could be a reality in a few years.
By the middle of 2011, Project Chauffeur engineers completed the two challenges.
In 2016, the Google self-driving car project became Waymo, a “spinoff under Alphabet as a self-driving technology company with a mission to make it safe and easy for people and things to move around.”
Urmson led Google’s self-driving car project for nearly eight years. Under his leadership, Google vehicles accumulated 1.8 million miles of test driving.
In 2018, Waymo One, the first fully self-driving vehicle taxi service, began in Phoenix, Arizona.
From Waymo to Aurora
In 2016, after finishing development of the production-ready version of Waymo’s self-driving technology, Urmson left Google to start Aurora Innovation, a startup backed by Amazon, aiming to provide the full-stack solution for self-driving vehicles.
Urmson believes that in 20 years, we’ll see much of the transportation infrastructure move over to automation. – Arrow.com
TO BE CONTINUED
Here’s a peek into the next episode:
Facebook Hired a Former DARPA Head To Lead An Ambitious New Research Lab
If you need another sign that Facebook’s world-dominating ambitions are just getting started, here’s one: the Menlo Park, Calif. company has hired a former DARPA chief to lead its new research lab.
Facebook CEO Mark Zuckerberg announced April 14 that Regina Dugan will guide Building 8, a new research group developing hardware projects that advance the company’s efforts in virtual reality, augmented reality, artificial intelligence and global connectivity.
Dugan served as the head of the Pentagon’s Defense Advanced Research Projects Agency from 2009 to 2012. Most recently, she led Google’s Advanced Technology and Projects Lab, a highly experimental arm of the company responsible for developing new hardware and software products on a strict two-year timetable.
To be continued? Our work and existence, as media and as people, is funded solely by our most generous supporters. But we’re not really covering our costs so far, and we’re in dire need of upgrading our equipment, especially for video production. Help SILVIEW.media survive and grow, please donate here, anything helps. Thank you!
! Articles may always be subject to later editing as a way of perfecting them
Valve, the company behind Half-Life and Counter-Strike, has just announced that the video games giant is ushering humanity into a Brave New World. How so? Merely by including new technologies called brain-computer interfaces in its games. Please read below a great, brief report from The Organic Prepper, followed by a few of my own comments:
Brain-Computer Interfaces: Don’t Worry, It’s Just a “Game”
by Robert Wheeler
BCIs will work on our feelings by adjusting the game accordingly
The head of Valve, Gabe Newell, has stated that the future of video games will involve “Brain-computer interfaces.” Newell added that BCIs would soon create superior experiences to those we currently perceive through our eyes and ears.
Newell said he envisions the gaming devices detecting a gamer’s emotions and then adjusting the settings to modify the player’s mood. For example, increasing the difficulty level when the player is getting bored.
Valve is currently developing its own BCIs and working on “modified VR head straps” that developers can use to experiment with signals coming from the brain. “If you’re a software developer in 2022 who doesn’t have one of these in your test lab, you’re making a silly mistake,” Newell said.
VR headsets will collect data by reading our brain signals
Valve is working with OpenBCI headsets. OpenBCI unveiled a headset design back in November that it calls Galea. It is designed to work alongside VR headsets like Valve’s Index.
“We’re working on an open-source project so that everybody can have high-resolution [brain signal] read technologies built into headsets, in a bunch of different modalities,” Newell added.
“Software developers for interactive experience[s] — you’ll be absolutely using one of these modified VR head straps to be doing that routinely — simply because there’s too much useful data,” said Newell.
The data collected by the head straps would consist of readings from the players’ brains and bodies. The data would essentially indicate whether the player is excited, surprised, bored, sad, afraid, amused, or feeling other emotions. The modified head strap would then use that information to improve “immersion and personalize what happens during games.”
The world will seem flat and colorless in comparison to the one created in your mind
Newell also discussed taking the brain-reading technology a step further and creating a situation to send signals to people’s minds. (Such as changing their feelings and delivering better visuals during games.)
“You’re used to experiencing the world through eyes,” Newell said, “but eyes were created by this low-cost bidder that didn’t care about failure rates and RMAs, and if it got broken, there was no way to repair anything effectively, which totally makes sense from an evolutionary perspective, but is not at all reflective of consumer preferences.”
“So the visual experience, the visual fidelity we’ll be able to create — the real world will stop being the metric that we apply to the best possible visual fidelity.
“Where it gets weird is when who you are becomes editable through a BCI.” ~ Gabe Newell
Typically, the average person accepts their feelings as being how they truly feel. Newell claims that BCIs will allow those feelings to be edited digitally.
“One of the early applications I expect we’ll see is improved sleep — sleep will become an app that you run where you say, ‘Oh, I need this much sleep, I need this much REM,’” he said.
Newell also claims that another benefit could be the reduction or complete removal of unwanted feelings or brain conditions.
Doesn’t something good come from this technology?
Newell and Valve are working on something beyond merely the improvement of the video game experience. There is now significant bleed-over between the research conducted by Newell’s team and the prosthetics and neuroscience industries.
Valve is trading research for expertise, contributing to projects developing synthetic body parts.
“This is what we’re contributing to this particular research project,” he said, “and because of that, we get access to leaders in the neuroscience field who teach us a lot about the neuroscience side.”
Are we equipped to experience things we have never experienced?
Newell briefly mentioned some potential negatives of the technology. For example, he said BCIs could cause people to experience physical pain, even pain beyond their physical body.
“You could make people think they [are] hurt by injuring their tool, which is a complicated topic in and of itself,” he said.
Game developers might harness that function to make a player feel the pain of the character they are playing when that character is injured — perhaps to a lesser degree.
Like any other form of technology, Newell says there’s a degree of trust in using it and that not everyone will feel comfortable with connecting their brain to a computer.
He says no one will be forced to do anything they don’t want to do, and that people will likely follow others if they have good experiences, likening BCI technology to cellular phones.
“People are going to decide for themselves if they want to do it. Nobody makes people use a phone,” Newell said.
“I’m not saying that everybody is going to love and insist that they have a brain-computer interface. I’m just saying each person is going to decide for themselves whether or not there’s an interesting combination of feature, functionality, and price.”
But Newell warned that BCIs come with one other significant risk. He says, “Nobody wants to say, ‘Remember Bob? Remember when Bob got hacked by the Russian malware? Yeah, that sucked. Is he still running naked through the forests?’”
Is this just another step in separating us from ourselves?
The truth is we will continue to be told to ignore the implications of this type of technology and the direction in which we are heading. Because, of course, they ARE developing prosthetics, and this is an advance in scientific discovery. Still, one step forward by an agenda and a plan created long ago only brings us that much closer to losing our ability to remember. – The Organic Prepper
As for the Silview.media contribution to this report, I only have a few things for you to chew on, but I think they can keep your mind busy for a very long time:
1. What if this technology can be made to work both ways and adjust your feelings to the experience?
2. What if this technology can be upscaled to the Internet of All Things and your life experience in “intelligent cities”?
3. Please enter “DARPA” in our Search utility and see how that plays out with 1. & 2.
Connor Russomanno and Joel Murphy show off their Editor’s Choice Blue Ribbon during World Maker Faire 2015
For several years, Connor Russomanno and Joel Murphy have been designing brain-computer interfaces (BCIs) as part of their company, OpenBCI. It’s a tricky proposition; subtle brain waves can be measured, but it’s difficult to read them and even more difficult to control them. So for its latest device, the team launched a crowdfunding campaign for the BCI Ganglion, a sub-$100 device to measure brain, muscle, and heart activity. (Tracking muscles in addition to electrical signals from the scalp increases accuracy.)
They also announced the Ultracortex Mark IV, a 3D printable headset designed to hold electrodes for electrical measurements by the Ganglion. Unlike existing devices that accomplish similar data acquisition, the Ganglion and Ultracortex Mark IV are open source (hardware and software), supported by an active user community, and lower in cost by thousands of dollars.
This means whether you want to record brainwaves for research purposes or create a brain-computer interface between five friends and a flying shark, it is possible and even affordable.
In one particularly far-out project, the TransAtlantic Biodata Communication hackathon, one person wired with OpenBCI was able to control a second person also wearing the device — even on opposite sides of the ocean.
But whether it’s wacky experiments, practical home projects, or academic research, the Ganglion offers a number of tools and sensors for various applications.
Specifications
4 channel biosensors
Sample rates of 128, 256, 512, and 1,024 Hz
Used for EEG, EMG, or ECG
Wireless BLE connection with Simblee, an Arduino-compatible BLE radio module
SD card slot for local storage
Accelerometer
Connects wirelessly to the OpenBCI Processing sketch
The Ultracortex Mark IV is not ready at launch; the headset is currently in the concept stage of development. But not to worry, previous headsets from OpenBCI are compatible with the new Ganglion. Here are the design specifications the team is working on:
Simplified assembly
Higher node count (especially above the motor cortex & the visual cortex)
Increased comfort
How the Ganglion works
Interfacing the human brain with computers is all about monitoring electrical activity. The Ultracortex Mark IV holds electrodes against your head and they are wired to the Ganglion. The Ganglion monitors the electrical activity of neurons in the brain at each electrode — also known as brainwaves.
From a computing perspective, the brainwaves constitute a series of analog values, which the Ganglion samples and converts to digital values. This conversion is done using a specialized chip on the Ganglion known as an analog-to-digital converter (ADC). ADC chips are common in all sorts of electronics, not just BCI devices. If you have used an Arduino to read an analog sensor value, then you have used an ADC.
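If you want an intuition for what that conversion looks like in code, here is a minimal Python sketch I put together; it is purely illustrative and has nothing to do with OpenBCI’s actual firmware or API. It simulates sampling a slow sine wave (standing in for a brainwave) at a fixed rate and quantizing each analog voltage to a 12-bit integer code, which is the essence of what any ADC does.

```python
import math

def adc_convert(voltage, v_ref=3.3, bits=12):
    """Quantize an analog voltage (0..v_ref volts) to an integer ADC code."""
    levels = 2 ** bits
    code = int(voltage / v_ref * (levels - 1))
    return max(0, min(levels - 1, code))  # clamp to the valid code range

def sample_signal(duration_s=1.0, rate_hz=256):
    """Simulate sampling a 10 Hz 'brainwave-like' sine riding on a DC offset."""
    samples = []
    for i in range(int(duration_s * rate_hz)):
        t = i / rate_hz
        analog = 1.65 + 0.05 * math.sin(2 * math.pi * 10 * t)  # volts
        samples.append(adc_convert(analog))
    return samples

if __name__ == "__main__":
    data = sample_signal()
    print(f"Collected {len(data)} samples; first five codes: {data[:5]}")
```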
The Ganglion board mounted in the Mark IV headset. Exploding out of the Mark IV are the electrode nodes.
While the ADC chip OpenBCI used in the past was extremely powerful, it accounted for much of the cost of the device. The predecessor to the Ganglion, the OpenBCI 32-bit board, used a robust Texas Instruments ADS1299, which cost a whopping $36 per unit at quantity and $58 in low volume. While the ADS1299 chip is fantastic for sampling, it was far more advanced and expensive than most people need. When Russomanno and Murphy set out to lower the cost of their BCI device, the first thing they did was find a cheaper ADC. They were able to swap the $36 chip for a much more affordable $6 ADC.
Cutting the cost by nearly $400 compared with their previous BCI board, the OpenBCI team is pushing expectations for high-quality, low-cost science devices. Asked what defines a successful crowdfunding campaign apart from reaching a financial goal, Russomanno explains: “It is lowering the barrier to entry” and “getting the entire OpenBCI platform so it’s approachable by a passionate high schooler or undergraduate.”
The older OpenBCI 32-bit attached to a Mark III headset
I hope the word “hackable” from the headline above stuck with you.
Sometimes my memes are 3D. And you can own them. Or send them to someone. You can even eat some of them. CLICK HERE
You may have heard of the famous brain-mapping initiative by the US Government / Pentagon / DARPA. It’s been widely publicized as a version of the Human Genome Project, meant to bring countless health benefits. But both have lately proven to be falsely advertised.
2013
In 2018, a US journalist filed a FOIA request regarding Antifa / BLM and received a surprise bonus, which went semi-viral and soon faded. I’m digging it up again for a new autopsy, required in light of the latest revelations regarding DARPA, the BRAIN Initiative and others. If you’re new to this site, I can’t recommend enough using the search engine to find our articles on DARPA, biohacking and mRNA technology.
And here’s the original 2018 article, which makes much more sense once you have the background I just pointed to.
Washington State Fusion Center accidentally releases records on remote mind control
Written by Curtis Waltman for Muckrock Magazine, April 18, 2018
As part of a request for records on Antifa and white supremacist groups, WSFC inadvertently bundles in “EM effects on human body.zip”
When you send thousands of FOIA requests, you are bound to get some very weird responses from time to time. Recently, we here at MuckRock had one of our most bizarre gets yet – Washington State Fusion Center’s accidental release of records on the effects of remote mind control.
Hmmm. What could that be? What does EM stand for and what is it doing to the human body? So I opened it up and took a look:
When I first saw this on the Internet, I wasn’t much impressed either; it seemed orphaned, without context. Now you have the context.
Hell yeah, dude.
EM stands for electromagnetic. What you are looking at here is “psycho-electronic” weapons that purportedly use electromagnetism to do a wide variety of horrible things to people, such as reading or writing your mind, causing intense pain, “rigor mortis,” or most heinous of all, itching.
Now to be clear, the presence of these records (which were not created by the fusion center, and are not government documents) should not be seen as evidence that DHS possesses these devices, or even that such devices actually exist. Which is kind of unfortunate because “microwave hearing” is a pretty cool line of technobabble to say out loud.
You know what’s even cooler? “Remote Brain Mapping.” It is insanely cool to say. Go ahead. Say it. Remote. Brain. Mapping.
Just check the detail on these slides too. The black helicopter shooting off its psychotronic weapons, mapping your brain, broadcasting your thoughts back to some fusion center. I wish their example of “ELF Brain stimulation” was a little clearer though.
It’s difficult to source exactly where these images come from, but it’s obviously not government material. One seems to come from a person named “Supratik Saha,” who is identified as a software engineer, the brain mapping slide has no sourcing, and the image of the body being assaulted by psychotronic weapons is sourced from raven1.net, who apparently didn’t renew their domain.
It’s entirely unclear how this ended up in this release. It could have been meant for another release, it could have been gathered for an upcoming WSFC report, or it could even be from the personal files of an intelligence officer that somehow got mixed up in the release. A call to the WSFC went unreturned as of press time, so until we hear back, their presence remains a mystery.
We’ll keep you updated once we hear back, and you can download the files yourself on the request page. – Muckrock Magazine
I don’t know why it’s so hard for the author to link mind control to the control of security threats, which is in the government’s job description just as it is on our personal agenda, but then again, microwave hearing is “technobabble” to him.
I find it much more startling that REMOTE brain mapping is a thing!
We gave up on our profit shares from masks, if you want to help us, please use the donation button! We think frequent mask use, even short term use can be bad for you, but if you have no way around them, at least send a message of consciousness. Get it here!
If you’re familiar with our reports, George Church is no stranger to you either. He’s a founding figure of the Human Genome Project, CRISPR and the BRAIN Initiative. But he’s not getting nearly the attention he deserves, considering he’s just turned our world upside down. Not by himself, of course.
Meet George Church
Remember when Fauci and Big Tech joined efforts to keep us in the dark with regard to the impact of mRNA on our genetics and DNA?
We’ve shown that there’s an entire new field of science that does just that: it contradicts what Fauci said by using RNA to reprogram DNA. YouTube and Facebook censored this.
Let’s see how they are going to argue with this gentle giant of the science world and all his dark entanglements:
Professor at Harvard & MIT, co-author of 580 papers, 143 patent publications & the book “Regenesis”; developed methods used for the first genome sequence (1994) & million-fold cost reductions since (via fluor-NGS & nanopores), plus barcoding, DNA assembly from chips, genome editing, writing & recoding; co-initiated BRAIN Initiative (2011) & Genome Projects (GP-Read-1984, GP-Write-2016, PGP-2005:world’s open-access personal precision medicine datasets); machine learning for protein engineering, tissue reprogramming, organoids, xeno-transplantation, in situ 3D DNA, RNA, protein imaging.
George Church is Professor of Genetics at Harvard Medical School and Director of PersonalGenomes.org, which provides the world’s only open-access information on human Genomic, Environmental & Trait data (GET). His 1984 Harvard PhD included the first methods for direct genome sequencing, molecular multiplexing & barcoding. These led to the first genome sequence (pathogen, Helicobacter pylori) in 1994. His innovations have contributed to nearly all “next generation” DNA sequencing methods and companies (CGI-BGI, Life, Illumina, Nanopore). This, plus his lab’s work on chip-DNA synthesis, gene editing and stem cell engineering, resulted in founding additional application-based companies spanning the fields of medical diagnostics (Knome/PierianDx, Alacris, AbVitro/Juno, Genos, Veritas Genetics) & synthetic biology / therapeutics (Joule, Gen9, Editas, Egenesis, enEvolv, WarpDrive). He has also pioneered new privacy, biosafety, ELSI, environmental & biosecurity policies. He is director of an IARPA BRAIN Project and NIH Center for Excellence in Genomic Science. His honors include election to NAS & NAE & Franklin Bower Laureate for Achievement in Science. He has coauthored 537 papers, 156 patent publications & one book (Regenesis).
He was part of a team of six[80] who, in a 2012 scientific commentary, proposed a Brain Activity Map, later named BRAIN Initiative (Brain Research through Advancing Innovative Neurotechnologies).[81] They outlined specific experimental techniques that might be used to achieve what they termed a “functional connectome”, as well as new technologies that will have to be developed in the course of the project,[80] including wireless, minimally invasive methods to detect and manipulate neuronal activity, either utilizing microelectronics or synthetic biology. In one such proposed method, enzymatically produced DNA would serve as a “ticker tape record” of neuronal activity. – Wikipedia
Wyss Institute Will Lead IARPA-Funded Brain Mapping Consortium
January 26, 2016
(BOSTON) — The Wyss Institute for Biologically Inspired Engineering at Harvard University today announced a cross-institutional consortium to map the brain’s neural circuits with unprecedented fidelity. The consortium is made possible by a $21 million contract from the Intelligence Advanced Research Projects Activity (IARPA) and aims to discover the brain’s learning rules and synaptic ‘circuit design’, further helping to advance neurally-derived machine learning algorithms.
The consortium will leverage the Wyss Institute’s FISSEQ (fluorescent in-situ sequencing) method to push forward neuronal connectomics, the science of identifying the neuronal cells that work together to bring about specific brain functions. FISSEQ was developed in 2014 by the Wyss Core Faculty member George Church and colleagues and, unlike traditional sequencing technologies, it provides a method to pinpoint the precise locations of specific RNA molecules in intact tissue. The consortium will harness this FISSEQ capability to accurately trace the complete set of neuronal cells and their connecting processes in intact brain tissue over long distances, which is currently difficult to do with other methods.
Awarded a competitive IARPA MICrONS contract, the consortium will further the overall goals of President Obama’s BRAIN initiative, which aims to improve the understanding of the human mind and uncover new ways to treat neuropathological disorders like Alzheimer’s disease, schizophrenia, autism and epilepsy. The consortium’s work will fundamentally innovate the technological framework used to decipher the principal circuits neurons use to communicate and fulfill specific brain functions. The learnings can be applied to enhance artificial intelligence in different areas of machine learning such as fraud detection, pattern and image recognition, and self-driving car decision making.
See how the Wyss-developed FISSEQ technology is able to capture the location of individual RNA molecules within cells, which will allow the reconstruction of neuronal networks in the 3-dimensional space of intact brain tissue. Credit: Wyss Institute at Harvard University
“Historically, the mapping of neuronal paths and circuits in the brain has required brain tissue to be sectioned and visualized by electron microscopy. Complete neurons and circuits are then reconstructed by aligning the individual electron microscope images; this process is costly and inaccurate due to the use of only one color (grey),” said Church, who is the Principal Investigator for the IARPA MICrONS consortium. “We are taking an entirely new approach to neuronal connectomics (immensely colorful barcodes) that should overcome this obstacle; and by integrating molecular and physiological information we are looking to render a high-definition map of neuronal circuits dedicated first to specific sensations, and in the future to behaviors and cognitive tasks.”
Church is Professor of Genetics at Harvard Medical School, and Professor of Health Sciences and Technology at Harvard and MIT.
To map neural connections, the consortium will genetically engineer mice so that each neuron is barcoded throughout its entire structure with a unique RNA sequence, a technique called BOINC (Barcoding of Individual Neuronal Connections) developed by Anthony Zador at Cold Spring Harbor Laboratory. Thus a complete map representing the precise location, shape and connections of all neurons can be generated.
The key to visualizing this complex map will be FISSEQ, which is able to sequence the total complement of barcodes and pinpoint their exact locations using a super-resolution microscope. Importantly, since FISSEQ analysis can be applied to intact brain tissue, the error-prone brain-sectioning procedure that is part of common mapping studies can be avoided and long neuronal processes can be more accurately traced in larger numbers and at a faster pace.
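To make the barcoding idea a bit more concrete, here is a toy Python sketch of my own; it is not the consortium’s pipeline, and the barcodes and read format are made up. The point is simply that if sequencing reports which presynaptic and postsynaptic barcodes co-occur at connection sites, a connectivity map falls out of counting those pairs.

```python
from collections import defaultdict

# Hypothetical sequencing output: each connection site reports the barcode of the
# presynaptic neuron and the barcode of the postsynaptic neuron found there.
reads = [
    ("AAGT", "CCTG"),
    ("AAGT", "GGAC"),
    ("TTCA", "CCTG"),
    ("AAGT", "CCTG"),  # a repeated read reinforces the same connection
]

def build_connectome(barcode_pairs):
    """Count how often each (pre, post) barcode pair is observed."""
    connectome = defaultdict(int)
    for pre, post in barcode_pairs:
        connectome[(pre, post)] += 1
    return dict(connectome)

if __name__ == "__main__":
    for (pre, post), count in build_connectome(reads).items():
        print(f"{pre} -> {post}: {count} read(s)")
```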
In addition, the scientists will provide the barcoded mice with a sensory stimulus, such as a flash of light, to highlight and glean the circuits corresponding to that stimulus within the much more complex neuronal map. An improved understanding of how neuronal circuits are composed and how they function over longer distances will ultimately allow the team to build new models for machine learning.
The multi-disciplinary consortium spans 6 institutions. In addition to Church, the Wyss Institute’s effort will be led by Samuel Inverso, Ph.D., who is a Staff Software Engineer and Co-investigator of the project. Complementing the Wyss team are co-Principal Investigators Anthony Zador, Ph.D., Alexei Koulakov, Ph.D., and Jay Lee, Ph.D., at Cold Spring Harbor Laboratory. Adam Marblestone, Ph.D., and Liam Paninski, Ph.D., are co-Investigator at MIT and co-Principal Investigator at Columbia University, respectively. The Harvard-led consortium is partnering with another MICrONS team led by Tai Sing Lee, Ph.D., of Carnegie Mellon University as Principal Investigator under a separate multi-million-dollar contract, with Sandra Kuhlman, Ph.D., of Carnegie Mellon University and Alan Yuille, Ph.D., of Johns Hopkins University as co-Principal Investigators, to develop computational models of the neural circuits and a new generation of machine learning algorithms by studying the behaviors of a large population of neurons in behaving animals, as well as the circuitry of these neurons revealed by the innovative methods developed by the consortium.
“It is very exciting to see how technology developed at the Wyss Institute is now becoming instrumental in showing how specific brain functions are wired into the neuronal architecture. The methodology implemented by this research can change the trajectory of brain mapping worldwide,” said Wyss Institute Founding Director Donald Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and the Vascular Biology Program at Boston Children’s Hospital and Professor of Bioengineering at the Harvard John A. Paulson School of Engineering and Applied Sciences. – WYSS Institute
IARPA is the CIA’s DARPA. DARPA IS RUN BY THE PENTAGON AND IARPA BY THE CIA. IARPA IS EVEN MORE SECRETIVE, DARING AND SOCIOPATHIC.
Machine Intelligence from Cortical Networks (MICrONS)
Intelligence Advanced Research Projects Activity (IARPA)
Brain Research through Advancing Innovative Neurotechnologies. (BRAIN)
Full Rosetta brains in situ:
A. Activity (MICrONS = Ca imaging; alternative = Tickertape)
B. Behavior (MICrONS & Alt = traditional video)
C. Connectome (MICrONS & Alt = BOINC via Cas9-barcode)
D. Developmental Lineage (via Cas9-barcode)
E. Expression (RNA & Protein via FISSEQ)
Flagship Pioneering’s Scientists Invent a New Category of Genome Engineering Technology: Gene Writing
Tessera Therapeutics emerges from three years of stealth operations to pioneer Gene Writing™ as a new genome engineering technology and category of genetic medicine
CAMBRIDGE, Mass., July 7, 2020 /PRNewswire/ — Flagship Pioneering today announced the unveiling of Tessera Therapeutics, Inc. a new company with the mission of curing disease by writing in the code of life. Tessera is pioneering Gene Writing™, a new biotechnology that writes therapeutic messages into the genome to treat diseases at their source.
Tessera’s Gene Writing platform is a potentially revolutionary breakthrough for genetic medicine that addresses key limitations of gene therapy and gene editing. Gene Writing technology can alter the genome by efficiently inserting genes and exons (parts of genes), introducing small insertions and deletions, or changing single or multiple DNA base pairs. The technology could enable cures for diseases that arise from errors in the genome, including monogenic disorders. It could also allow precise gene regulation in other diseases such as neurodegenerative diseases, autoimmune disorders, and metabolic diseases.
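Purely to visualize the kinds of edits the press release describes (insertions, deletions, single-base changes), here is a toy Python sketch that treats a genome as a string; it is a conceptual illustration only and obviously has nothing to do with Tessera’s actual technology.

```python
def write_gene(genome, site, payload):
    """Toy 'gene writing': insert a payload sequence at a target site (0-based index)."""
    return genome[:site] + payload + genome[site:]

def substitute_base(genome, position, new_base):
    """Toy single-base change at a given position."""
    return genome[:position] + new_base + genome[position + 1:]

original = "ATGCCGTAACGT"
edited = write_gene(original, 6, "GGGTTT")     # insert a short hypothetical sequence
corrected = substitute_base(original, 3, "T")  # change one base pair

print(original, "->", edited)
print(original, "->", corrected)
```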
“While profound advancements in genetic medicine over the last two decades had therapeutic promise for many previously untreatable diseases, the intrinsic properties of existing gene therapy and editing have significant shortcomings that limit their benefits to patients,” says Noubar Afeyan, Ph.D., founder and CEO of Flagship Pioneering and Chairman of Tessera Therapeutics. “Our scientists have invented a new technology, called Gene Writing, that has the ability to write therapeutic messages into the genomes of somatic cells. We created Tessera to pioneer its applications for medicine. However, the breakthrough is broad and could be applied to many different genomes from humans to plants to microorganisms.”
A New Era of Genetic Medicine
Geoffrey von Maltzahn, Ph.D., an MIT-trained biological engineer; Jacob Rubens, Ph.D., an MIT-trained synthetic biologist; and other scientists at Flagship Labs, the enterprise’s innovation foundry, co-founded Tessera in 2018 to create a platform that could design, make, and launch Gene Writing medicines. A General Partner at Flagship Pioneering, von Maltzahn has co-founded numerous biotechnology companies, including Sana Biotechnology, Indigo Agriculture, Kaleido Biosciences, Seres Therapeutics, and Axcella Health.
“DNA codes for life. But sometimes our DNA is written improperly, driving an enormous variety of diseases,” says von Maltzahn, Tessera’s Chief Executive Officer. “We started Tessera Therapeutics with a simple question: ‘What if Nature evolved a better solution than CRISPR for inserting curative therapeutic messages into the genome?’ It turns out that engineered and synthetic mobile genetic elements offer the potential to go beyond the limitations of gene editing technologies and allow Gene Writing. Our outstanding team of scientists is focused on bringing the vast promise of this new technology category to patients.”
Mobile genetic elements, the inspiration for Gene Writing, are evolution’s greatest genomic architect. The first mobile genetic element was discovered by Barbara McClintock, who won the 1983 Nobel Prize for revealing the mobile nature of genes. Mobile genetic elements code for the machinery to move or copy themselves into a new location in the genome, and they have been selected over billions of years to autonomously and efficiently “write” their DNA into new genomic sites. Today, mobile genetic elements are among the most abundant and ubiquitous genes in nature.
Over the past two years, Tessera has been mining genomes to discover novel mobile genetic elements and engineering them to create Gene Writing technology.
Tessera’s Gene Writers write therapeutic messages into the genome using RNA or DNA templates. RNA-based Gene Writing uses an RNA template and Gene Writer protein to either write a new gene into the genome or guide the rewriting of a pre-existing genomic sequence to make a small substitution, insertion, or deletion. DNA-based Gene Writing uses a DNA template to write a new gene into the genome.
By harnessing the biology of mobile genetic elements, Gene Writing holds the potential to overcome the limitations of current genetic medicine approaches by:
Efficiently writing small and large alterations to the genome of somatic cells with minimal reliance upon host DNA repair pathways, unlike nuclease-based gene editing technologies.
Permanently adding new DNA to dividing cells, unlike AAV-based gene therapy technologies.
Writing new DNA sequences into the genome by delivering only RNA.
Allowing repeated administration of treatments to patients in order to dose genetic medicines to effect, which is not possible with current gene therapies.
Tessera has licensed Flagship Pioneering’s intellectual property estate, which was begun in 2018 with seminal patent filings supporting both RNA and DNA Gene Writing technologies.
Tessera’s Scientific Advisory Board includes Luigi Naldini, David Schaffer, Andrew Scharenberg, Nancy Craig, George Church, Jonathan Weissman, and John Moran, who collectively have decades of experience in developing gene therapies and gene editing technologies, and also have commercial expertise from 4D, UniQure, Casebia, Cellectis, Magenta, and Editas. Tessera’s Board of Directors includes John Mendlein, Flagship Executive Partner and former CEO of multiple companies; Melissa Moore, Chair of Tessera’s Scientific Advisory Board, Chief Scientific Officer of Moderna, member of the National Academy of Sciences, and founding co-director of the RNA Therapeutics Institute; Geoffrey von Maltzahn; and Noubar Afeyan. The 30-person R&D team at Tessera has deep genetic medicine and startup expertise, including alumni from Editas, Intellia, Beam, Casebia, and Moderna.
About Tessera Therapeutics Tessera Therapeutics is an early-stage life sciences company pioneering Gene Writing™, a new biotechnology designed to offer scientists and doctors the ability to write and rewrite small and large therapeutic messages into the genome, thereby curing diseases at their source. Gene Writing holds the potential to become a new category in genetic medicine, building upon recent breakthroughs in gene therapy and gene editing, while eliminating important limitations in their reach, utilization and efficacy. Tessera Therapeutics was founded by Flagship Pioneering, a life sciences innovation enterprise that conceives, resources, and develops first-in-class category companies to transform human health and sustainability.
About Flagship Pioneering Flagship Pioneering conceives, creates, resources, and develops first-in-category life sciences companies to transform human health and sustainability. Since its launch in 2000, the firm has applied a unique hypothesis-driven innovation process to originate and foster more than 100 scientific ventures, resulting in over $34 billion in aggregate value. To date, Flagship is backed by more than $4.4 billion of aggregate capital commitments, of which over $1.9 billion has been deployed toward the founding and growth of its pioneering companies alongside more than $10 billion of follow-on investments from other institutions. The current Flagship ecosystem comprises 41 transformative companies, including Axcella Health (NASDAQ: AXLA), Denali Therapeutics (NASDAQ: DNLI), Evelo Biosciences (NASDAQ: EVLO), Foghorn Therapeutics, Indigo Ag, Kaleido Biosciences (NASDAQ: KLDO), Moderna (NASDAQ: MRNA), Rubius Therapeutics (NASDAQ: RUBY), Sana Biotechnology, Seres Therapeutics (NASDAQ: MCRB), and Syros Pharmaceuticals (NASDAQ: SYRS). – Flagship Pioneering
No more device batteries? Researchers at Georgia Institute of Technology’s ATHENA lab discuss an innovative way to tap into the over-capacity of 5G networks, turning them into “a wireless power grid” for powering Internet of Things (IoT) devices. The breakthrough leverages a Rotman lens-based rectifying antenna capable of millimeter-wave harvesting at 28 GHz. The innovation could help eliminate the world’s reliance on batteries for charging devices by providing an alternative using excess 5G capacity. – Georgia Tech, March 2021
We Could Really Have a Wireless Power Grid That Runs on 5G
This tech might make us say goodbye to batteries for good.
Popular Mechanics, April 30, 2021. Image courtesy of Christopher Moore / Georgia Tech.
Researchers at Georgia Tech have come up with a concept for a wireless power grid that runs on 5G’s mm-wave frequencies.
Because 5G base stations beam data through densely packed electromagnetic waves, the scientists have designed a device to capture that energy.
The star of the show is a specialized Rotman lens that can collect 5G’s electromagnetic energy from all directions.
If you’ve ever owned a Tile tracker—a square, white Bluetooth beacon that connects to your phone to help keep tabs on your wallet, keys, or whatever else you’re prone to losing—you’re familiar with low-power Internet-of-Things (IoT) devices.
Just like other small IoT devices, from voice assistants to tiny chemical sensors that can detect gas leaks, Tile trackers require a power source. It’s not realistic to hook these gadgets up to a wall outlet, and having to constantly change batteries is a waste of time that’s ultimately bad for the environment.
But what if you could wirelessly charge those devices with a power source that’s already all around you? Researchers at Georgia Tech have dreamed up this kind of “wireless power grid” with a small device that harvests the electromagnetic energy that 5G base stations routinely emit.
Just like the 3G and 4G cell phone towers that came before, 5G base stations radiate electromagnetic energy. At the moment, we’re only harnessing these precious bands of energy to transfer data (which helps you download your favorite Netflix series at lightning speeds).
With some crafty engineering, it’s possible to use 5G’s waves of energy as a form of wireless power, says Manos Tentzeris, Ph.D., a professor of flexible electronics at Georgia Tech. He leads the university’s ATHENA research group, where his team has fabricated a specialized Rotman lens “rectenna” that makes this energy collection possible.
If the idea takes off, this tiny device—which is really a small, high-tech sticker—can use the wireless power grid to charge up far more devices than just your Tile tracker. Your cell phone providers could start beaming out electricity to power all kinds of small electronics, from delivery drones to tracking tags for pallets in a “smart warehouse.” The possibilities are truly endless.
“If you’re talking about real-world implementation of all of these ambitious projects, such as IoT, smart cities, or digital twins … you need to have wireless sensors everywhere,” Tentzeris tells Pop Mech. “But currently, all of them need to have batteries.”
But Wait, How Does 5G Create Power?
Let’s start out with the basics: 5G technically is energy.
5G can seem like a black box to those of us who aren’t electrical engineers, but the premise hinges on something we can all understand: electromagnetic energy. Consider the visible spectrum, or all of the light you can see. It exists along the larger electromagnetic spectrum, but it’s really just a blip.
In the graphic below, you can see the visible spectrum is just between ultraviolet and infrared light, or between 400 and 700 nanometers. As energy increases along the electromagnetic spectrum, the waves become shorter and shorter—notice gamma rays are far more powerful, and have more densely packed waves than FM radio, for example. Human eyes can’t detect these waves of energy.
Graphic: Principles of Structural Chemistry
5G is also invisible and operates at a higher frequency than other communication standards we’re used to, like 3G or 4G. Those networks work at frequencies between about 1 and 6 gigahertz, while experts say 5G sits closer to the band between 24 and 90 gigahertz.
Because 5G waves function at a higher frequency, they’re more powerful, but also shorter in length. This is the primary reason why new infrastructure (like small 5G cells installed on utility poles) is required for 5G deployment: the waves have different characteristics. Shorter waves, for example, will see more interference from objects like trees and skyscrapers, and even droplets of rain or flakes of snow.
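You can check the “shorter waves” claim yourself: wavelength is just the speed of light divided by frequency. A few lines of Python (with round example frequencies of my own choosing) show why 28 GHz signals are called millimeter waves while a roughly 2 GHz 4G signal measures in centimeters.

```python
C = 299_792_458  # speed of light, m/s

def wavelength_mm(freq_ghz):
    """Wavelength in millimeters for a given frequency in GHz."""
    return C / (freq_ghz * 1e9) * 1000

for label, f in [("~2 GHz (4G)", 2), ("6 GHz (5G low band)", 6), ("28 GHz (5G mm-wave)", 28)]:
    print(f"{label}: {wavelength_mm(f):.1f} mm")
# 2 GHz -> ~150 mm, 28 GHz -> ~10.7 mm: the higher the frequency, the shorter the wave.
```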
But don’t think of a city’s constellation of 5G base stations as wasteful. Old standards, like 3G and 4G, are known for indiscriminately emitting power from massive service towers in all directions, beaming significant amounts of untapped energy. 5G base stations are much more efficient, says Jimmy Hester, Ph.D., a Georgia Tech alum who serves as senior lab advisor to the ATHENA group.
“Because they operate at high frequencies, [5G base stations] are much better able to focalize [power]. So there’s less waste in a sense,” Hester tells Pop Mech. “What we’re talking about is more of an intentional energization of the devices, themselves, by focalizing the beam towards the device in order to turn it on and power it.”
A ‘Tarantula’ Lens Takes Shape
The Rotman lens, pictured at the far right, can collect energy from multiple directions. Image courtesy of Georgia Tech’s ATHENA group.
There’s a drawback to this efficient focalization: 5G base stations transmit energy in a limited field of view. Think of it like a beam of energy moving in one direction, rather than a circle of energy emanating from a tower. The researchers call it a “pencil beam.” How could a small device precisely snatch up energy from all of these scattered base stations, especially when you can’t see the direction in which the waves are traveling?
Enter the Rotman lens, the key technology behind the team’s breakthrough energy-harvesting device. You can see Rotman lenses at work in military applications, like radar surveillance systems meant to identify targets in all directions without having to actually move the antenna. This isn’t the prototypical lens you’re used to seeing in a pair of glasses or in a microscope. It’s a flexible lens with metal backing, the team explains in a new research paper published in Scientific Reports.
“THE LENS IS LIKE A TARANTULA…[IT] CAN LOOK IN SIX DIFFERENT DIRECTIONS.”
“The same way the lens in your camera collects all of the [light] waves from any direction, and combines it to one point…to create an image, that’s exactly how [this] lens works,” Aline Eid, a Ph.D. student and senior researcher at the ATHENA lab, tells Pop Mech. “The lens is like a tarantula … because a tarantula has six eyes, and our system can also look in six different directions.”
The Rotman lens increases the energy collecting device’s field of view from the “pencil beam” of about 20 degrees to more than 120 degrees, Eid says, making it easier to collect millimeter-wave energy in the 28-gigahertz band. So even if you slapped the sticker onto a moving drone, you could still reliably collect energy from 5G base stations all over a city.
“If you stick these devices on a window, or if you stick these devices on a light pole, or in the middle of an orchard, you’re not going to know the map of the strongest-power base stations,” Tentzeris explains. “We had to make our harvesting devices direction agnostic.”
Your Cell Phone Plan, Reimagined
Image courtesy of Christopher Moore / Georgia Tech.
Tentzeris says he and his colleagues are looking for funding and eager to work with telecom companies. It makes sense: these companies could integrate the rectenna stickers around cities to augment the 5G networks they’re already building out. The end result could be a sort of new-age cell phone plan.
“In the beginning of the 2000s, companies moved from voice to data. Now, using this technology, they can add power to data/communication as well,” Tentzeris says.
Right now, the rectenna stickers can’t collect a huge amount of power—just about 6 microwatts of electricity, or enough to power some small IoT devices, from 180 meters away. But in lab tests, the device is still able to gather about 21 times more energy than similar devices in development.
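To put 6 microwatts in perspective, here is a back-of-the-envelope Python calculation; the duty-cycled sensor numbers are my own illustrative assumptions, not figures from the Georgia Tech paper.

```python
HARVESTED_W = 6e-6          # 6 microwatts, as reported at 180 m
SECONDS_PER_DAY = 86_400

energy_per_day_j = HARVESTED_W * SECONDS_PER_DAY
print(f"Energy harvested per day: {energy_per_day_j:.2f} J")  # roughly 0.5 J

# Hypothetical duty-cycled sensor: a 10 mW radio burst lasting 5 ms per report.
burst_energy_j = 10e-3 * 5e-3
print(f"Reports supported per day: {energy_per_day_j / burst_energy_j:.0f}")
```

In other words, a steady trickle like this is useless for a phone but plausible for a tag that wakes up, takes a reading, and transmits a short burst many times a day.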
Plus, accessibility is on the team’s side, since the system is fully printable. Tentzeris says it only costs a few cents to produce one unit through additive manufacturing. With that in mind, he says it’s possible to embed the rectenna sticker into a wearable or even stitch it into clothing.
“Scalability was very important, you’re talking about billions of devices,” Tentzeris says. “You could have a great prototype working in the lab, but when somebody asks, ‘Can everybody use it?’ you need to be able to say yes.” – POPULAR MECHANICS 2021
This is antiquated stuff by 2021 standards, but it gives you an idea. Initially, much of the nanotech was powered by the body’s own electricity, so it had very limited capabilities. 5G could power true robots.
ATHENA (Agile Technologies for High-performance Electromagnetic Novel Applications)
The ATHENA (Agile Technologies for High-performance Electromagnetic Novel Applications) group at Georgia Tech, led by Dr. Manos Tentzeris, explores advances and development of novel technologies for electromagnetic, wireless, RF and mm-wave applications in the telecom, defense, space, automotive and sensing areas.
In detail, the research activities of the 15-member group include Highly Integrated 3D RF Front-Ends for Convergent (Telecommunication, Computing and Entertainment) Applications, 3D Multilayer Packaging for RF and Wireless modules, Microwave MEMS, SOP-integrated antennas (ultrawideband, multiband, ultracompact) and antenna arrays using ceramic and conformal organic materials, and Adaptive Numerical Electromagnetics (FDTD, MultiResolution Algorithms).
The group includes the RFID/Sensors subgroup, which focuses on the development of paper-based RFIDs and RFID-enabled “rugged” sensors with printed batteries and power-scavenging devices operating in a variety of frequency bands [13.56 MHz-60 GHz]. In addition, members of the group deal with Bio/RF applications (e.g., breast tumor detection), micromachining (e.g., elevated patch antennas) and the development of novel electromagnetic simulator technologies and their applications to the design and optimization of modern RF/Microwave systems.
The numerical activity of the group primarily includes the finite-difference time-domain (FDTD) and multiresolution time-domain (MRTD) simulation techniques. It also covers hybrid numerical simulators capable of modeling multiple physical effects, such as electromagnetics and mechanical motion in MEMS devices and the combined effect of thermal, semiconductor electron transport, and electromagnetics for RF modules containing solid state devices.
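For readers wondering what an FDTD simulation actually does, here is a minimal one-dimensional toy in Python; it is a textbook exercise, not the group’s optimized parallel codes. Electric and magnetic fields live on staggered grid points and are updated in alternating “leapfrog” steps while a Gaussian pulse is injected at one cell.

```python
import math

def fdtd_1d(nz=200, steps=400, source_pos=100):
    """Minimal 1D FDTD: a Gaussian pulse propagating in free space (normalized units)."""
    ez = [0.0] * nz  # electric field
    hy = [0.0] * nz  # magnetic field
    for t in range(steps):
        # Update the magnetic field from the spatial difference of E
        for k in range(nz - 1):
            hy[k] += 0.5 * (ez[k + 1] - ez[k])
        # Update the electric field from the spatial difference of H
        for k in range(1, nz):
            ez[k] += 0.5 * (hy[k] - hy[k - 1])
        # Soft Gaussian source injected at one grid cell
        ez[source_pos] += math.exp(-0.5 * ((t - 30) / 10) ** 2)
    return ez

if __name__ == "__main__":
    field = fdtd_1d()
    peak = max(range(len(field)), key=lambda k: abs(field[k]))
    print(f"Peak field magnitude {field[peak]:.3f} at grid cell {peak}")
```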
The group maintains a 32-processor Linux Beowulf cluster to run its optimized parallel electromagnetic codes. In addition, the group uses these codes to develop novel microwave devices and ultracompact multiband antennas in a number of substrates and utilizes multilayer technology to miniaturize the size and maximize performance. Examples of target applications include cellular telephony (3G/4G), WiFi, WiMAX, Zigbee and Bluetooth, RFID ISO/EPC_Gen2, LMDS, radar, space applications, millimeter-wave sensors and surveillance devices and emerging standards for frequencies from 800 MHz to 100 GHz.
The activities are sponsored by NSF, NASA, DARPA and a variety of US and international corporations. – ATHENA
Smart implants designed for monitoring conditions inside the body, delivering drug doses, or otherwise treating diseases are clearly the future of medicine. But, just like a satellite is a useless hunk of metal in space without the right communication channels, it’s important that we can talk to these implants. Such communication is essential, regardless of whether we want to relay information and power to these devices or receive data in return.
Fortunately, researchers from Massachusetts Institute of Technology (MIT) and Brigham and Women’s Hospital may have found a way to help. Scientists at these institutes have developed a new method to power and communicate with implants deep inside the human body.
“IVN (in-vivo networking) is a new system that can wirelessly power up and communicate with tiny devices implanted or injected in deep tissues,” Fadel Adib, an assistant professor in MIT’s Media Lab, told Digital Trends. “The implants are powered by radio frequency waves, which are safe for humans. In tests in animals, we showed that the waves can power devices located 10 centimeters deep in tissue, from a distance of one meter.”
The same demonstrations, using pigs, showed that it is possible to extend this one-meter range up to 38 meters (125 feet), provided that the sensors are located very close to the skin’s surface. These sensors can be extremely small, due to their lack of an onboard battery. This is different from current implants, such as pacemakers, which have to power themselves since external power sources are not yet available. For their demo, the scientists used a prototype sensor approximately the size of a single grain of rice. This could be further shrunk down in the future, they said.
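As a crude illustration of why distance and tissue depth matter so much for wireless powering, here is a hedged Python sketch with made-up parameters; it is not MIT’s IVN model, just the generic picture of power spreading out in air and then attenuating roughly exponentially inside tissue.

```python
import math

def received_power_w(tx_power_w, distance_m, tissue_depth_m,
                     path_loss_exp=2.0, tissue_atten_np_per_m=40.0):
    """Toy model: inverse-power spreading in air, exponential attenuation in tissue.
    All parameter values are illustrative placeholders, not measured figures."""
    spreading = 1.0 / (4 * math.pi * distance_m ** path_loss_exp)
    tissue = math.exp(-tissue_atten_np_per_m * tissue_depth_m)
    return tx_power_w * spreading * tissue

print(f"{received_power_w(1.0, 1.0, 0.10):.2e} W reaching an implant 10 cm deep, 1 m away")
print(f"{received_power_w(1.0, 1.0, 0.01):.2e} W reaching an implant 1 cm deep, 1 m away")
```

Even in this toy picture, a centimeter of extra tissue costs far more than an extra meter of air, which is why the long-range demo only worked with sensors near the skin.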
“The incorporation of [this] system in ingestible or implantable device could facilitate the delivery of drugs in different areas of the gastrointestinal tracts,” Giovanni Traverso, an assistant professor at Brigham and Women’s Hospital and Harvard Medical School, told us. “Moreover, it could aid in sensing of a range of signals for diagnosis, and communicating those externally to facilitate the clinical management of chronic diseases.”
The IVN system is due to be shown off at the Association for Computing Machinery Special Interest Group on Data Communication (SIGCOMM) conference in August.
Buh-bye, Human race, you’ve just been assimilated by the Borg!
Initially I didn’t pay much attention to these reports, because the first ones were pretty vague and seemed unsubstantiated. They kind of were. But then they started to become more and more detailed, coherent and very specific. My own research on #biohacking started to intersect with them more often, to the point where today they almost coincide.
Video by Tim Truth
To better understand where I’m coming from, your journey needs to start here:
SOUTH SAN FRANCISCO, Calif., July 12, 2016 /PRNewswire/ — Profusa, Inc., a leading developer of tissue-integrated biosensors, today announced that it was awarded a $7.5 million grant from the Defense Advanced Research Projects Agency (DARPA) and the U.S. Army Research Office (ARO) to develop implantable biosensors for the simultaneous, continuous monitoring of multiple body chemistries. Aimed at providing real-time monitoring of a combat soldier’s health status to improve mission efficiency, the award supports further development of the company’s biosensor technology for real-time detection of the body’s chemical constituents. DARPA and ARO are agencies of the U.S. Department of Defense focused on developing emerging technologies for use by the military.
“Profusa’s vision is to replace a point-in-time chemistry panel that measures multiple biomarkers, such as oxygen, glucose, lactate, urea, and ions with a biosensor that provides a continuous stream of wireless data,” said Ben Hwang, Ph.D., Profusa’s chairman and chief executive officer. “DARPA’s mission is to make pivotal investments in breakthrough technologies for national security. We are gratified to be awarded this grant to accelerate the development of our novel tissue-integrating sensors for application to soldier health and peak performance.”
Tissue-integrating Biosensors for Multiple Biomarkers
Supported by DARPA, ARO and the National Institutes of Health, Profusa’s technology and unique bioengineering approach overcomes the largest hurdle in long-term use of biosensors in the body: the foreign body response. Placed just under the skin with a specially designed injector, each tiny biosensor is a flexible fiber, 2 mm to 5 mm long and 200-500 microns in diameter. Rather than being isolated from the body, Profusa’s biosensors work fully integrated within the body’s tissue — without any metal device or electronics — overcoming the effects of the foreign body response for more than one year.
Each biosensor is composed of a bioengineered “smart hydrogel” (similar to contact lens material) forming a porous, tissue-integrating scaffold that induces capillary and cellular in-growth from surrounding tissue. A unique property of the smart gel is its ability to luminesce upon exposure to light in proportion to the concentration of a chemical such as oxygen, glucose or another biomarker.
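To show what turning luminescence into a chemistry reading can look like in practice, here is a hypothetical Python calibration sketch of my own, not Profusa’s algorithm: fit a simple linear relation between known concentrations and measured luminescence, then invert it to estimate the concentration from a new reading.

```python
def calibrate_linear(concentrations, intensities):
    """Least-squares fit of intensity = a + b * concentration (toy calibration)."""
    n = len(concentrations)
    mean_c = sum(concentrations) / n
    mean_i = sum(intensities) / n
    b = sum((c - mean_c) * (i - mean_i) for c, i in zip(concentrations, intensities)) / \
        sum((c - mean_c) ** 2 for c in concentrations)
    a = mean_i - b * mean_c
    return a, b

def concentration_from_intensity(intensity, a, b):
    """Invert the calibration to estimate the biomarker concentration."""
    return (intensity - a) / b

# Made-up calibration points: known concentrations vs. measured luminescence
a, b = calibrate_linear([0, 50, 100, 150], [10.0, 35.2, 60.1, 84.9])
print(f"Estimated concentration at intensity 47.5: {concentration_from_intensity(47.5, a, b):.1f}")
```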
“Long-lasting, implantable biosensors that provide continuous measurement of multiple body chemistries will enable monitoring of a soldier’s metabolic and dehydration status, ion panels, blood gases, and other key physiological biomarkers,” said Natalie Wisniewski, Ph.D., the principal investigator leading the grant work and Profusa’s co-founder and chief technology officer. “Our ongoing program with DARPA builds on Profusa’s tissue-integrating sensor that overcomes the foreign body response and serves as a technology platform for the detection of multiple analytes.”
Lumee Oxygen Sensing System™
Profusa’s first medical product, the Lumee Oxygen Sensing System, is a single-biomarker sensor designed to measure oxygen. In contrast to the blood oxygen reported by other devices, the system incorporates the only technology that can monitor local tissue oxygen. When applied to the treatment of peripheral artery disease (PAD), it prompts the clinician to provide therapeutic action to ensure tissue oxygen levels persist throughout the treatment and healing process.
Pending CE Mark, the Lumee system is slated to be available in Europe in 2016 for use by vascular surgeons, wound-healing specialists and other licensed healthcare providers who may benefit in monitoring local tissue oxygen. PAD affects 202 million people worldwide, 27 million of whom live in Europe and North America, with an annual economic burden of more than $74 billion in the U.S. alone.
Profusa, Inc. Profusa, Inc., based in South San Francisco, Calif., is leading the development of novel tissue-integrated sensors that empowers an individual with the ability to monitor their unique body chemistry in unprecedented ways to transform the management of personal health and disease. Overcoming the body’s response to foreign material for long-term use, its technology promises to be the foundational platform of real-time biochemical detection through the development of tiny bioengineered sensors that become one with the body to detect and continuously transmit actionable, medical-grade data for personal and medical use. See http://www.profusa.com for more information.
The research is based upon work supported by DARPA, the Biological Technologies Office (BTO), and ARO grant [W911NF-16-1-0341]. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA, BTO, the ARO, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon.
So I can’t say with 100% certainty that what DARPA did and what people found are one and the same thing, but this hits close enough: if this is possible, that is possible, and altogether it gives us double the reasons to freak out.
I will keep adding resources and details here, but my point is made.