by Silviu “Silview” Costinescu_ Buy Me a Coffee at ko-fi.com

We are currently discussing with a small law firm the possibility of suing Reuters and Facebook for defamation. It’s mainly a financial issue, since we live in a pay-to-survive, pay-for-justice world, but we may find a way.
UPDATE: The deal we were first discussing folded, but just as it did, a better one came up. We’re very close to getting legal representation and opening a case against Reuters, for starters. Stay tuned!

READ OUR ORIGINAL ARTICLE:
ATOMIC BOMBSHELL: Rothschilds patented Covid-19 biometric tests in 2015. And 2017.

Reuters is not just the #1 news agency in the world, providing a huge chunk of the BS you hear every day on TV and radio. Reuters is also a prime fact-checker for Facebook, acting like some sort of elite Snopes.

… and by “elite” I meant “lowest scum”, speaking of Reuters.


And since our article on the Rothschild biometric Covid tests went viral, Facebook and Reuters collaborated to suppress it, censor it, defame us and obstruct public access to highly important information.
Let’s just take every sentence from the Reuters defamation piece and perform an autopsy.

The website points to a Dutch website that shows a patent for a “System and Method for Testing for COVID-19” (here).
FALSE: THAT’S ONE OF THREE DIFFERENT REGISTRIES WE LINK TO, AND IT’S NOT JUST “A DUTCH WEBSITE”, IT’S THE OFFICIAL GOVERNMENT REGISTRY, THE ULTIMATE AUTHORITY IN THE FIELD. THAT DOWNPLAYING IS INTENTIONAL AND DENOTES DEFAMATORY INTENT.

The patent is numbered ‘US20200279585A1’ and has a “Prioriteitsdatum” (Dutch for “priority date”) of “2015-10-13”.
NAILED THIS ONE, YOU CAN COPY-PASTE, HIGH FIVE!

The article claims that the 2015 priority date is evidence that the coronavirus pandemic has been planned.
QUOTE OR IT NEVER HAPPENED. BUT IT’S NICE THAT YOU BROUGHT IT UP.

We laughed at it earlier and made memes because we were indeed under heavy bans, but our article was thriving, and his tripe was so representative of astroturfers and of morons who believe Facebook invests hundreds of millions in fact-checkers to promote truth. Eighteen days later, it’s almost as if his tweet was the only “research” Reuters ever performed.

But the author has conflated the terms “priority date” and “application date”.
WE NEVER THOUGHT OF IT BEFORE YOU DID.

The priority date can refer to the earliest filing date in a family of related patent applications, or to the earliest filing date of a particular feature of an invention (here).
ACTUAL QUOTE FROM THE LINK THEY PROVIDE: “Priority date refers to the earliest filing date in a family of patent applications.”
“CAN BE” VS “IS”. DID THEY JUST ARGUE AGAINST WHAT IS WITH WHAT THEY ASSUME IT CAN BE?! :)))))))
A “FAMILY OF PATENT APPLICATIONS” CAN ALSO BE A SERIES OF SUCCESSIVE IMPROVEMENTS TO ONE PATENT, AND THAT IS THE CASE HERE.

In this case, Oct. 13, 2015 is when Rothschild first made a provisional application within this family of patents.
FALSE. ALL IT TAKES IS TO ACTUALLY CLICK THOSE LINKS AND READ THE CONTENT, BUT THEY HOPE YOU DON’T. IT’S THE SAME PATENT, IN AN EARLIER STAGE.

Wham-bam, we control your world-view, m’am!

A series of regular, non-provisional patent applications were subsequently made for a “System and Method For Using, Processing, and Displaying Biometric Data” (here).
FALSE: THEY ARE INCREMENTAL MODIFICATIONS OF THE SAME PATENT, AS THE LINK THEY PROVIDE SHOWS AND AS ANYONE CAN SEE.

These earlier patents are essentially the predecessors to ‘US20200279585A1’ – and as such share similar features, such as the use of biometric data (here).
THAT’S WHAT WE SAID, EXCEPT WE DIDN’T LIE AND DOWNPLAY IT BY CLAIMING THEY JUST “SHARE FEATURES” WHEN IT’S THE SAME THING WITH SMALL INCREMENTAL IMPROVEMENTS.

However, the patent for a system that analyses biometric data to determine whether the user is suffering from COVID-19 was not applied for until May 17, 2020 (here).
FALSE. THAT IS NOT A NEW PATENT; IT’S JUST THE LAST UPDATE TO THE ONE FILED IN 2015, WHEN THEY ADDED “COVID” TO THE NAME AND SPECIFICATIONS AND MADE THE FINAL TWEAKS FOR THE NEW MARKET, AS THE LINKS SHOW.

The article also claims to provide evidence of a patent for COVID-19 testing being filed for in 2017.
QUOTE OR IT NEVER HAPPENED. WHAT HAPPENED IS THIS TITLE, QUOTE:
“THIRD REGISTRATION: US, 2017 (ACTUALIZATION FROM 2015)”

It references the patent for a “System and Method for Using, Biometric, and Displaying Biometric Data” and its filing date of April 24, 2017 (here).
FALSE. IT REFERENCES THE SAME PATENT, AT WHATEVER STAGE OF DEVELOPMENT IT WAS AT THEN. NAMES CAN CHANGE; THE CONTENT DOESN’T, MUCH. AND REUTERS NEVER MENTIONS THE ACTUAL CONTENT.

As already discussed, although this patent is indeed a predecessor to ‘US20200279585A1’, it does not mention COVID-19 in any form.
OBVIOUS STRAW MAN: WE NEVER CLAIMED IT MENTIONS COVID-19; WE SHOWED THAT THE INVENTOR CLAIMS HIS 2015 INVENTION TESTS FOR COVID IN 2020.

REUTERS VERDICT

False. The year 2015 was when Rothschild first filed a provisional application within the family of patents. The year 2017 is the filing date of a related, but separate patent within the family.

SILVIEW.MEDIA VERDICT

THERE IS ONLY ONE TRUTHFUL PARAGRAPH IN REUTERS’ ARTICLE, AND THIS VERDICT IS NOT IT. BY ACCESSING THE LINKS THEY PROVIDE, YOU CAN VERIFY THAT IT’S ALL THE SAME PATENT. SEE FOR YOURSELVES, DON’T EAT PRE-CHEWED GARBAGE, AND ALWAYS REMEMBER OUR MOTTO:
DON’T BELIEVE WHAT WE SAY, RESEARCH WHAT WE SAY AND MAKE UP YOUR OWN MINDS!

PENALTY KICK: IT’S ROTHSCHILD AND BIDEN WEEKEND ACROSS THE SILVIEW.MEDIA NETWORK, WHICH IS NOT AS LARGE, BUT EXTENDS WAY BEYOND ROTHSCHILD MEDIA.

We are funded solely by our most generous readers, and we want to keep it that way. Help SILVIEW.media deliver more, better, faster; please donate here, anything helps. Thank you!

! Articles can always be subject to later editing as a way of perfecting them.


If you collapsed the world based on his test results, what are you going to do based on the information he presents below?

1. PCR tests are not a good tool for medical diagnosis and shouldn’t be used as such
2. AIDS science is a fraud
3. Climatology is a “Joke”

HIS EULOGY IN THE TOP SPANISH NEWSPAPER EL PAÍS

Kary Mullis grew up launching frogs into the sky on homemade rockets. He studied chemistry, left science for a couple of years to work in a bakery, earned a doctorate at the University of California, Berkeley, at the height of the psychedelic drug fever, and eventually invented, while driving his car, a technique that marked a before-and-after in biology: the polymerase chain reaction, a kind of molecular photocopying that allows a small segment of DNA to be copied millions of times. His revolutionary discovery allowed us to read the human genome, diagnose genetic disorders, identify corpses, and hunt serial killers by their DNA. Mullis, born in 1944 in Lenoir (USA), went on to win the Nobel Prize in Chemistry in 1993. He died on August 7 of pneumonia in the Californian city of Newport Beach, as his widow, Nancy Cosgrove, explained to The Washington Post.

The same American newspaper said in 1998 that Mullis was “possibly the strangest person ever to win a Nobel Prize in Chemistry.” In 1994, just one year after winning the prize, the researcher visited Spain to give the closing talk of the congress of the European Society for Clinical Research, in Toledo, but refused to talk about his great discovery. Instead, he decided to air his theory that AIDS is not caused by a virus but arises from exposure to many other pathogens.

“Mullis got laughs from his audience by commenting that he was off to Seville, ‘where there is some kind of festival in which one gets drunk all night.’ He illustrated the opening of his cumbersome, confusing talk with photographs he had taken of geometric images projected onto naked women,” El País recounted at the time. Mullis, a genius in his field, showed that a Nobel laureate can be a real charlatan outside his own discipline.

The French virologist Françoise Barré-Sinoussi, who discovered HIV in 1983, talked about Mullis in an interview with the same newspaper two years earlier. “I have never talked with him. I refuse to talk to people who spout idiocies,” said the researcher. “Scientific data has clearly demonstrated the link between the virus and the disease. These kinds of statements are dangerous. There are patients who have stopped treatment because of these claims and have fallen ill. You have to stop these people, because they are dangerous,” she added.

Mullis published his autobiography, Dancing Naked in the Mind Field, in 1998. In the book, the chemist tells how one night in 1985 he met “a glowing raccoon” in a forest he owned in Mendocino County, California. “Good evening, doctor,” the raccoon greeted him, according to Mullis’s delirious account. “To say it was an alien would be saying a lot. But to call it merely strange would be to underestimate it,” reflected the Nobel winner.

The polymerase chain reaction, known as PCR, changed science forever. Each cell keeps two meters of DNA folded into its tiny nucleus in an inconceivable way; the operating manual of life is written there. Until 1985, scientists needed huge amounts of DNA to analyze genetic information. But that year Mullis conceived a new strategy. When a DNA molecule is heated, its two complementary strands, normally coiled around each other like a spiral staircase, separate. By adding the fundamental building blocks of DNA, and with the help of an enzyme, each separated strand serves as a template for generating its complement, yielding a perfect photocopy of the original molecule. That way, millions of copies could be made in no time. According to Mullis, he had his eureka moment while driving from Emeryville, where he worked at the Cetus company, to his farm in Mendocino, the same property where he claimed to have seen that luminous, talkative, perhaps extraterrestrial raccoon.
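The copying scheme described above is exponential: each heating-and-copying cycle can at most double the amount of target DNA, so thirty cycles turn a single molecule into roughly a billion copies. A minimal sketch of that arithmetic (the function name and the idealized 100% efficiency are illustrative assumptions, not anything from the article):

```python
def pcr_copies(initial_copies: int, cycles: int, efficiency: float = 1.0) -> float:
    """Idealized PCR yield: each cycle multiplies the target by (1 + efficiency).

    efficiency = 1.0 models perfect doubling; real reactions fall short of that.
    """
    return initial_copies * (1 + efficiency) ** cycles

# One DNA molecule after 30 ideal cycles:
print(int(pcr_copies(1, 30)))  # 2**30 = 1073741824 copies
```

This is also why the technique is so sensitive: even a handful of starting molecules becomes a measurable quantity after a few dozen cycles.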

The American chemist, who devoted himself to surfing after winning the Nobel Prize, always boasted of swimming against the current. In a TED talk in 2002, Mullis recalled that the idea for PCR came to him in 20 minutes, and that if he had listened to his molecular biologist friends he would have abandoned it as impossible. “If I had sought out an authority in the field to ask whether the idea would work, he would have said no,” said the chemist. That same attitude toward the scientific consensus led him to deny the existence of the AIDS virus, and also that of global warming, which he called an invention of “parasites with degrees in economics or sociology.”

Mullis always knew that he would win the Nobel. In Dancing Naked in the Mind Field, the chemist says that his mentor at Berkeley, Joe Neilands, warned him in 1993 that he could win the prize that same year. The old biochemist, 23 years Mullis’s senior, recommended that he not talk so much to the press, to avoid ruining his candidacy. “Neilands told me the surfing and the women were probably fine, but he thought the (Nobel) committee might frown on the fact that I had admitted to taking LSD. Surfing, women, and LSD could be too much,” Mullis recalled in his autobiography. “We both knew I wouldn’t shut up.”
El País, 2019 (translated from Spanish)





One of the most maleficent characters in Trump’s menagerie is the psychopath he named leader of Operation Warp Speed: Moncef Slaoui, a former GSK and Moderna boss with a bigger body count than the Spanish Flu. Actually, Kushner picked him in Trump’s name, but anyway: after we wrote extensive viral exposés on his past, a team of “specialists” brushed up his online presence, and then he lay low for a while. But his silence is over, and his newest interviews confirm everything we’ve written about him and Covid-19.

For the best understanding of this article, you should read it as a follow-up to four previous pieces, which are essential reading anyway:

TRUMP’S NEW MOROCCAN “VACCINE CZAR”: WORKED FOR BILL GATES, GOOGLE, GSK. WORKED IN CHINA. TRANSHUMANIST. LOCKDOWN FANATIC

CORRUPTION UNLTD: GSK AND “TRUMP’S VACCINE CZAR”. SEX TAPES, DEAD BABIES, BRIBES AND PROSTITUTES

EXCLUSIVE: GATES, FAUCI AND SLAOUI HAVE LONG BEEN COOKING AND SELLING SCANDALOUS VACCINES TOGETHER. IT’S A CARTEL

IT’S NOT 5G AND COVID-19, IT’S DATA AND VACCINATIONS. US AND CHINA HAVE LONG USED WHO AS PLATFORM TO COLLABORATE ON THIS

“If you take the first Operation Warp Speed vaccine you will get an unexpected surprise: micromanaged tracking by Big Tech for up to two years, who will know more about you than you know about yourself. There is no guarantee that tracking will stop after two years,” writes Technocracy News.

“It should become apparent that the military/industrial complex that is running Warp Speed is functionally merged with Big Tech like Google and Oracle. And then, there is the federal government itself that is driving the entire vaccination program,” adds TN, and they’re not wrong.

Moncef Slaoui, the official head of Operation Warp Speed, told the Wall Street Journal last week that all Warp Speed vaccine recipients in the US will be monitored by “incredibly precise . . . tracking systems” for up to two years and that tech giants Google and Oracle would be involved.

Another highlight from Slaoui’s career, which looks more like a bloodbath.

Last week, a rare media interview given by the Trump administration’s “Vaccine Czar” offered a brief glimpse into the inner workings of the extremely secretive Operation Warp Speed (OWS), the Trump administration’s “public-private partnership” for delivering a Covid-19 vaccine to 300 million Americans by next January. What was revealed should deeply unsettle all Americans.

During an interview with the Wall Street Journal published last Friday, the “captain” of Operation Warp Speed, career Big Pharma executive Moncef Slaoui, confirmed that the millions of Americans who are set to receive the project’s Covid-19 vaccine will be monitored via “incredibly precise . . . tracking systems” that will “ensure that patients each get two doses of the same vaccine and to monitor them for adverse health effects.” Slaoui also noted that tech giants Google and Oracle have been contracted as part of this “tracking system” but did not specify their exact roles beyond helping to “collect and track vaccine data.”

The day before the Wall Street Journal interview was published, the New York Times published a separate interview with Slaoui where he referred to this “tracking system” as a “very active pharmacovigilance surveillance system.” During a previous interview with the journal Science in early September, Slaoui had referred to this system only as “a very active pharmacovigilance system” that would “make sure that when the vaccines are introduced that we’ll absolutely continue to assess their safety.” Slaoui has only recently tacked on the words “tracking” and “surveillance” to his description of this system during his relatively rare media interviews.

While Slaoui himself was short on specifics regarding this “pharmacovigilance surveillance system,” the few official documents from Operation Warp Speed that have been publicly released offer some details about what this system may look like and how long it is expected to “track” the vital signs and whereabouts of Americans who receive a Warp Speed vaccine.

This is basically what we meant by “It’s about data and vaccines” in our headline above. And 5G will follow Covid around, because all this data needs to be carried by a medium and many antennas. Which, while doing their work, can also produce Covid-like symptoms, as a bonus benefit for the Covidiocracy orchestrators.

Stuff that no one mentions in Slaoui’s romanticized biographies

The Last American Vagabond takes it from here into finer detail in one of his latest posts, demonstrating that we’re guinea pigs and showing how they will study us:

The Pharmacovigilantes

Two official OWS documents released in mid-September state that vaccine recipients—expected to include a majority of the US population—would be monitored for twenty-four months after the first dose of a Covid-19 vaccine is administered and that this would be done by a “pharmacovigilance system.”

In the OWS document entitled “From the Factory to the Frontlines,” the Department of Health and Human Services (HHS) and the Department of Defense (DOD) stated that, because Warp Speed vaccine candidates use new unlicensed vaccine production methods that “have limited previous data on safety in humans . . . the long-term safety of these vaccines will be carefully assessed using pharmacovigilance surveillance and Phase 4 (post-licensure) clinical trials.”

It continues:

The key objective of pharmacovigilance is to determine each vaccine’s performance in real-life scenarios, to study efficacy, and to discover any infrequent and rare side effects not identified in clinical trials. OWS will also use pharmacovigilance analytics, which serves as one of the instruments for the continuous monitoring of pharmacovigilance data. Robust analytical tools will be used to leverage large amounts of data and the benefits of using such data across the value chain, including regulatory obligations.

In addition, Moncef Slaoui and OWS’s vaccine coordinator, Matt Hepburn, formerly a program manager at the Pentagon’s controversial Defense Advanced Research Projects Agency (DARPA), had previously published an article in the New England Journal of Medicine that stated that “because some technologies have limited previous data on safety in humans, the long-term safety of these vaccines will be carefully assessed using pharmacovigilance surveillance strategies.”

The use of pharmacovigilance on those who receive the vaccine is also mentioned in the official Warp Speed “infographic,” which states that monitoring will be done in cooperation with the Food and Drug Administration (FDA) and the Centers for Disease Control and Prevention (CDC) and will involve “24 month post-trial monitoring for adverse effects.”

In a separate part of that same document, OWS describes one of its “four key tenets” as “traceability,” which has three goals: to “confirm which of the approved vaccines were administered regardless of location (private/public)”; to send a “reminder to return for second dose”; and to “administer the correct second dose.”

Regarding a Covid-19 vaccine requiring more than one dose, a CDC document associated with Operation Warp Speed states:

For most Covid-19 vaccine products, two doses of vaccine, separated by 21 or 28 days, will be needed. Because different Covid-19 vaccine products will not be interchangeable, a vaccine recipient’s second dose must be from the same manufacturer as their first dose. Second-dose reminders for vaccine recipients will be critical to ensure compliance with vaccine dosing intervals and achieve optimal vaccine effectiveness.
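The “second-dose reminder” logic the CDC document describes boils down to simple date arithmetic keyed to a product-specific interval. A hedged sketch (the function name and example date are my own illustration; only the 21- and 28-day intervals come from the passage above):

```python
from datetime import date, timedelta

def second_dose_due(first_dose: date, interval_days: int = 21) -> date:
    """Due date for the second dose of the same manufacturer's product.

    interval_days is product-specific: 21 or 28 days per the CDC passage above.
    """
    if interval_days not in (21, 28):
        raise ValueError("quoted guidance specifies a 21- or 28-day interval")
    return first_dose + timedelta(days=interval_days)

# First dose on 1 Dec 2020 with a 21-day product:
print(second_dose_due(date(2020, 12, 1)))  # 2020-12-22
```

Note the second constraint in the quote, that both doses must come from the same manufacturer, is what turns this trivial arithmetic into a per-person tracking requirement.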

The CDC document also references a document published in August by the Johns Hopkins Center for Health Security, associated with the Event 201 and Dark Winter simulations, as informing its Covid-19 vaccination strategy. The Johns Hopkins paper, which counts Dark Winter co-organizer Thomas Inglesby as one of its authors, argues that existing “passive reporting” systems managed by the CDC and FDA should be retooled to create “an active safety surveillance system directed by the CDC that monitors all [Covid-19] vaccine recipients—perhaps by short message service or other electronic mechanisms.”

Despite the claims in these documents that the “pharmacovigilance surveillance system” would intimately involve the FDA, top FDA officials stated in September that they were barred from attending OWS meetings and told reporters they could not explain the operation’s organization or when or with what frequency its leadership meets. The FDA officials did state, however, that they “are still allowed to interact with companies developing products for OWS,” STAT news reported.

In addition, the FDA has apparently “set up a firewall between the vast majority of staff and the initiative [Operation Warp Speed]” that appears to drastically limit the number of FDA officials with any knowledge of or involvement in Warp Speed. The FDA’s director of the Center for Drug Evaluation and Research, Janet Woodcock, is the only FDA official listed as having any direct involvement in OWS and appears to be personally managing this “firewall” at the FDA. Woodcock describes herself as a long-time advocate for the use of “big data” in the evaluation of drug and vaccine safety and has been intimately involved in FDA precursors to the coming Warp Speed “pharmacovigilance surveillance system” known as Sentinel and PRISM, both of which are discussed later in this report.

Woodcock is currently on a temporary leave of absence from her role as the director of the Center for Drug Evaluation and Research, which allows her to focus her complete attention on overseeing aspects of Operation Warp Speed on behalf of the FDA’s Office of the Commissioner. Her temporary replacement at the FDA, Patrizia Cavazzoni, is “very aligned with Janet and where the agency is going,” according to media reports. Cavazzoni is a former executive at Pfizer, one of the companies producing a vaccine for OWS. That vaccine is set to begin testing in children as young as 12 years old.

The extreme secrecy of Operation Warp Speed has affected not only the FDA but also the CDC, as a CDC expert panel normally involved in developing the government’s vaccine distribution strategies was “stonewalled” by Matt Hepburn, OWS’s vaccine coordinator, who bluntly refused to answer several of the panel’s “pointed questions” about the highly secretive operation.

More Secret Contracts

While Moncef Slaoui and Warp Speed documents provide few details regarding what this “tracking system” would entail, Slaoui did note in his recent interview with the Wall Street Journal that tech giants Google and Oracle had been contracted to “collect and track vaccine data” as part of this system. Neither Google nor Oracle, however, has announced receipt of a contract related to Operation Warp Speed, and the DOD and HHS, similarly, have yet to announce the awarding of any Warp Speed contract to either Google or Oracle. In addition, searches on the US government’s Federal Register and on the official website for federally awarded contracts came up empty for any contract awarded to Google or Oracle that would apply to any such “pharmacovigilance” system or any other aspect of Operation Warp Speed.

Given my previous reporting on the use of a nongovernment intermediary for awarding OWS contracts to vaccine companies, it seems likely that Warp Speed contracts awarded to Google and Oracle were made using a similar mechanism. In an October 6, 2020, report for The Last American Vagabond, I noted that $6 billion in Warp Speed contracts awarded to vaccine companies were made through Advanced Technology International (ATI), a government contractor that works mainly with the military and surveillance technology companies and whose parent company has strong ties to the CIA and the 2001 Dark Winter simulation. HHS, which is supposedly overseeing Operation Warp Speed, claimed to have “no record” of at least one of those contracts. Only one Warp Speed vaccine contract, which did not involve ATI and was awarded directly by HHS’s Biomedical Advanced Research and Development Authority, was recently obtained by KEI Online. Major parts of the contract, however, including the section on intellectual property rights, were redacted in their entirety.

If the Warp Speed contracts that have been awarded to Google and Oracle are anything like the Warp Speed contracts awarded to most of its participating vaccine companies, then those contracts grant those companies diminished federal oversight and exemptions from federal laws and regulations designed to protect taxpayer interests in the pursuit of the work stipulated in the contract. It also makes them essentially immune to Freedom of Information Act (FOIA) requests. Yet, in contrast to the unacknowledged Google and Oracle contracts, vaccine companies have publicly disclosed that they received OWS contracts, just not the terms or details of those contracts. This suggests that the Google and Oracle contracts are even more secretive.

A major conflict of interest worth noting is Google’s ownership of YouTube, which recently banned on its massive multimedia platform all “misinformation” related to concerns about a future Covid-19 vaccine. With Google now formally part of Operation Warp Speed, it seems likely that any concerns about OWS’s extreme secrecy and the conflicts of interest of many of its members (particularly Moncef Slaoui and Matt Hepburn) as well as any concerns about Warp Speed vaccine safety, allocation and/or distribution may be labeled “Covid-19 vaccine misinformation” and removed from YouTube.

From the NSA to the FDA: The New PRISM

Though the nature of this coming surveillance system for Covid-19 vaccine recipients has yet to be fully detailed by Warp Speed or the tech companies the operation has contracted, OWS documents and existing infrastructure at the FDA offer a clue as to what this system could entail.

For instance, the Warp Speed document “From the Factory to the Frontlines” notes that the pharmacovigilance system will be a new system created exclusively for OWS that will be “buil[t] off of existing IT [information technology] infrastructure” and will fill any “gaps with new IT solutions.” It then notes that “the Covid-19 vaccination program requires significant enhancement of the IT that will support enhancements and data exchange that are critical for a multi-dose candidate to ensure proper administration of a potential second dose.” The document also states that all data related to the OWS vaccine distribution effort “will be reported into a common IT infrastructure that will support analysis and reporting,” adding that this “IT infrastructure will support partners with a broad range of tools for record-keeping, data on who is being vaccinated, and reminders for second doses.”

Though some Warp Speed documents hint as to the existing IT systems that will serve as the foundation for this new tracking system, arguably the most likely candidate is the FDA-managed Sentinel Initiative, which was established in 2009 during the H1N1 Swine flu pandemic. Like Operation Warp Speed itself, Sentinel is a public-private partnership and involves the FDA, private business, and academia.

According to its website, Sentinel’s “main goal is to improve how FDA evaluates the safety and performance of medical products” through big data, with an additional focus on “learning more about potential side effects.” Media reports describe Sentinel as “an electronic surveillance system that aggregates data from electronic medical records, claims and registries that voluntarily participate and allows the agency to track the safety of marketed drugs, biologics and medical devices.”

One of Sentinel’s main proponents at the FDA is Janet Woodcock, who has aggressively worked to expand the program as director of the FDA’s Center for Drug Evaluation and Research, with a focus on Sentinel’s use in “post-market effectiveness studies.” As previously mentioned, Woodcock is the only FDA official listed among the ninety or so “leaders” of OWS, most of whom are part of the US military and lack any health-care or vaccine-production experience.

Woodcock’s temporary replacement at the FDA, Patrizia Cavazzoni, is also very active in efforts to expand Sentinel. STAT news reported earlier this year that Cavazzoni previously “served on the steering committee of I-MEDS, an FDA-industry partnership which allows drug makers to pay for use of the FDA’s real-world data system known as Sentinel to complete certain safety studies more quickly.”

Sentinel has a series of “collaborating partners” that “provide healthcare data and scientific, technical, and organizational expertise” to the initiative. These collaborating partners include intelligence contractor Booz Allen Hamilton, tech giant IBM, and major US health insurance companies such as Aetna and Blue Cross Blue Shield, among many others. In addition, Sentinel’s Innovation Center, which it describes as the program’s “test bed to identify, develop, and evaluate innovative methods,” is partnered with Amazon, General Dynamics, and Microsoft. Sentinel also has a Community Building and Outreach Center, which is managed by Deloitte consulting, one of the largest consultancy firms in the world that is known for seeking to fill its ranks with former CIA officials.

The Sentinel system’s specific surveillance program aimed at monitoring vaccine effectiveness is known as the Post-licensure Rapid Immunization Safety Monitoring Program, better known as PRISM. Sentinel’s PRISM was “developed to monitor vaccine safety, but [to date] has never been used to assess vaccine effectiveness.” PRISM was initially launched alongside the Sentinel Initiative itself in 2009 “in response to the need to monitor the safety of the H1N1 influenza vaccine” after it was licensed, marketed, and administered. Yet, as previously mentioned, PRISM has yet to be used to assess the effectiveness of any vaccine while quietly expanding for nearly a decade, which implies that the stakeholders in the Sentinel Initiative have a plan to implement this “safety surveillance system” at some point.

The name PRISM may remind readers of the National Security Agency (NSA) program of the same name that became well known throughout the United States following the Edward Snowden revelations. Given this association, it is worth noting that the NSA, as well as the Department of Homeland Security (DHS), are now officially part of Operation Warp Speed and appear to be playing a role in the development of Warp Speed’s “pharmacovigilance surveillance system.” The addition of the NSA and the DHS to the initiative, of course, greatly increases the involvement of US intelligence agencies in the operation, which itself is “dominated” by the military and sorely lacking in civilian public health officials.

CyberScoop first reported in early September that members of the NSA’s Cybersecurity Directorate were involved in Operation Warp Speed, with their role—as well as that of DHS—being framed mainly as offering “cybersecurity advice” to the initiative. However, the NSA and DHS are also offering “guidance” and “services” to both the other federal agencies involved in Warp Speed as well as OWS contractors, which now include Google and Oracle.

Google is well known for its cozy relationship with the NSA, including its PRISM program, and it has also backed NSA-supported legislation that would make it easier to surveil Americans without a warrant. Similarly, Oracle is a longtime NSA contractor and also has ties to the CIA dating back to its earliest days as a company, not unlike Google. Notably, Oracle and Google remain locked in a major legal battle over copyright issues that is set to be heard by the Supreme Court in the coming weeks and is expected to have major ramifications for the tech industry.



! Articles can always be subject of later editing as a way of perfecting them

by Silviu “Silview” Costinescu_ Buy Me a Coffee at ko-fi.com

Many times, to find out who caused a problem, you need to look at who’s selling the solution.

Justin Trudeau openly states Covidiocracy is all about #TheGreatReset

Remember “Event 201“? It was them, the World Economic Forum (WEF), alongside the World Bank, and the Bill and Melinda Gates Foundation, mainly.
Remember “The Great Reset“? Pretty much same combo.
And now they have “The World Economic Forum COVID Action Platform”, their Covidiocracy propaganda website, where they “care”.

Davos plays host to the World Economic Forum (WEF), an annual meeting of global political and business elites (often referred to simply as “Davos”), and has one of Switzerland’s biggest ski resorts.

Officially, the WEF is a Swiss non-profit foundation, set up in 1971 to “improve the state of the world by engaging business, political, academic, and other leaders of society to shape global, regional, and industry agendas”.
Right.

The best WEF portrait I’ve read comes from UK analyst Steven Guiness; here’s a substantial chunk that advances my point:

“Event 201 consisted of fifteen ‘players‘ that represented, amongst others, airlines and medical corporations. Out of these fifteen, six are direct partners of the World Economic Forum. One is the Bill and Melinda Gates Foundation, with the other five being Marriott International (hospitality), Henry Schein (medical distribution), Edelman (communications), NBCUniversal Media and Johnson & Johnson.

To be clear, these organisations do not all operate at the same level within the WEF. For instance, the Bill and Melinda Gates Foundation and Johnson and Johnson are ‘Strategic Partners‘, the highest stage for a participant. Only 100 global companies are Strategic Partners, and to qualify for an invitation they must all have ‘alignment with forum values‘. Not only that, but Strategic Partners ‘shape the future through extensive contribution to developing and implementing Forum projects and championing public-private dialogue.’

Next come the ‘Partners‘, which comprise Marriott International, Henry Schein and Edelman. Partners are described by the WEF as ‘world class companies‘ who possess a ‘strong interest in developing systemic solutions to key challenges‘.

Beneath the Strategic Partners are the ‘Strategic Partner Associates‘, which is the category that NBCUniversal Media fall under. Strategic Partner Associates include some of the largest businesses in the world, who are ‘actively involved in shaping the future of industries, regions and systemic issues‘. According to the WEF, associates also believe in ‘corporate global citizenship‘.

Finally, there are the ‘Associate Partners‘. Whilst they participate in ‘forum communities‘ and have a ‘strong interest in addressing challenges affecting operations and society at large‘, none were present at Event 201.

Every major industry in the world, be it banking, agriculture, healthcare, media, retail, travel and tourism, is directly connected to the World Economic Forum through corporate membership.

What is evident is that the deeper a corporation’s ties with the WEF, the greater its ability to ‘shape‘ the group’s agenda. Which brings us to what the WEF call their Strategic Intelligence platform – the mechanism which brings all the interests that the WEF concentrate on together.

They describe the platform as ‘a dynamic system of contextual intelligence that enables users to trace relationships and interdependencies between issues, supporting more informed decision-making‘.

As for why the WEF developed Strategic Intelligence, they say it was to ‘help you (businesses) understand the global forces at play and make more informed decisions‘.

Growing the platform is an ever present goal. The WEF are always looking for new members to become part of Strategic Intelligence by joining the ‘New Champions Community‘. But they will only allow a new organisation on board if they ‘align with the values and aspirations of the World Economic Forum in general‘. A 12 month ‘New Champions Membership‘ comes in at €24,000.

In arguing for the relevance of Strategic Intelligence, the WEF ask:

How can you decipher the potential impact of rapidly unfolding changes when you’re flooded with information—some of it misleading or unreliable? How do you continuously adapt your vision and strategy within a fast-evolving global context?

In other words, Strategic Intelligence is both an antidote to ‘fake news‘ and an assembly for corporations to position themselves as global pioneers in a rapidly changing political and technological environment. That’s the image they attempt to convey at least.

We can find more involvement from global institutions via Strategic Intelligence. The platform is ‘co-curated with leading topic experts from academia, think tanks, and international organizations‘.

Co-curators‘ are perhaps the most important aspect to consider here, given that they have the ability to ‘share their expertise with the Forum’s extensive network of members, partners and constituents, as well as a growing public audience‘.

It is safe to assume then that when co-curators speak, members and partners of the World Economic Forum listen. This in part is how the WEF’s agenda takes shape.

Who are the co-curators? At present, they include Harvard university, the Massachusetts Institute of Technology, Imperial College London, Oxford University, Yale and the European Council on Foreign Relations.

It was the Massachusetts Institute of Technology that in March published an article titled, ‘We’re not going back to normal‘, just as Covid-19 lockdowns were being implemented world wide. Citing a report by fellow co-curator Imperial College London that endorsed the imposition of tougher social distancing measures if hospital admissions begin to spike, MIT proclaimed that ‘social distancing is here to stay for much more than a few weeks. It will upend our way of life, in some ways forever.’

As well as co-curators there are what’s known as ‘Content Partners‘, who the WEF say are ‘amplified by machine analysis of more than 1,000 articles per day from carefully selected global think tanks, research institutes and publishers‘.

Content partners include Harvard university, Cambridge university, the Rand Corporation, Chatham House (aka the Royal Institute of International Affairs), the European Council on Foreign Relations and the Brookings Institute.

Getting into specifics, the way Strategic Intelligence is structured means that the higher your position in the corporate fold, the more ‘platforms‘ you can be part of. Whereas Strategic Partners must be part of a minimum of five platforms, Associate Partners only have access to a single platform of their choice.

Here is a list of some of the platforms hosted by the World Economic Forum:

  • COVID Action Platform
  • Shaping the Future of Technology Governance: Blockchain and Distributed Ledger Technologies
  • Shaping the Future of the New Economy and Society
  • Shaping the Future of Consumption
  • Shaping the Future of Digital Economy and New Value Creation
  • Shaping the Future of Financial and Monetary Systems
  • Shaping the Future of Technology Governance: Artificial Intelligence and Machine Learning
  • Shaping the Future of Trade and Global Economic Interdependence
  • Shaping the Future of Cities, Infrastructure and Urban Services
  • Shaping the Future of Energy and Materials
  • Shaping the Future of Media, Entertainment and Culture

As we will look at in a follow up article, ‘The Great Reset‘ is made up of over 50 areas of interest that are formed of both ‘Global Issues‘ and ‘Industries‘, which in turn are all part of the WEF’s Strategic Intelligence platform.

Corporate membership is essential for the World Economic Forum to spread its influence, but in the end every single member is in compliance with the agenda, objectives, projects and values of the WEF. These take precedence over all else.

Also in concurrence with the WEF are the organisation’s Board of Trustees. Three of these include the current Managing Director of the IMF, Kristalina Georgieva, European Central Bank President Christine Lagarde and former Bank of England governor Mark Carney. The Trilateral Commission are also represented amongst the trustees through Larry Fink and David Rubenstein.

To add some historical context to the WEF, the group dates back to 1971 when it was originally founded as the European Management Forum. At the time the conflict in Vietnam was raging, social protest movements were building and the United States was about to relinquish the gold standard. By 1973 when the post World War Two Bretton Woods system collapsed and the Trilateral Commission was formed, the Forum had widened its interest beyond just management to include economic and social issues. From here onwards political leaders from around the world began to receive invitations to the institution’s annual meeting in Davos.

The World Economic Forum is classified today as the ‘International Organisation for Public-Private Cooperation‘, and is the only global institution recognised as such. It is in this capacity that the forum ‘engages the foremost political, business, cultural and other leaders of society to shape global, regional and industry agendas.’

Like how the Bank for International Settlements acts as a forum to bring central banks together under one umbrella, the WEF plays the same role by uniting business, government and civil society.

The WEF declare themselves as being a ‘catalyst for global initiatives‘, which is accurate considering ‘The Great Reset‘ agenda originates at the WEF level. And it is initiatives like ‘The Great Reset‘ and the ‘Fourth Industrial Revolution‘ which the WEF say are distinguished by ‘the active participation of government, business and civil society figures‘.

The Fourth Industrial Revolution (4IR) narrative was developed out of the World Economic Forum back in 2016. The WEF have confidently asserted that because of 4IR, ‘over the next decade, we will witness changes tearing through the global economy with an unprecedented speed, scale and force. They will transform entire systems of production, distribution and consumption‘.

Not only that, but the world is on the verge of witnessing ‘more technological change over the next decade than we have seen in the past 50 years.’

The group now plan to use ‘The Great Reset‘ as their theme for the 2021 annual meeting in Davos as a vehicle for advancing the 4IR agenda. 4IR is marketed as a technological revolution, where advancement in all the sciences ‘will leave no aspect of global society untouched.’ “
Read even more on WEF from Steven Guiness, whose blog should be in everyone’s bookmarks.

Now let’s hear from Forbes’ preview of 2020’s Davos meeting, held in January:

“As world leaders descend on Davos in their private jets and chartered helicopters every January for the World Economic Forum (WEF), the global charity Oxfam likes to remind them about the state of inequality.

Their research, which builds on Forbes‘ billionaires list among other sources, shows how the richest 2,000 people hold more wealth than the poorest 4.6 billion combined.

The irony is not lost on the WEF, where the guest list gets richer every year. In 2018, 12 billionaires took to the stage at the annual event in Davos. This week there are 119 billionaires in attendance according to Bloomberg. Collectively they are worth around $500 billion.

But the disparities do not end there. Here are four other statistics which show how out of touch the World Economic Forum is becoming.

Davos Billionaires Worth Nearly Half Of All Women In Africa

Oxfam’s original finding was that the 22 richest men in the world have more wealth than all of the women in Africa.

That’s around $1.2 trillion. Or, to put it another way, just over double the collective worth of the 119 billionaires at Davos this year.

Chairman Axel A. Weber (R) listens to JP Morgan Chase chief executive officer Jamie Dimon at the 2013 meeting. Image: AFP

Over Half Think Capitalism Does More Harm Than Good

It is against this stark backdrop that public relations firm Edelman surveyed over 34,000 people. Just over half (56%) thought that capitalism was doing more harm than good. “We are living in a trust paradox,” says Richard Edelman, the CEO of Edelman.

On the WEF’s agenda this year is a “better kind of capitalism,” but many remain to be convinced that the summit does not actually erode trust.

U.K. Prime Minister Boris Johnson has banned British government ministers from attending the WEF this year, for fear of the image it brings. A government source told the Telegraph in December, “Our focus is on delivering for the people, not champagne with billionaires.”

In 2016, Johnson described the summit as “a struggle between people who want to take back control, and a small group of people who do very well out of the current system and who know Christine Lagarde.”

India Is The 7th Most Unequal Country

This is the WEF’s own research, which shows India ranks as the 7th lowest country in the world in terms of equal opportunity.

Whether or not the organisers saw the irony in hosting 19 Indian billionaires (the second largest contingent of billionaires after the U.S.) is unknown. But it might be hoped that amassing them all on a Swiss mountainside will sort out some of India’s inequality issues.

Another report published by the Forum this week said that global inequality is going to worsen as a result of rapid technological change unless governments and business leaders do something about it.

Klaus Schwab, the founder and executive chairman of the WEF said at the opening of its 50th session last week he wanted the summit to be more of a “do-shop not a talk-shop.” – Forbes

I know I’m repeating myself, but I don’t do it nearly enough:
If you want the map of the near future, consult The Great Reset, Event 201 and everything Covid-related from the WEF/World Bank/IMF, and less so their lemmings like the WHO and Bill Gates.

These are the people who delivered this astounding article from April 2020, showing how much prescience they had about the damage they were causing to this world. Most of this science was available (at least to their specialists) at any time before the insane Covid response from our governance (most of the data and analysis is not based on new information; it was too early for that); and yet they went ahead with the collapse. The ongoing genocide is not a collateral effect or an error; they prove awareness of the consequences, so decimating our lives was the plan all along.
Below you have the WEF’s implicit “confession” in full.

Lockdown is the world’s biggest psychological experiment – and we will pay the price 

09 Apr 2020

By Dr Elke Van Hoof, Professor, health psychology and primary care psychology, Vrije Universiteit Brussel

  • With some 2.6 billion people around the world in some kind of lockdown, we are conducting arguably the largest psychological experiment ever;
  • This will result in a secondary epidemic of burnouts and stress-related absenteeism in the latter half of 2020;
  • Taking action now can mitigate the toxic effects of COVID-19 lockdowns.

In the mid-1990s, France was one of the first countries in the world to adopt a revolutionary approach for the aftermath of terrorist attacks and disasters. In addition to a medical field hospital or triage post, the French crisis response includes setting up a psychological field unit, a Cellule d’Urgence Médico-Psychologique or CUMPS.

In that second triage post, victims and witnesses who were not physically harmed receive psychological help and are checked for signs of needing further post-traumatic treatment. In those situations, the World Health Organization recommends protocols like R-TEP (Recent Traumatic Episode Protocol) and G-TEP (Group Traumatic Episode Protocol).

Since France led the way more than 20 years ago, international playbooks for disaster response increasingly call for this two-tent approach: one for the wounded and one to treat the invisible, psychological wounds of trauma.

In treating the COVID-19 pandemic, the world is scrambling to build enough tents to treat those infected with a deadly, highly contagious virus. In New York, we see literal field hospitals in the middle of Central Park.

But we’re not setting up the second tent for psychological help and we will pay the price within three to six months after the end of this unprecedented lockdown, at a time when we will need all able bodies to help the world economy recover.

The mental toll of quarantine and lockdown

Currently, an estimated 2.6 billion people – one-third of the world’s population – are living under some kind of lockdown or quarantine. This is arguably the largest psychological experiment ever conducted.

Estimated size of lockdowns around the world. Image: Statista

Unfortunately, we already have a good idea of its results. In late February 2020, right before European countries mandated various forms of lockdowns, The Lancet published a review of 24 studies documenting the psychological impact of quarantine (the “restriction of movement of people who have potentially been exposed to a contagious disease”). The findings offer a glimpse of what is brewing in hundreds of millions of households around the world.

In short, and perhaps unsurprisingly, people who are quarantined are very likely to develop a wide range of symptoms of psychological stress and disorder, including low mood, insomnia, stress, anxiety, anger, irritability, emotional exhaustion, depression and post-traumatic stress symptoms. Low mood and irritability specifically stand out as being very common, the study notes.

In China, these expected mental health effects are already being reported in the first research papers about the lockdown.

In cases where parents were quarantined with children, the mental health toll became even steeper. In one study, no less than 28% of quarantined parents warranted a diagnosis of “trauma-related mental health disorder”.

Among quarantined hospital staff, almost 10% reported “high depressive symptoms” up to three years after being quarantined. Another study reporting on the long-term effects of SARS quarantine among healthcare workers found a long-term risk for alcohol abuse, self-medication and long-lasting “avoidance” behaviour. This means that years after being quarantined, some hospital workers still avoid being in close contact with patients by simply not showing up for work.

Reasons for stress abound in lockdown: there is risk of infection, fear of becoming sick or of losing loved ones, as well as the prospect of financial hardship. All these, and many more, are present in this current pandemic.

The second epidemic and setting up the second tent online

Source

We can already see a sharp increase in absenteeism in countries in lockdown. People are afraid to catch COVID-19 on the work floor and avoid work. We will see a second wave of this in three to six months. Just when we need all able bodies to repair the economy, we can expect a sharp spike in absenteeism and burnout.

We know this from many examples, ranging from absenteeism in military units after deployment in risk areas, companies that were close to Ground Zero in 9/11 and medical professionals in regions with outbreaks of Ebola, SARS and MERS.

Right before the lockdown, we conducted a benchmark survey among a representative sample of the Belgian population. In that survey, we saw that 32% of the population could be classified as highly resilient (“green”). Only 15% of the population indicated toxic levels of stress (“red”).

How stress under lockdown is affecting Belgians

In our most recent survey after two weeks of lockdown, the green portion has shrunk to 25% of the population. The “red” part of the population has increased by 10 percentage points to fully 25% of the population.

These are the people at high risk for long-term absenteeism from work due to illness and burnout. Even if they stay at work, research from Eurofound reports a loss of productivity of 35% for these workers.

In general, we know at-risk groups for long-term mental health issues will be the healthcare workers who are on the frontline, young people under 30 and children, the elderly and those in precarious situations, for example, owing to mental illness, disability and poverty.

All this should surprise no one; insights on the long-term damage of disasters have been accepted in the field of trauma psychology for decades.

The phases of disaster response. Image: When disaster strikes, Beverly Raphael, 1986

But while the insights are not new, the sheer scale of these lockdowns is. This time, ground zero is not a quarantined village or town or region; a third of the global population is dealing with these intense stressors. We need to act now to mitigate the toxic effects of this lockdown.

What governments and NGOs can and should do today

There is broad consensus among academics about the psychological care following disasters and major incidents. Here are a few rules of thumb:

  • Make sure self-help interventions are in place that can address the needs of large affected populations;
  • Educate people about the expected psychological impact and reactions to trauma if they are interested in receiving it. Make sure people understand that a psychological reaction is normal;
  • Launch a specific website to address psychosocial issues;
  • Make sure that people with acute issues can find the help that they need.

In Belgium, we recently launched Everyone OK, an online tool that tries to offer help to the affected population. Using existing protocols and interventions, we launched our digital self-help tool in as little as two weeks.

When it comes to offering psychological support to their populations, most countries are late to react, as they were to the novel coronavirus. Better late than never.



BY Shanti Das for The Sunday Times

Companies collecting data for pubs and restaurants to help them fulfil their contact-tracing duties are harvesting confidential customer information to sell.

Legal experts have warned of a “privacy crisis” caused by a rise in companies exploiting QR barcodes to take names, addresses, telephone numbers and email details, before passing them on to marketers, credit companies and insurance brokers.

The “quick response” mobile codes have been widely adopted by the hospitality, leisure and beauty industries as an alternative to pen-and-paper visitor logs since the government ordered businesses to collect contact details to give to NHS Test and Trace if required.

Any data collected should be kept by the business for 21 days and must not be used “for any purposes other than for NHS Test and Trace”, according to government guidelines.

But some firms used by businesses to meet the new requirements have clauses in their terms and conditions stating they can use the information for reasons other than contact tracing, including sharing it with third parties. The privacy policy of one company used by a restaurant chain in London says it stores users’ data for 25 years.

Gaurav Malhotra, director of Level 5, a software development company that supplies the government, said data could end up in the hands of scammers. “If you’re suddenly getting loads of texts, your data has probably been sold on from track-and-trace systems,” he said.

One of the firms claiming to offer a privacy-compliant QR code service is Pub Track and Trace (PUBTT), an organisation based in Huddersfield charging pubs £20 a month to keep track of visitors, who are asked to provide their name, phone number and email address.

Despite its claim to be a “simple” service, its privacy policy, which users must accept, explains how personal data of people accessing its website can be used to “make suggestions and recommendations to you about goods or services that may be of interest to you” and shared with third parties including “service providers or regulatory bodies providing fraud prevention services or credit/background checks.”

It may also “collect, use, store and transfer” records of access to certain premises including “time, ID number and CCTV images”.

PUBTT, which works with pubs in England and Wales, said users agreed to its privacy policy before using the service and claimed it had not passed data to third parties. A spokesman, identified only as Adam H, said: “The data we collect is only for use of the Test and Trace service or where a user has agreed for the venue to use their information for marketing purposes.”

Ordamo, which provides track and trace services for restaurants, states that data from website visitors is “retained for 25 years”, a duration Hazel Grant, head of privacy at Fieldfisher, a law firm, said would be “very difficult to justify”. Ordamo did not respond to requests for comment.

The Information Commissioner’s Office is assessing 15 companies that “provide services to venues to collect customer logs”.



by Silviu “Silview” Costinescu_ Buy Me a Coffee at ko-fi.com

Just an idea and some memes

#STOPSTEALINGOXYGEN
#STOPSTEALINGOXYGEN

SHARE THE MEMES


by Silviu “Silview” Costinescu_ Buy Me a Coffee at ko-fi.com

This just happened. And much more. We’ve warned you since March, but people thought the WHO could take better care of them. OK then…

It’s World Mental Health Day!
-Close to 1 billion people have a mental disorder
-Depression is a leading cause of illness & disability
-1 person dies every 40 seconds from suicide
-3 million people die every year due to harmful use of alcohol 🍻
#MoveForMentalHealth: Let’s invest!

WHO

Meanwhile at CDC:

How did we end up here:

Me, March 2020:

The caring people: meh

Everyone in April:

Source

The caring people: meh

And so on, gradually building up until The Daily Telegraph and Sky News Australia ended up talking about “harrowing statistics” today:

Source

“Very sadly, more boys under the age of 18 in nine-months alone, than we’ve ever seen in Victoria over a full 12-month period have taken their life this year,”

Sky News

Coincidentally, as ever, suicide rates among Victoria’s teenagers are up over 30% this year, just like among US Army soldiers. I wonder what they had in common, right?

Army active-duty suicides are up 30% during the same time frame as COVID-19.

ABC News, October 2020

The caring people: meh

These “meh people” are the same ones who loudly and aggressively act as if they are entitled to free health care (mask-wearing) from their victims. How about some warm phlegm instead?



by Silviu “Silview” Costinescu_ Buy Me a Coffee at ko-fi.com

That is a fact. And it raises some questions.

THE FACTS:

The Universal Coronavirus Vaccine Patent.
DOWNLOAD PDF
2020 be like:
Source

THE QUESTIONS to anyone who believes in vaccines

  • Why does everyone act like it’s never happened?
  • If anyone claims it’s not good, how could they have known that in February or in March, when the virus was “novel” and “the data was scarce”?
  • Can we expect to find more examples of either drug patents that don’t work or hidden medical and science advancements, whatever the case might be here?
Source


by Silviu “Silview” Costinescu_ Buy Me a Coffee at ko-fi.com

It’s not disputable, since the information comes from official patent registries in the Netherlands and the US. And we have all the documentation.

UPDATE: Reuters took on doing damage control for this article and published a slander and smear piece on us disguised as “fact-checking”.
We fact-checked their fact-checking phrase by phrase here.

As we’ve shown in previous exposes, the whole Covidiocracy is a masquerade and a simulation long prepared by The World Bank / IMF / The Rothschilds and their lemmings, with Rockefeller partnership.
Our newest discoveries further these previous revelations.

first registration: netherlands, 2015

Source: Dutch Government patent registry website

Info (verbatim copy):

A method is provided for acquiring and transmitting biometric data (e.g., vital signs) of a user, where the data is analyzed to determine whether the user is suffering from a viral infection, such as COVID-19. The method includes using a pulse oximeter to acquire at least pulse and blood oxygen saturation percentage, which is transmitted wirelessly to a smartphone. To ensure that the data is accurate, an accelerometer within the smartphone is used to measure movement of the smartphone and/or the user. Once accurate data is acquired, it is uploaded to the cloud (or host), where the data is used (alone or together with other vital signs) to determine whether the user is suffering from (or likely to suffer from) a viral infection, such as COVID-19. Depending on the specific requirements, the data, changes thereto, and/or the determination can be used to alert medical staff and take corresponding actions.
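The pipeline the abstract describes (pulse oximeter readings gated by an accelerometer check, then a remote infection screen) can be sketched in a few lines. This is purely an illustrative reading of the abstract, not code from the patent; all names, thresholds, and the decision rule are hypothetical assumptions, since the filing leaves the actual cloud-side logic unspecified.

```python
# Hypothetical sketch of the data flow the abstract describes; every name
# and threshold below is an illustrative assumption, not from the patent.
from dataclasses import dataclass

@dataclass
class VitalsSample:
    pulse_bpm: int          # from the pulse oximeter
    spo2_percent: float     # blood oxygen saturation percentage
    accel_magnitude: float  # smartphone accelerometer reading (g)

def is_reading_stable(sample: VitalsSample, max_motion_g: float = 0.05) -> bool:
    """Discard samples taken while the phone/user was moving; per the
    abstract, the accelerometer gates data quality before upload."""
    return sample.accel_magnitude <= max_motion_g

def screen_for_infection(sample: VitalsSample) -> bool:
    """Crude illustrative rule: low SpO2 plus elevated pulse raises a flag.
    The patent delegates the real determination to the cloud/host side."""
    return sample.spo2_percent < 94.0 and sample.pulse_bpm > 100

sample = VitalsSample(pulse_bpm=110, spo2_percent=92.5, accel_magnitude=0.01)
if is_reading_stable(sample):
    flagged = screen_for_infection(sample)  # would be uploaded / alerted on
```

In other words, the claimed "method" is an ordinary telemetry pattern: sensor, motion-based quality gate, upload, server-side threshold check.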

second registration: us, 2017

Detailed info below.

DOWNLOAD FROM GOOGLE PATENTS (PDF)

ONE KEY DETAIL STRUCK ME ON THESE REGISTRATIONS:
Both were filed and updated years ago, but they were SCHEDULED to be made public in September 2020.

This is sufficient evidence that they knew in 2015 what’s going to happen in September 2020!

THIRD REGISTRATION: US, 2017 (AN UPDATE OF THE 2015 FILING)

Source

Before we present the patent’s technical details, let’s contemplate the inventor’s Facebook for a moment or two:

Notice anything?

Patent Info (verbatim copy):

Title: System and Method for Using, Processing, and Displaying Biometric Data
United States Patent Application 20170229149
Kind Code: A1

Abstract: A method is provided for processing and displaying biometric data of a user, either alone or together (in synchronization) with other data, such as video data of the user during a time that the biometric data was acquired. The method includes storing biometric data so that it is linked to an identifier and at least one time-stamp (e.g., a start time, a sample rate, etc.), and storing video data so that it is linked to the identifier and at least one time-stamp (e.g., a start time). By storing data in this fashion, biometric data can be displayed (either in real-time or delayed) in synchronization with video data, and biometric data can be searched to identify at least one biometric event. Video corresponding to the biometric event can then be displayed, either alone or together with at least one biometric of the user during the biometric event.
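The synchronization scheme the abstract describes is simple arithmetic: biometric samples are stored with a session identifier, a start time and a sample rate, so sample i occurs at start + i / rate, which is enough to seek the linked video to any "biometric event". A minimal sketch, assuming hypothetical names throughout (the patent publishes no code):

```python
# Illustrative sketch of the timestamp linkage in the abstract; all names
# and the example data are hypothetical assumptions.

def sample_time(start_time_s: float, sample_rate_hz: float, index: int) -> float:
    """Absolute time of the index-th biometric sample,
    reconstructed from the stored start time and sample rate."""
    return start_time_s + index / sample_rate_hz

def find_biometric_event(values, low, high):
    """Return the index of the first sample whose value falls in
    [low, high] (the 'value or range' event of the abstract), else None."""
    for i, v in enumerate(values):
        if low <= v <= high:
            return i
    return None

# Heart-rate samples at 1 Hz, session started at t = 100 s.
heart_rate = [72, 74, 80, 121, 118, 90]
idx = find_biometric_event(heart_rate, 120, 140)  # first value in range
event_t = sample_time(100.0, 1.0, idx)            # absolute event time
video_offset = event_t - 100.0                    # seek offset into the video
```

Since the video is stored with the same session number and start time, the offset computed above locates the footage that corresponds to the biometric event; nothing in the scheme requires more than these three stored fields.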


Inventors: Rothschild, Richard A. (London, GB)
Macklin, Dan (Stafford, GB)
Slomkowski, Robin S. (Eugene, OR, US)
Harnischfeger, Taska (Eugene, OR, US)
Application Number: 15/495485
Publication Date: 08/10/2017
Filing Date: 04/24/2017
Assignee:
Rothschild Richard A.
Macklin Dan
Slomkowski Robin S.
Harnischfeger Taska
International Classes: G11B27/10; G06F19/00; G06K9/00; G11B27/031; H04N5/77

US Patent References:

20160035143 (2016-02-04)
20140316713 (2014-10-23)
20140214568 (2014-07-31)
20090051487 (2009-02-26)
20070189246 (2007-08-16)

Primary Examiner: MESA, JOSE M
Attorney, Agent or Firm: Fitzsimmons IP Law (Gardena, CA, US)
Claims: What is claimed is:

1. A method for identifying video corresponding to a biometric event of a user, said video being displayed along with at least one biometric of said user during said biometric event, comprising: receiving a request to start a session; using at least one program running on a mobile device to assign a session number and a start time to said session; receiving video data from a camera, said video data including video of at least one of said user and said user’s surroundings during a period of time, said period of time starting at said start time; receiving biometric data from a sensor, said biometric data including a plurality of values on a biometric of said user during said period of time; using said at least one program to link at least said session number and said start time to said video data; using said at least one program to link at least said session number, said start time, and a sample rate to said biometric data, at least said session number being used to link said biometric data to said video data, and at least said sample rate and said start time being used to link individual ones of said plurality of values to individual times within said period of time; receiving said biometric event, said biometric event comprising one of a value and a range of said biometric; using said at least one program to identify a first one of said plurality of values corresponding to said biometric event; using said at least one program and at least said start time, said sample rate, and said period of time to identify a first time within said period of time corresponding to said first one of said plurality of values; and displaying on said mobile device at least said video data during said first time along with said first one of said plurality of values, wherein said first time is used to show said first one of said plurality of values in synchronization with a portion of said video data that shows at least one of said user and said user’s surroundings during said biometric event.

2. The method of claim 1, wherein said step of receiving biometric data from said sensor further comprises receiving heart rate data from a heart rate monitor.

3. The method of claim 1, wherein said steps of linking said session number to said video data and said biometric data further comprises linking an activity number to both said video data and said biometric data, wherein said activity number identifies one of a plurality of activities, said session comprises said plurality of activities, and both said session number and said activity number are used to link said biometric data to said video data.

4. The method of claim 1, wherein said step of assigning a session number to said session further comprises linking a description of said session to said session.

5. The method of claim 1, wherein said steps of receiving video data and biometric data further comprises receiving said video data and said biometric data during said period of time.

6. The method of claim 1, wherein said step of receiving video data from a camera further comprises receiving said video data from said camera after said period of time.

7. The method of claim 6, further comprising the step of analyzing said video data for an identifier identifying said session, said identifier being used by said at least one program to link said session number to said video data.

8. The method of claim 1, wherein said steps of identifying a first one of said plurality of values corresponding to said biometric event and identifying a first time corresponding to said first one of said plurality of values further comprises identifying each one of said plurality of values corresponding to said biometric event and identifying each time corresponding to said each one of said plurality of values.

9. The method of claim 8, wherein said step of displaying at least said video data during said first time further comprises displaying at least said video data during said each time corresponding to said each one of said plurality of values, wherein said each time is used to show said each one of said plurality of values in synchronization with portions of said video data that show at least one of said user and said user’s surroundings during said biometric event.

10. The method of claim 1, further comprising the steps of receiving self-realization data from said user, and linking at least said session number and at least one time to said self-realization data, wherein said self-realization data indicates how said user feels during said at least one time, and said at least one time is used to display said self-realization data in synchronization with at least one portion of said video data.

11. A system for identifying video corresponding to a biometric event of a user, said video being displayed along with at least one biometric of said user during said biometric event, comprising: at least one server in communication with a wide area network (WAN); a mobile device in communication with said at least one server via said WAN, said mobile device comprising: a display; at least one processor for downloading machine readable instructions from said at least one server; and at least one memory device for storing said machine readable instructions, said machine readable instructions being adapted to perform the steps of: receiving a request to start a session; assigning a session number and a start time to said session; receiving video data from a camera, said video data including video of at least one of said user and said user’s surroundings during a period of time; receiving biometric data from a sensor, said biometric data including a plurality of values on a biometric of said user during said period of time; linking at least said session number and said start time to said video data; linking at least said session number, said start time, and a sample rate to said biometric data, at least said session number being used to link said biometric data to said video data, and at least said sample rate and said start time being used to link individual ones of said plurality of values to individual times within said period of time; receiving said biometric event, said biometric event comprising one of a value and a range of said biometric; identifying a first one of said plurality of values corresponding to said biometric event; identifying a first time within said period of time corresponding to said first one of said plurality of values; and displaying on said display at least said video data during said first time along with said first one of said plurality of values, wherein said first time is used to show said first one of said plurality of values in synchronization with a portion of said video data that shows at least one of said user and said user’s surroundings during said biometric event.

12. The system of claim 11, wherein said step of receiving biometric data from said sensor further comprises receiving heart rate data from a heart rate monitor.

13. The system of claim 11, wherein said steps of linking said session number to said video data and said biometric data further comprises linking an activity number to both said video data and said biometric data, wherein said activity number identifies one of a plurality of activities, said session comprises said plurality of activities, and both said session number and said activity number are used to link said biometric data to said video data.

14. The system of claim 11, wherein said steps of receiving video data and biometric data further comprises receiving said video data and said biometric data during said period of time.

15. The system of claim 11, wherein said step of receiving video data from a camera further comprises receiving said video data from said camera after said period of time.

16. The system of claim 15, wherein said machine readable instructions are further adapted to perform the step of analyzing said video data for a barcode, said barcode identifying said session number and being used to link said session number to said video data.

17. The system of claim 11, wherein said steps of identifying a first one of said plurality of values corresponding to said biometric event and identifying a first time corresponding to said first one of said plurality of values further comprises identifying each one of said plurality of values corresponding to said biometric event and identifying each time corresponding to said each one of said plurality of values.

18. The system of claim 17, wherein said step of displaying at least said video data during said first time further comprises displaying at least said video data during said each time corresponding to said each one of said plurality of values, wherein said each time is used to show said each one of said plurality of values in synchronization with portions of said video data that show at least one of said user and said user’s surroundings during said biometric event.

19. The system of claim 11, wherein said machine readable instructions are further adapted to perform the steps of receiving self-realization data from said user, and linking said session number and at least one time to said self-realization data, wherein said self-realization data indicates how said user feels during said at least one time, and said at least one time is used to display said self-realization data in synchronization with at least one portion of said video data.

20. A method for displaying video in synchronization with at least one biometric of a subject, comprising: using at least one program running on a computing device to assign a session number and a start time to said session; receiving video data from at least one camera, said video data including video of at least one of said subject and said subject’s surroundings during a period of time; receiving biometric data from at least one sensor, said biometric data including a plurality of values on at least one biometric of said subject during said period of time; using said at least one program to link at least said session number and said start time to said video data; using said at least one program to link at least said session number, said start time, and at least one sample rate to said biometric data; receiving a biometric event, said biometric event comprising one of a value and a range of said at least one biometric; using said at least one program to identify individual ones of said plurality of values corresponding to said biometric event; using said at least one program and at least said start time, said at least one sample rate, and said period of time to identify individual times within said period of time corresponding to said individual ones of said plurality of values; and displaying on said computing device at least said video data and said individual ones of said plurality of values, wherein said individual times are used to show said individual ones of said plurality of values in synchronization with portions of said video data that show at least one of said subject and said subject’s surroundings during said biometric event.

Description:

CROSS-REFERENCE TO RELATED APPLICATION

This application is a continuation of Ser. No. 15/293,211, filed Oct. 13, 2016, which claims priority pursuant to 35 U.S.C. §119 (e) to U.S. Provisional Application No. 62/240,783, filed Oct. 13, 2015, which applications are specifically incorporated herein, in their entirety, by reference.

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to the reception and use of biometric data, and more particularly, to a system and method for displaying at least one biometric of a user along with video of the user at a time that the at least one biometric is being measured and/or received.

2. Description of Related Art

Recently, devices have been developed that are capable of measuring, sensing, or estimating in a convenient form factor at least one or more metric related to physiological characteristics, commonly referred to as biometric data. For example, devices that resemble watches have been developed which are capable of measuring an individual’s heart rate or pulse, and, using that data together with other information (e.g., the individual’s age, weight, etc.), to calculate a resultant, such as the total calories burned by the individual in a given day. Similar devices have been developed for measuring, sensing, or estimating other kinds of metrics, such as blood pressure, breathing patterns, breath composition, sleep patterns, and blood-alcohol level, to name a few. These devices are generically referred to as biometric devices or biosensor metrics devices.

While the types of biometric devices continue to grow, the way in which biometric data is used remains relatively static. For example, heart rate data is typically used to give an individual information on their pulse and calories burned. By way of another example, blood-alcohol data is typically used to give an individual information on their blood-alcohol level, and to inform the individual on whether or not they can safely or legally operate a motor vehicle. By way of yet another example, an individual’s breathing pattern (measurable for example either by loudness level in decibels, or by variations in decibel level over a time interval) may be monitored by a doctor, nurse, or medical technician to determine whether the individual suffers from sleep apnea.

While biometric data is useful in and of itself, such data would be more informative or dynamic if it could be combined with other data (e.g., video data, etc.), provided (e.g., wirelessly, over a network, etc.) to a remote device, and/or searchable (e.g., allowing certain conditions, such as an elevated heart rate, to be quickly identified) and/or cross-searchable (e.g., using biometric data to identify a video section illustrating a specific characteristic, or vice-versa). Thus, a need exists for an efficient system and method capable of achieving at least some, or indeed all, of the foregoing advantages, and capable also of merging the data generated in either automatic or manual form by the various devices, which are often using operating systems or technologies (e.g., hardware platforms, protocols, data types, etc.) that are incompatible with one another.

In certain embodiments of the present invention, the system and/or method is configured to receive, manage, and filter the quantity of information on a timely and cost-effective basis, and could also be of further value through the accurate measurement, visualization (e.g., synchronized visualization, etc.), and rapid notification of data points which are outside (or within) a defined or predefined range.

Such a system and/or method could be used by an individual (e.g., athlete, etc.) or their trainer, coach, etc., to visualize the individual during the performance of an athletic event (e.g., jogging, biking, weightlifting, playing soccer, etc.) in real-time (live) or afterwards, together with the individual’s concurrently measured biometric data (e.g., heart rate, etc.), and/or concurrently gathered “self-realization data,” or subject-generated experiential data, where the individual inputs their own subjective physical or mental states during their exercise, fitness or sports activity/training (e.g., feeling the onset of an adrenaline “rush” or endorphins in the system, feeling tired, “getting a second wind,” etc.). This would allow a person (e.g., the individual, the individual’s trainer, a third party, etc.) to monitor/observe physiological and/or subjective psychological characteristics of an individual while watching or reviewing the individual in the performance of an athletic event, or other physical activity. Such inputting of the self-realization data can be achieved by various methods, including automatically time-stamped-in-the-system voice notes, short-form or abbreviation key commands on a smart phone, smart watch, enabled fitness band, or any other system-linked input method which is convenient for the individual to utilize so as not to impede (or as little as possible) the flow and practice by the individual of the activity in progress.

Such a system and/or method would also facilitate, for example, remote observation and diagnosis in telemedicine applications, where there is a need for the medical staff, or monitoring party or parent, to have clear and rapid confirmation of the identity of the patient or infant, as well as their visible physical condition, together with their concurrently generated biometric and/or self-realization data.

Furthermore, the system and/or method should also provide the subject, or monitoring party, with a way of using video indexing to efficiently and intuitively benchmark, map and evaluate the subject’s data, both against the subject’s own biometric history and/or against other subjects’ data samples, or demographic comparables, independently of whichever operating platforms or applications have been used to generate the biometric and video information. By being able to filter/search for particular events (e.g., biometric events, self-realization events, physical events, etc.), the acquired data can be reduced down or edited (e.g., to create a “highlight reel,” etc.) while maintaining synchronization between individual video segments and measured and/or gathered data (e.g., biometric data, self-realization data, GPS data, etc.). Such comprehensive indexing of the events, and with it the ability to perform structured aggregation of the related data (video and other) with (or without) data from other individuals or other relevant sources, can also be utilized to provide richer levels of information using methods of “Big Data” analysis and “Machine Learning,” and adding artificial intelligence (“AI”) for the implementation of recommendations and calls to action.

SUMMARY OF THE INVENTION

The present invention provides a system and method for using, processing, indexing, benchmarking, ranking, comparing and displaying biometric data, or a resultant thereof, either alone or together (e.g., in synchronization) with other data (e.g., video data, etc.). Preferred embodiments of the present invention operate in accordance with a computing device (e.g., a smart phone, etc.) in communication with at least one external device (e.g., a biometric device for acquiring biometric data, a video device for acquiring video data, etc.). In a first embodiment of the present invention, video data, which may include audio data, and non-video data, such as biometric data, are stored separately on the computing device and linked to other data, which allows searching and synchronization of the video and non-video data.

In one embodiment of the present invention, an application (e.g., running on the computing device, etc.) includes a plurality of modules for performing a plurality of functions. For example, the application may include a video capture module for receiving video data from an internal and/or external camera, and a biometric capture module for receiving biometric data from an internal and/or external biometric device. The client platform may also include a user interface module, allowing a user to interact with the platform, a video editing module for editing video data, a file handling module for managing data, a database and sync module for replicating data, an algorithm module for processing received data, a sharing module for sharing and/or storing data, and a central login and ID module for interfacing with third party social media websites, such as Facebook™.

These modules can be used, for example, to start a new session, receive video data for the session (i.e., via the video capture module) and receive biometric data for the session (i.e., via the biometric capture module). This data can be stored in local storage, in a local database, and/or on a remote storage device (e.g., in the company cloud or a third-party cloud service, such as Dropbox™, etc.). In a preferred embodiment, the data is stored so that it is linked to information that (i) identifies the session and (ii) enables synchronization.

For example, video data is preferably linked to at least a start time (e.g., a start time of the session) and an identifier. The identifier may be a single number uniquely identifying the session, or a plurality of numbers (e.g., a plurality of global or universal unique identifiers (GUIDs/UUIDs)), where a first number uniquely identifying the session and a second number uniquely identifies an activity within the session, allowing a session to include a plurality of activities. The identifier may also include a session name and/or a session description. Other information about the video data (e.g., video length, video source, etc.) (i.e., “video metadata”) can also be stored and linked to the video data. Biometric data is preferably linked to at least the start time (e.g., the same start time linked to the video data), the identifier (e.g., the same identifier linked to the video data), and a sample rate, which identifies the rate at which biometric data is received and/or stored.
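The separate-but-linked storage described above can be summed up in a minimal data model: video and biometric records kept apart, tied together by a shared identifier and start time. This is an illustrative Python sketch of what the patent text describes, not code from the patent; all class and field names are my own assumptions:

```python
from dataclasses import dataclass, field
from typing import List
import uuid

@dataclass
class SessionID:
    # The patent describes an identifier that may combine a session GUID
    # with an activity GUID in a parent/child relationship.
    session: uuid.UUID = field(default_factory=uuid.uuid4)
    activity: uuid.UUID = field(default_factory=uuid.uuid4)

@dataclass
class VideoRecord:
    ident: SessionID
    start_time: float   # epoch seconds (the session start time)
    duration: float     # seconds of video

@dataclass
class BiometricRecord:
    ident: SessionID
    start_time: float   # same start time as the linked video
    sample_rate: float  # samples per minute
    values: List[float]
```

Because both records carry the same identifier and start time, either one can be located from the other, which is what makes the cross-searching and synchronization described in the text possible.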

Once the video and biometric data is stored and linked, algorithms can be used to display the data together. For example, if biometric data is stored at a sample rate of 30 samples per minute (spm), algorithms can be used to display a first biometric value (e.g., below the video data, superimposed over the video data, etc.) at the start of the video clip, a second biometric value two seconds later (two seconds into the video clip), a third biometric value two seconds later (four seconds into the video clip), etc. In alternate embodiments of the present invention, non-video data (e.g., biometric data, self-realization data, etc.) can be stored with a plurality of time-stamps (e.g., individual stamps or offsets for each stored value, or individual sample rates for each data type), which can be used together with the start time to synchronize non-video data to video data.
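The sample-rate arithmetic in the paragraph above (30 samples per minute, so a new value every two seconds of playback) can be sketched as a small lookup function. The function name and the example values are illustrative, not from the patent:

```python
def sample_index_at(offset_s: float, sample_rate_spm: float) -> int:
    """Map a playback offset (seconds from the video's start time)
    to the index of the biometric sample to display."""
    seconds_per_sample = 60.0 / sample_rate_spm
    return int(offset_s // seconds_per_sample)

# At 30 samples per minute, a new value appears every 2 seconds:
values = [72, 75, 78, 80]                        # e.g., heart-rate samples
assert values[sample_index_at(0.0, 30)] == 72    # start of clip
assert values[sample_index_at(2.0, 30)] == 75    # 2 s into the clip
assert values[sample_index_at(4.0, 30)] == 78    # 4 s into the clip
```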

In one embodiment of the present invention, the biometric device may include a sensor for sensing biometric data, a display for interfacing with the user and displaying various information (e.g., biometric data, set-up data, operation data, such as start, stop, and pause, etc.), a memory for storing the sensed biometric data, a transceiver for communicating with the exemplary computing device, and a processor for operating and/or driving the transceiver, memory, sensor, and display. The exemplary computing device includes a transceiver (1) for receiving biometric data from the exemplary biometric device, a memory for storing the biometric data, a display for interfacing with the user and displaying various information (e.g., biometric data, set-up data, operation data, such as start, stop, and pause, input in-session comments or add voice notes, etc.), a keyboard (or other user input) for receiving user input data, a transceiver (2) for providing the biometric data to the host computing device via the Internet, and a processor for operating and/or driving the transceiver (1), transceiver (2), keyboard, display, and memory.

The keyboard (or other input device) in the computing device, or alternatively the keyboard (or other input device) in the biometric device, may be used to enter self-realization data, or data on how the user is feeling at a particular time. For example, if the user is feeling tired, the user may enter the “T” on the keyboard. If the user is feeling their endorphins kick in, the user may enter the “E” on the keyboard. And if the user is getting their second wind, the user may enter the “S” on the keyboard. Alternatively, to further facilitate operation during the exercise, or sporting activity, short-code key buttons such as “T,” “E,” and “S” can be preassigned, like speed-dial telephone numbers for frequently called contacts on a smart phone, etc., which can be selected manually or using voice recognition. This data (e.g., the entry or its representation) is then stored and linked to either a sample rate (like biometric data) or time-stamp data, which may be a time or an offset to the start time that each button was pressed. This would allow the self-realization data to be synchronized to the video data. It would also allow the self-realization data, like biometric data, to be searched or filtered (e.g., in order to find video corresponding to a particular event, such as when the user started to feel tired, etc.).
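The short-code scheme above ("T," "E," "S") amounts to logging a labeled time offset that can later be replayed against the video. A minimal sketch, with the code-to-label mapping and log structure assumed for illustration:

```python
# Illustrative mapping of the patent's example short codes to states.
SHORT_CODES = {"T": "tired", "E": "endorphins", "S": "second wind"}

def record_self_realization(key: str, now_s: float,
                            session_start_s: float, log: list) -> None:
    """Store a self-realization entry as an offset from the session start,
    so it can later be synchronized with the video timeline."""
    if key not in SHORT_CODES:
        raise ValueError(f"unknown short code: {key}")
    log.append({"offset_s": now_s - session_start_s,
                "state": SHORT_CODES[key]})

log = []
record_self_realization("T", now_s=1300.0, session_start_s=1000.0, log=log)
assert log[0] == {"offset_s": 300.0, "state": "tired"}
```

Storing an offset rather than an absolute clock time is one simple way to make the entry searchable and alignable with the video, as the text describes.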

In an alternate embodiment of the present invention, the computing device (e.g., a smart phone, etc.) is also in communication with a host computing device via a wide area network (“WAN”), such as the Internet. This embodiment allows the computing device to download the application from the host computing device, offload at least some of the above-identified functions to the host computing device, and store data on the host computing device (e.g., allowing video data, alone or synchronized to non-video data, such as biometric data and self-realization data, to be viewed by another networked device). For example, the software operating on the computing device (e.g., the application, program, etc.) may allow the user to play the video and/or audio data, but not to synchronize the video and/or audio data to the biometric data. This may be because the host computing device is used to store data critical to synchronization (time-stamp index, metadata, biometric data, sample rate, etc.) and/or software operating on the host computing device is necessary for synchronization. By way of another example, the software operating on the computing device may allow the user to play the video and/or audio data, either alone or synchronized with the biometric data, but may not allow the computing device (or may limit the computing device’s ability) to search or otherwise extrapolate from, or process the biometric data to identify relevant portions (e.g., which may be used to create a “highlight reel” of the synchronized video/audio/biometric data) or to rank the biometric and/or video data. This may be because the host computing device is used to store data critical to search and/or to rank the biometric data (biometric data, biometric metadata, etc.), and/or software necessary for searching (or performing advanced searching of) and/or ranking (or performing advanced ranking of) the biometric data.

In one embodiment of the present invention, the video data, which may also include audio data, starts at a time “T” and continues for a duration of “n.” The video data is preferably stored in memory (locally and/or remotely) and linked to other data, such as an identifier, start time, and duration. Such data ties the video data to at least a particular session, a particular start time, and identifies the duration of the video included therein. In one embodiment of the present invention, each session can include different activities. For example, a trip to Berlin on a particular day (session) may involve a bike ride through the city (first activity) and a walk through a park (second activity). Thus, the identifier may include both a session identifier, uniquely identifying the session via a globally unique identifier (GUID), and an activity identifier, uniquely identifying the activity via a globally unique identifier (GUID), where the session/activity relationship is that of a parent/child.

In one embodiment of the present invention, the biometric data is stored in memory and linked to the identifier and a sample rate “m.” This allows the biometric data to be linked to video data upon playback. For example, if the identifier is one, the start time is 1:00 PM, the video duration is one minute, and the sample rate is 30 spm, then playing the video at 2:00 PM would result in the first biometric value being displayed (e.g., below the video, over the video, etc.) at 2:00 PM, the second biometric value being displayed (e.g., below the video, over the video, etc.) two seconds later, and so on until the video ends at 2:01 PM. While self-realization data can be stored like biometric data (e.g., linked to a sample rate), if such data is only received periodically, it may be more advantageous to store this data linked to the identifier and a time-stamp, where “m” is either the time that the self-realization data was received or an offset between this time and the start time (e.g., ten minutes and four seconds after the start time, etc.). By storing video and non-video data separately from one another, data can be easily searched and synchronized.

With respect to linking data to an identifier, which may be linked to other data (e.g., start time, sample rate, etc.), if the data is received in real-time, the data can be linked to the identifier (s) for the current session (and/or activity). However, when data is received after the fact (e.g., after a session has ended), there are several ways in which the data can be linked to a particular session and/or activity (or identifier (s) associated therewith). The data can be manually linked (e.g., by the user) or automatically linked via the application. With respect to the latter, this can be accomplished, for example, by comparing the duration of the received data (e.g., the video length) with the duration of the session and/or activity, by assuming that the received data is related to the most recent session and/or activity, or by analyzing data included within the received data. For example, in one embodiment, data included with the received data (e.g., metadata) may identify a time and/or location associated with the data, which can then be used to link the received data to the session and/or activity. In another embodiment, the computing device could display data (e.g., a barcode, such as a QR code, etc.) that identifies the session and/or activity. An external video recorder could record the identifying data (as displayed by the computing device) along with (e.g., before, after, or during) the user and/or his/her surroundings. The application could then search the video data for identifying data, and use this data to link the video data to a session and/or activity. The identifying portion of the video data could then be deleted by the application if desired.
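One of the after-the-fact linking heuristics mentioned above, matching received video to a session by comparing durations and falling back to the most recent session, could look roughly like this. All names, the data layout, and the tolerance value are assumptions for illustration:

```python
def match_session(clip_duration_s: float, sessions: list,
                  tolerance_s: float = 2.0) -> dict:
    """Link a received clip to a session by duration, preferring the most
    recent session whose duration matches within a tolerance; if none
    matches, fall back to the most recent session overall."""
    for s in sorted(sessions, key=lambda s: s["start_time"], reverse=True):
        if abs(s["duration_s"] - clip_duration_s) <= tolerance_s:
            return s
    return max(sessions, key=lambda s: s["start_time"])

sessions = [{"id": 1, "start_time": 100, "duration_s": 60},
            {"id": 2, "start_time": 200, "duration_s": 300}]
assert match_session(61, sessions)["id"] == 1   # duration match wins
```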

A more complete understanding of a system and method for using, processing, and displaying biometric data, or a resultant thereof, will be afforded to those skilled in the art, as well as a realization of additional advantages and objects thereof, by a consideration of the following detailed description of the preferred embodiment. Reference will be made to the appended sheets of drawings, which will first be described briefly.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 illustrates a system for using, processing, and displaying biometric data, and for synchronizing biometric data with other data (e.g., video data, audio data, etc.) in accordance with one embodiment of the present invention;

FIG. 2A illustrates a system for using, processing, and displaying biometric data, and for synchronizing biometric data with other data (e.g., video data, audio data, etc.) in accordance with another embodiment of the present invention;

FIG. 2B illustrates a system for using, processing, and displaying biometric data, and for synchronizing biometric data with other data (e.g., video data, audio data, etc.) in accordance with yet another embodiment of the present invention;

FIG. 3 illustrates an exemplary display of video data synchronized with biometric data in accordance with one embodiment of the present invention;

FIG. 4 illustrates a block diagram for using, processing, and displaying biometric data, and for synchronizing biometric data with other data (e.g., video data, audio data, etc.) in accordance with one embodiment of the present invention;

FIG. 5 illustrates a block diagram for using, processing, and displaying biometric data, and for synchronizing biometric data with other data (e.g., video data, audio data, etc.) in accordance with another embodiment of the present invention;

FIG. 6 illustrates a method for synchronizing video data with biometric data, operating the video data, and searching the biometric data, in accordance with one embodiment of the present invention;

FIG. 7 illustrates an exemplary display of video data synchronized with biometric data in accordance with another embodiment of the present invention;

FIG. 8 illustrates exemplary video data, which is preferably linked to an identifier (ID), a start time (T), and a finish time or duration (n);

FIG. 9 illustrates an exemplary identifier (ID), comprising a session identifier and an activity identifier;

FIG. 10 illustrates exemplary biometric data, which is preferably linked to an identifier (ID), a start time (T), and a sample rate (S);

FIG. 11 illustrates exemplary self-realization data, which is preferably linked to an identifier (ID) and a time (m);

FIG. 12 illustrates how sampled biometric data points can be used to extrapolate other biometric data points in accordance with one embodiment of the present invention;

FIG. 13 illustrates how sampled biometric data points can be used to extrapolate other biometric data points in accordance with another embodiment of the present invention;

FIG. 14 illustrates an example of how a start time and data related thereto (e.g., sample rate, etc.) can be used to synchronize biometric data and self-realization data to video data;

FIG. 15 depicts an exemplary “sign in” screen shot for an application that allows a user to capture at least video and biometric data of the user performing an athletic event (e.g., bike riding, etc.) and to display the video data together (or in synchronization) with the biometric data;

FIG. 16 depicts an exemplary “create session” screen shot for the application depicted in FIG. 15, allowing the user to create a new session;

FIG. 17 depicts an exemplary “session name” screen shot for the application depicted in FIG. 15, allowing the user to enter a name for the session;

FIG. 18 depicts an exemplary “session description” screen shot for the application depicted in FIG. 15, allowing the user to enter a description for the session;

FIG. 19 depicts an exemplary “session started” screen shot for the application depicted in FIG. 15, showing the video and biometric data received in real-time;

FIG. 20 depicts an exemplary “review session” screen shot for the application depicted in FIG. 15, allowing the user to playback the session at a later time;

FIG. 21 depicts an exemplary “graph display option” screen shot for the application depicted in FIG. 15, allowing the user to select data (e.g., heart rate data, etc.) to be displayed along with the video data;

FIG. 22 depicts an exemplary “review session” screen shot for the application depicted in FIG. 15, where the video data is displayed together (or in synchronization) with the biometric data;

FIG. 23 depicts an exemplary “map” screen shot for the application depicted in FIG. 15, showing GPS data displayed on a Google map;

FIG. 24 depicts an exemplary “summary” screen shot for the application depicted in FIG. 15, showing a summary of the session;

FIG. 25 depicts an exemplary “biometric search” screen shot for the application depicted in FIG. 15, allowing a user to search the biometric data for a particular biometric event (e.g., a particular value, a particular range, etc.);

FIG. 26 depicts an exemplary “first result” screen shot for the application depicted in FIG. 15, showing a first result for the biometric event shown in FIG. 25, together with corresponding video;

FIG. 27 depicts an exemplary “second result” screen shot for the application depicted in FIG. 15, showing a second result for the biometric event shown in FIG. 25, together with corresponding video;

FIG. 28 depicts an exemplary “session search” screen shot for the application depicted in FIG. 15, allowing a user to search for sessions that meet certain criteria; and

FIG. 29 depicts an exemplary “list” screen shot for the application depicted in FIG. 15, showing a result for the criteria shown in FIG. 28.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

The present invention provides a system and method for using, processing, indexing, benchmarking, ranking, comparing and displaying biometric data, or a resultant thereof, either alone or together (e.g., in synchronization) with other data (e.g., video data, etc.). It should be appreciated that while the invention is described herein in terms of certain biometric data (e.g., heart rate, breathing patterns, blood-alcohol level, etc.), the invention is not so limited, and can be used in conjunction with any biometric and/or physical data, including, but not limited to oxygen levels, CO2 levels, oxygen saturation, blood pressure, blood glucose, lung function, eye pressure, body and ambient conditions (temperature, humidity, light levels, altitude, and barometric pressure), speed (walking speed, running speed), location and distance travelled, breathing rate, heart rate variance (HRV), EKG data, perspiration levels, calories consumed and/or burnt, ketones, waste discharge content and/or levels, hormone levels, blood content, saliva content, audible levels (e.g., snoring, etc.), mood levels and changes, galvanic skin response, brain waves and/or activity or other neurological measurements, sleep patterns, physical characteristics (e.g., height, weight, eye color, hair color, iris data, fingerprints, etc.) or responses (e.g., facial changes, iris (or pupil) changes, voice (or tone) changes, etc.), or any combination or resultant thereof.

As shown in FIG. 1, a biometric device 110 may be in communication with a computing device 108, such as a smart phone, which, in turn, is in communication with at least one computing device (102, 104, 106) via a wide area network (“WAN”) 100, such as the Internet. The computing devices can be of different types, such as a PC, laptop, tablet, smart phone, smart watch etc., using one or different operating systems or platforms. In one embodiment of the present invention, the biometric device 110 is configured to acquire (e.g., measure, sense, estimate, etc.) an individual’s heart rate (e.g., biometric data). The biometric data is then provided to the computing device 108, which includes a video and/or audio recorder (not shown).

In a first embodiment of the present invention, the video and/or audio data are provided along with the heart rate data to a host computing device 106 via the network 100. Because the concurrent video and/or audio data and the heart rate data are provided to the host computing device 106, a host application operating thereon (not shown) can be used to synchronize the video data, audio data, and/or heart rate data, thereby allowing a user (e.g., via the user computing devices 102, 104) to view the video data and/or listen to the audio data (either in real-time or time delayed) while viewing the biometric data. For example, as shown in FIG. 3, the host application may use a time-stamp 320, or other sequencing method using metadata, to synchronize the video data 310 with the biometric data 330, allowing a user to view, for example, an individual (e.g., patient in a hospital, baby in a crib, etc.) at a particular time 340 (e.g., 76 seconds past the start time) and biometric data associated with the individual at that particular time 340 (e.g., 76 seconds past the start time).

It should be appreciated that the host application may further be configured to perform other functions, such as searching for a particular activity in video data, audio data, biometric data and/or metadata, and/or ranking video data, audio data, and/or biometric data. For example, the host application may allow the user to search for a particular biometric event, such as a heart rate that has exceeded a particular threshold or value, a heart rate that has dropped below a particular threshold or value, a particular heart rate (or range) for a minimum period of time, etc. By way of another example, the host application may rank video data, audio data, biometric data, or a plurality of synchronized clips (e.g., highlight reels) chronologically, by biometric magnitude (highest to lowest, lowest to highest, etc.), by review (best to worst, worst to best, etc.), or by views (most to least, least to most, etc.). It should further be appreciated that such functions as the ranking, searching, and analysis of data are not limited to a user’s individual session, but can be performed across any number of individual sessions of the user, as well as the session or number of sessions of multiple users. One use of this collection of all the various information (video, biometric and other) is to be able to generate sufficient data points for Big Data analysis and Machine Learning for the purposes of generating AI inferences and recommendations.
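The biometric-event search described above (a value crossing a threshold, or staying in a range for a minimum period) might be implemented along these lines. This is an illustrative sketch only; the function name and parameters are assumptions, and the offsets it returns are seconds from the shared start time, which is what would let the application seek the synchronized video to each event.

```python
def find_biometric_events(samples, sample_rate_spm, predicate, min_duration_s=0.0):
    """Return (start_s, end_s) offsets where predicate(value) holds for at
    least min_duration_s seconds of consecutive samples."""
    interval = 60.0 / sample_rate_spm          # seconds between stored samples
    events, run_start = [], None
    for i, value in enumerate(samples):
        if predicate(value):
            if run_start is None:              # a qualifying run begins here
                run_start = i * interval
        elif run_start is not None:            # the run just ended
            end = i * interval
            if end - run_start >= min_duration_s:
                events.append((run_start, end))
            run_start = None
    if run_start is not None:                  # run extends to the last sample
        end = len(samples) * interval
        if end - run_start >= min_duration_s:
            events.append((run_start, end))
    return events
```

For example, with heart-rate samples stored at 30 spm, `find_biometric_events(samples, 30, lambda v: v > 140, min_duration_s=4.0)` would locate every stretch where the rate exceeded 140 for at least four seconds.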

By way of example, machine learning algorithms could be used to search through video data automatically, looking for the most compelling content which would subsequently be stitched together into a short “highlight reel.” The neural network could be trained using a plurality of sports videos, along with ratings from users of their level of interest as the videos progress. The input nodes to the network could be a sample of change in intensity of pixels between frames along with the median excitement rating of the current frame. The machine learning algorithms could also be used, in conjunction with a multi-layer convolutional neural network, to automatically classify video content (e.g., what sport is in the video). Once the content is identified, either automatically or manually, algorithms can be used to compare the user’s activity to an idealized activity. For example, the system could compare a video recording of the user’s golf swing to that of a professional golfer. The system could then provide incremental tips to the user on how the user could improve their swing. Algorithms could also be used to predict fitness levels for users (e.g., if they maintain their program, giving them an incentive to continue working out), match users to other users or practitioners having similar fitness levels, and/or create routines optimized for each user.
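As a much simpler stand-in for the neural-network approach described above, the "change in intensity of pixels between frames" signal can be illustrated directly: score each frame transition by mean absolute pixel change, then slide a fixed window to find the highest-scoring segment. This is a toy sketch, not the described model; the function names are illustrative, and frames are represented as flat lists of grayscale pixel values.

```python
def score_excitement(frames):
    """Score each frame transition by mean absolute pixel change, a crude
    proxy for the inter-frame intensity signal described above."""
    scores = []
    for prev, cur in zip(frames, frames[1:]):
        scores.append(sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur))
    return scores

def best_clip(scores, window):
    """Return the start index of the window-length span with the highest
    total score, i.e., one candidate segment for a highlight reel."""
    best_start = 0
    best_total = total = sum(scores[:window])
    for i in range(1, len(scores) - window + 1):
        total += scores[i + window - 1] - scores[i - 1]   # slide the window
        if total > best_total:
            best_start, best_total = i, total
    return best_start
```

A trained network would replace this heuristic with learned excitement ratings, but the windowed selection step would look much the same.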

It should also be appreciated, as shown in FIG. 2A, that the biometric data may be provided to the host computing device 106 directly, without going through the computing device 108. For example, the computing device 108 and the biometric device 110 may communicate independently with the host computing device, either directly or via the network 100. It should further be appreciated that the video data, the audio data, and/or the biometric data need not be provided to the host computing device 106 in real-time. For example, video data could be provided at a later time as long as the data can be identified, or tied to a particular session. If the video data can be identified, it can then be synchronized to other data (e.g., biometric data) received in real-time.

In one embodiment of the present invention, as shown in FIG. 2B, the system includes a computing device 200, such as a smart phone, in communication with a plurality of devices, including a host computing device 240 via a WAN (see, e.g., FIG. 1 at 100), third party devices 250 via the WAN (see, e.g., FIG. 1 at 100), and local devices 230 (e.g., via wireless or wired connections). In a preferred embodiment, the computing device 200 downloads a program or application (i.e., client platform) from the host computing device 240 (e.g., company cloud). The client platform includes a plurality of modules that are configured to perform a plurality of functions.

For example, the client platform may include a video capture module 210 for receiving video data from an internal and/or external camera, and a biometric capture module 212 for receiving biometric data from an internal and/or external biometric device. The client platform may also include a user interface module 202, allowing a user to interact with the platform, a video editing module 204 for editing video data, a file handling module 206 for managing (e.g., storing, linking, etc.) data (e.g., video data, biometric data, identification data, start time data, duration data, sample rate data, self-realization data, time-stamp data, etc.), a database and sync module 214 for replicating data (e.g., copying data stored on the computing device 200 to the host computing device 240 and/or copying user data stored on the host computing device 240 to the computing device 200), an algorithm module 216 for processing received data (e.g., synchronizing data, searching/filtering data, creating a highlight reel, etc.), a sharing module 220 for sharing and/or storing data (e.g., video data, highlight reel, etc.) relating either to a single session or multiple sessions, and a central login and ID module 218 for interfacing with third party social media websites, such as Facebook™.

With respect to FIG. 2B, the computing device 200, which may be a smart phone, a tablet, or any other computing device, may be configured to download the client platform from the host computing device 240. Once the client platform is running on the computing device 200, the platform can be used to start a new session, receive video data for the session (i.e., via the video capture module 210) and receive biometric data for the session (i.e., via the biometric capture module 212). This data can be stored in local storage, in a local database, and/or on a remote storage device (e.g., in the company cloud or a third-party cloud, such as Dropbox™, etc.). In a preferred embodiment, the data is stored so that it is linked to information that (i) identifies the session and (ii) enables synchronization.

For example, video data is preferably linked to at least a start time (e.g., a start time of the session) and an identifier. The identifier may be a single number uniquely identifying the session, or a plurality of numbers (e.g., a plurality of globally (or universally) unique identifiers (GUIDs/UUIDs)), where a first number uniquely identifies the session and a second number uniquely identifies an activity within the session, allowing a session (e.g., a trip to or an itinerary in a destination, such as Berlin) to include a plurality of activities (e.g., a bike ride, a walk, etc.). By way of example only, an activity (or session) identifier may be a 128-bit identifier that has a high probability of uniqueness, such as 8bf25512-f17a-4e9e-b49a-7c3f59ec1e85. The identifier may also include a session name and/or a session description. Other information about the video data (e.g., video length, video source, etc.) (i.e., “video metadata”) can also be stored and linked to the video data. Biometric data is preferably linked to at least the start time (e.g., the same start time linked to the video data), the identifier (e.g., the same identifier linked to the video data), and a sample rate, which identifies the rate at which biometric data is received and/or stored. For example, heart rate data may be received and stored at a rate of thirty samples per minute (30 spm), i.e., once every two seconds, or at some other predetermined interval.
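The linking scheme described above can be sketched as two records sharing an identifier and start time. The field names below are illustrative assumptions, not terms from the text; only the structure (shared session/activity UUIDs, shared start time, and a sample rate on the biometric side) reflects the description.

```python
import uuid

# A two-part identifier: one UUID for the session, one for an activity
# within it, so a session (e.g., a trip) can contain several activities.
session_id = uuid.uuid4()
activity_id = uuid.uuid4()

video_record = {
    "id": (session_id, activity_id),
    "start_time": 1700000000.0,      # session start, epoch seconds
    "metadata": {"source": "external_camera", "length_s": 3600.0},
}

biometric_record = {
    "id": (session_id, activity_id),  # same identifier links the two streams
    "start_time": 1700000000.0,       # same start time enables synchronization
    "sample_rate_spm": 30,            # 30 samples/minute = one every 2 seconds
    "samples": [],                    # heart-rate values appended as received
}
```

Because both records carry the same identifier and start time, a stored sample's moment in the video follows directly from its index and the sample rate.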

In some cases, the sample rate used by the platform may be the sample rate of the biometric device (i.e., the rate at which data is provided by the biometric device). In other cases, the sample rate used by the platform may be independent from the rate at which data is received (e.g., a fixed rate, a configurable rate, etc.). For example, if the biometric device is configured to provide biometric data at a rate of sixty samples per minute (60 spm), the platform may still store the data at a rate of 30 spm. In other words, with a sample rate of 30 spm, the platform will have stored five values after ten seconds, the first value being the second value transmitted by the biometric device, the second value being the fourth value transmitted by the biometric device, and so on. Alternatively, if the biometric device is configured to provide biometric data only when the biometric data changes, the platform may still store the data at a rate of 30 spm. In this case, the first value stored by the platform may be the first value transmitted by the biometric device, the second value stored may be the first value transmitted by the biometric device if at the time of storage no new value has been transmitted by the biometric device, the third value stored may be the second value transmitted by the biometric device if at the time of storage a new value is being transmitted by the biometric device, and so on.
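The platform-side storage policy described above (store at a fixed rate regardless of how, or how often, the device transmits) can be sketched as a resampling function. This is an illustrative sketch with assumed names; exact boundary behavior (which transmitted value lands on which storage tick) is a design choice, so the stored values here may differ by one sample from the worked example in the text.

```python
def resample(events, platform_spm, duration_s):
    """Resample irregular device events [(t_seconds, value), ...] onto the
    platform's fixed storage rate, repeating the last known value when the
    device is silent (e.g., a device that only transmits on change)."""
    interval = 60.0 / platform_spm
    stored, latest, i = [], None, 0
    t = 0.0
    while t < duration_s:
        # Consume every device transmission up to this storage tick.
        while i < len(events) and events[i][0] <= t:
            latest = events[i][1]
            i += 1
        stored.append(latest)        # repeat the last known value if silent
        t += interval
    return stored
```

With a 60 spm device and a 30 spm platform rate this keeps every other transmitted value; with a transmit-on-change device it repeats the previous value until a new one arrives, matching both cases discussed above.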

Once the video and biometric data is stored and linked, algorithms can be used to display the data together. For example, if biometric data is stored at a sample rate of 30 spm, which may be fixed or configurable, algorithms (e.g., 216) can be used to display a first biometric value (e.g., below the video data, superimposed over the video data, etc.) at the start of the video clip, a second biometric value two seconds later (two seconds into the video clip), a third biometric value two seconds later (four seconds into the video clip), etc. In alternate embodiments of the present invention, non-video data (e.g., biometric data, self-realization data, etc.) can be stored with a plurality of time-stamps (e.g., individual stamps or offsets for each stored value), which can be used together with the start time to synchronize non-video data to video data.
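Both display schemes described above reduce to simple lookups: sample-rate data is indexed by playback offset, while time-stamped data is scanned for the most recent entry at or before the offset. These are illustrative sketches; the names and record layout are assumptions.

```python
def biometric_at(playback_s, record):
    """Biometric value to display at a given video offset, using the shared
    start time and a fixed sample rate (samples per minute)."""
    index = int(playback_s * record["sample_rate_spm"] / 60.0)
    index = min(index, len(record["samples"]) - 1)   # clamp past the end
    return record["samples"][index]

def timestamped_at(playback_s, stamped):
    """For non-video data stored as (offset_from_start_s, value) pairs,
    return the most recent value at or before the playback offset."""
    value = None
    for offset, v in stamped:      # assumed sorted by offset
        if offset <= playback_s:
            value = v
        else:
            break
    return value
```

At 30 spm, `biometric_at(4.0, record)` returns the third stored value, i.e., the value two samples (four seconds) into the clip, matching the two-second cadence described above.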

It should be appreciated that while the client platform can be configured to function autonomously (i.e., independent of the host computing device 240), in one embodiment of the present invention, certain functions of the client platform are performed by the host computing device 240, and can only be performed when the computing device 200 is in communication with the host computing device 240. Such an embodiment is advantageous in that it not only offloads certain functions to the host computing device 240, but it ensures that these functions can only be performed by the host computing device 240 (e.g., requiring a user to subscribe to a cloud service in order to perform certain functions). Functions offloaded to the cloud may include functions that are necessary to display non-video data together with video data (e.g., the linking of information to video data, the linking of information to non-video data, synchronizing non-video data to video data, etc.), or may include more advanced functions, such as generating and/or sharing a “highlight reel.” In alternate embodiments, the computing device 200 is configured to perform the foregoing functions as long as certain criteria have been met. These criteria may include the computing device 200 being in communication with the host computing device 240, or the computing device 200 previously being in communication with the host computing device 240 and the period of time since the last communication being equal to or less than a predetermined amount of time. Technology known to those skilled in the art (e.g., using a keyed hash-based message authentication code (HMAC), a stored time of said last communication (allowing said computing device to determine whether said delta is less than a predetermined amount of time), etc.) can be used to ensure that these criteria are met before allowing the performance of certain functions.
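The "recent, tamper-evident last communication" criterion mentioned above might be checked as follows, assuming the host signs the sync time with a shared secret so the client cannot simply edit the stored time forward. This is a sketch; the function names, record shape, and the one-week default are all assumptions.

```python
import hashlib
import hmac
import time

def sign_sync_time(ts, secret):
    """Record the host would issue at sync time: (timestamp, HMAC tag)."""
    tag = hmac.new(secret, str(ts).encode(), hashlib.sha256).hexdigest()
    return (ts, tag)

def may_run_offline(last_sync_record, secret, max_age_s=7 * 24 * 3600, now=None):
    """Allow gated functions only if the stored last-sync time is authentic
    (HMAC verifies) and recent enough (delta within max_age_s)."""
    now = time.time() if now is None else now
    ts, tag = last_sync_record
    expected = hmac.new(secret, str(ts).encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False                 # record was tampered with or forged
    return (now - ts) <= max_age_s
```

`hmac.compare_digest` is used for the comparison to avoid timing side channels; in practice the secret would need to be stored where the user cannot read it, which is a separate problem this sketch does not address.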

Block diagrams of an exemplary computing device and an exemplary biometric device are shown in FIG. 5. In particular, the exemplary biometric device 500 includes a sensor for sensing biometric data, a display for interfacing with the user and displaying various information (e.g., biometric data, set-up data, operation data, such as start, stop, and pause, etc.), a memory for storing the sensed biometric data, a transceiver for communicating with the exemplary computing device 600, and a processor for operating and/or driving the transceiver, memory, sensor, and display. The exemplary computing device 600 includes a transceiver (1) for receiving biometric data from the exemplary biometric device 500 (e.g., using any of telemetry, any WiFi standard, DLNA, Apple AirPlay, Bluetooth, near field communication (NFC), RFID, ZigBee, Z-Wave, Thread, Cellular, a wired connection, infrared or other method of data transmission, datacasting or streaming, etc.), a memory for storing the biometric data, a display for interfacing with the user and displaying various information (e.g., biometric data, set-up data, operation data, such as start, stop, and pause, input in-session comments or add voice notes, etc.), a keyboard for receiving user input data, a transceiver (2) for providing the biometric data to the host computing device via the Internet (e.g., using any of telemetry, any WiFi standard, DLNA, Apple AirPlay, Bluetooth, near field communication (NFC), RFID, ZigBee, Z-Wave, Thread, Cellular, a wired connection, infrared or other method of data transmission, datacasting or streaming, etc.), and a processor for operating and/or driving the transceiver (1), transceiver (2), keyboard, display, and memory.

The keyboard in the computing device 600, or alternatively the keyboard in biometric device 500, may be used to enter self-realization data, or data on how the user is feeling at a particular time. For example, if the user is feeling tired, the user may hit the “T” button on the keyboard. If the user is feeling their endorphins kick in, the user may hit the “E” button on the keyboard. And if the user is getting their second wind, the user may hit the “S” button on the keyboard. This data is then stored and linked to either a sample rate (like biometric data) or time-stamp data, which may be a time or an offset to the start time that each button was pressed. This would allow the self-realization data, in the same way as the biometric data, to be synchronized to the video data. It would also allow the self-realization data, like the biometric data, to be searched or filtered (e.g., in order to find video corresponding to a particular event, such as when the user started to feel tired, etc.).
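The key-press scheme described above amounts to recording labeled offsets from the session start, which then makes the self-realization data both synchronizable and searchable. A minimal sketch, with the class name, label strings, and entry format all being illustrative assumptions:

```python
# Mapping from the keys described above to feeling labels (labels assumed).
KEY_LABELS = {"T": "tired", "E": "endorphins", "S": "second wind"}

class SelfRealizationLog:
    """Store self-realization data as (offset_from_start_s, label) pairs,
    so it can be synchronized to video exactly like biometric data."""

    def __init__(self, session_start):
        self.session_start = session_start
        self.entries = []

    def on_key(self, key, pressed_at):
        """Record a key press; unknown keys are ignored."""
        label = KEY_LABELS.get(key.upper())
        if label is not None:
            self.entries.append((pressed_at - self.session_start, label))

    def search(self, label):
        """Offsets for a given feeling, e.g., to seek the video to the
        moment the user started to feel tired."""
        return [t for t, l in self.entries if l == label]
```

Storing offsets rather than absolute times means the same entries work against any data stream that shares the session's start time.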

It should be appreciated that the present invention is not limited to the block diagrams shown in FIG. 5, and a biometric device and/or a computing device that includes fewer or more components is within the spirit and scope of the present invention. For example, a biometric device that does not include a display, or includes a camera and/or microphone is within the spirit and scope of the present invention, as are other data-entry devices or methods beyond a keyboard, such as a touch screen, digital pen, voice/audible recognition device, gesture recognition device, so-called “wearable,” or any other recognition device generally known to those skilled in the art. Similarly, a computing device that only includes one transceiver, further includes a camera (for capturing video) and/or microphone (for capturing audio or for performing spatial analytics through recording or measurement of sound and how it travels), or further includes a sensor (see FIG. 4) is within the spirit and scope of the present invention. It should also be appreciated that self-realization data is not limited to how a user feels, but could also include an event that the user or the application desires to memorialize. For example, the user may want to record (or time-stamp) the user biking past wildlife, or a particular architectural structure, or the application may want to record (or time-stamp) a patient pressing a “request nurse” button, or any other sensed non-biometric activity of the user.

Referring back to FIG. 1, as discussed above in conjunction with FIG. 2B, the host application (or client platform) may operate on the computing device 108. In this embodiment, the computing device 108 (e.g., a smart phone) may be configured to receive biometric data from the biometric device 110 (either in real-time, or at a later stage, with a time-stamp corresponding to the occurrence of the biometric data), and to synchronize the biometric data with the video data and/or the audio data recorded by the computing device 108 (or a camera and/or microphone operating thereon). It should be appreciated that in this embodiment of the present invention, other than the host application being run locally (e.g., on the computing device 108), the host application (or client platform) operates as previously discussed.

Again, with reference to FIG. 1, in another embodiment of the present invention, the computing device 108 further includes a sensor for sensing biometric data. In this embodiment of the present invention, the host application (or client platform) operates as previously discussed (locally on the computing device 108), and functions to at least synchronize the video, audio, and/or biometric data, and allow the synchronized data to be played or presented to a user (e.g., via a display portion, via a display device connected directly to the computing device, via a user computing device connected to the computing device (e.g., directly, via the network, etc.), etc.).

It should be appreciated that the present invention, in any embodiment, is not limited to the computing devices (number or type) shown in FIGS. 1 and 2, and may include any of a computing, sensing, digital recording, GPS or otherwise location-enabled device (for example, using WiFi Positioning Systems “WPS”, or other forms of deriving geographical location, such as through network triangulation), generally known to those skilled in the art, such as a personal computer, a server, a laptop, a tablet, a smart phone, a cellular phone, a smart watch, an activity band, a heart-rate strap, a mattress sensor, a shoe sole sensor, a digital camera, a near field sensor or sensing device, etc. It should also be appreciated that the present invention is not limited to any particular biometric device, and includes biometric devices that are configured to be worn on the wrist (e.g., like a watch), worn on the skin (e.g., like a skin patch) or scalp, or incorporated into computing devices (e.g., smart phones, etc.), either integrated in, or added to items such as bedding, wearable devices such as clothing, footwear, helmets or hats, or ear phones, or athletic equipment such as rackets, golf clubs, or bicycles, where other kinds of data, including physical performance metrics such as racket or club head speed, or pedal rotation/second, or footwear recording such things as impact zones, gait or shear, can also be measured synchronously with biometrics, and synchronized to video. 
Other data can also be measured synchronously with video data, including biometrics on animals (e.g., a bull’s acceleration or pivot or buck in a bull riding event, a horse’s acceleration matched to heart rate in a horse race, etc.), and physical performance metrics of inanimate objects, such as revolutions/minute (e.g., in a vehicle, such as an automobile, a motorcycle, etc.), miles/hour (or the like) (e.g., in a vehicle, such as an automobile, a motorcycle, a bicycle, etc.), or G-forces (e.g., experienced by the user, an animal, an inanimate object, etc.). All of this data (collectively “non-video data,” which may include metadata, or data on non-video data) can be synchronized to video data using a sample rate and/or at least one time-stamp, as discussed above.

It should further be appreciated that the present invention need not operate in conjunction with a network, such as the Internet. For example, as shown in FIG. 2A, the biometric device 110, which may be, for example, a wireless activity band for sensing heart rate, and the computing device 108, which may be, for example, a digital video recorder, may be connected directly to the host computing device 106 running the host application (not shown), where the host application functions as previously discussed. In this embodiment, the video, audio, and/or biometric data can be provided to the host application either (i) in real time, or (ii) at a later time, since the data is synchronized with a sample rate and/or time-stamp. This would allow, for example, at least video of an athlete, or a sportsman or woman (e.g., a football player, a soccer player, a racing driver, etc.) to be shown in action (e.g., playing football, playing soccer, motor racing, etc.) along with biometric data of the athlete in action (see, e.g., FIG. 7). By way of example only, this would allow a user to view a soccer player’s heart rate 730 as the soccer player dribbles a ball, kicks the ball, heads the ball, etc. This can be accomplished using a time stamp 720 (e.g., start time, etc.), or other sequencing method using metadata (e.g., sample rate, etc.), to synchronize the video data 710 with the biometric data 730, allowing the user to view the soccer player at a particular time 740 (e.g., 76 seconds) and biometric data associated with the athlete at that particular time 740 (e.g., 76 seconds). Similar technology can be used to display biometric data on other athletes, card players, actors, online gamers, etc.

Where it is desirable to monitor or watch more than one individual from a camera view (for example, patients in a hospital ward being observed from a remote nursing station, or multiple players on the sports field during a televised broadcast of a sporting event such as a football game), the system can be so configured, with the subjects using Bluetooth or other wearable or NFC sensors capable of transmitting their biometrics over practicable distances (in some cases with their sensing capability also being location-enabled in order to identify which specific individual to track), in conjunction with relays or beacons if necessary, such that the viewer can switch the selection of which of one or multiple individuals’ biometric data to track alongside the video or broadcast, and, if wanted and where possible within the limitations of the video capture field of the camera used, also concentrate the view of the video camera on a reduced group or on a specific individual. In an alternate embodiment of the present invention, selection of biometric data is automatically accomplished, for example, based on the individual’s location in the video frame (e.g., center of the frame), rate of movement (e.g., moving quicker than other individuals), or proximity to a sensor (e.g., being worn by the individual, embedded in the ball being carried by the individual, etc.), which may be previously active or activated by a remote radio frequency signal. Activation of the sensor may result in biometric data of the individual being transmitted to a receiver, or may allow the receiver to identify biometric data of the individual amongst other data being transmitted (e.g., biometric data from other individuals).

In the context of fitness or sports tracking, it should be appreciated that the capturing of an individual’s activity on video is not dependent on the presence of a third party to do this, but various methods of self-videoing can be envisaged, such as a video capture device mounted on the subject’s wrist or a body harness, or on a selfie attachment or a gimbal, or fixed to an object (e.g., sports equipment such as bicycle handlebars, objects found in sporting environments such as a basketball or tennis net, a football goal post, a ceiling, etc., a drone-borne camera following the individual, a tripod, etc.). It should be further noted that such video capture devices can include more than one camera lens, such that not only the individual’s activity may be videoed, but also simultaneously a different view, such as what the individual is watching or sees in front of them (i.e., the user’s surroundings). The video capture device could also be fitted with a convex mirror lens, or have a convex mirror added as an attachment on the front of the lens, or be a full 360 degree camera, or multiple 360 cameras linked together, such that either with or without the use of specialized software known in the art, a 360 degree all-around or surround view can be generated, or a 360 global view in all axes can be generated.

In the context of augmented or virtual reality, where the individual is wearing suitably equipped augmented reality (“AR”) or virtual reality (“VR”) glasses, goggles, headset or is equipped with another type of viewing display capable of rendering AR, VR, or other synthesized or real 3D imagery, the biometric data such as heart rate from the sensor, together with other data such as, for example, work-out run or speed, from a suitably equipped sensor, such as an accelerometer capable of measuring motion and velocity, could be viewable by the individual, superimposed on their viewing field. Additionally, an avatar of the individual in motion could be superimposed in front of the individual’s viewing field, such that they could monitor or improve their exercise performance, or otherwise enhance the experience of the activity by viewing themselves or their own avatar, together (e.g., synchronized) with their performance (e.g., biometric data, etc.). Optionally, the biometric data also of their avatar, or the competing avatar, could be simultaneously displayed in the viewing field. In addition (or alternatively), at least one additional training or competing avatar can be superimposed on the individual’s view, which may show the competing avatar(s) in relation to the individual (e.g., showing them superimposed in front of the individual, showing them superimposed to the side of the user, showing them behind the individual (e.g., in a rear-view-mirror portion of the display, etc.), and/or showing them in relation to the individual (e.g., as blips on a radar-screen portion of the display, etc.), etc.).
Competing avatar(s), either of real people such as their friends or training acquaintances, can be used to motivate the user to improve or correct their performance and/or to make their exercise routine more interesting (e.g., by allowing the individual to “compete” in the AR, VR, or Mixed Reality (“MR”) environment while exercising, or training, or virtually “gamifying” their activity through the visualization of virtual destinations or locations, imagined or real, such as historical sites, scanned or synthetically created through computer modeling).

Additionally, any multimedia sources to which the user is being exposed whilst engaging in the activity which is being tracked and recorded should similarly be able to be recorded with the time stamp, for analysis and/or correlation of the individual’s biometric response. An example of an application of this could be in the selection of specific music tracks for when someone is carrying out a training activity, where the correlation of the individual’s past response, based, for example, on heart rate (and how well they achieved specific performance levels or objectives), to music type (e.g., the specific music track(s), track(s) similar to the specific track(s), track(s) recommended or selected by others who have listened to or liked the specific track(s), etc.) is used to develop a personalized algorithm, in order to optimize automated music selection to either enhance the physical effort, or to maximize recovery during and after exertion. The individual could further specify that they wished for the specific track or music type, based upon the personalized selection algorithm, to be played based upon their geographical location; an example of this would be someone who frequently or regularly uses a particular circuit for training or recreational purposes. Alternatively, tracks or types of music could be selected by recording and correlating past biometric response with self-realization inputs made while particular tracks were being listened to.
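As an illustration of the kind of personalized selection algorithm described above, here is a minimal sketch in Python. The scoring rule (average distance between past heart-rate response and a target training heart rate) and all names (`score_track`, `pick_track`) are illustrative assumptions, not taken from the text.

```python
# Hypothetical sketch: pick the track whose past heart-rate response
# best matches the desired training heart rate. Names and scoring rule
# are assumptions for illustration only.

def score_track(past_sessions, target_hr):
    """Average distance between observed heart rate while this track
    played and the desired training heart rate (lower is better)."""
    rates = [hr for session in past_sessions for hr in session]
    if not rates:
        return float("inf")  # never played: no evidence to score
    mean_hr = sum(rates) / len(rates)
    return abs(mean_hr - target_hr)

def pick_track(history, target_hr):
    """history maps track name -> list of past heart-rate traces."""
    return min(history, key=lambda t: score_track(history[t], target_hr))

history = {
    "uptempo_mix": [[150, 158, 162], [155, 160]],
    "cooldown_set": [[98, 102], [95, 99, 101]],
}
assert pick_track(history, 160) == "uptempo_mix"   # enhance effort
assert pick_track(history, 100) == "cooldown_set"  # maximize recovery
```

A real system would of course use richer features (track similarity, recommendations from other listeners, geographic context) as the paragraph above suggests; the point here is only the correlate-then-select loop.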

It should be appreciated that biometric data does not need to be linked to physical movement or sporting activity, but may instead be combined with video of an individual at a fixed location (e.g., where the individual is being monitored remotely or recorded for subsequent review), for example, as shown in FIG. 3, for health reasons or a medical condition, such as in their home or in hospital, or a senior citizen in an assisted-living environment, or a sleeping infant being monitored by parents whilst in another room or location.

Alternatively, the individual might be driving past or in the proximity of a park or a shopping mall, with their location being recorded, typically by geo-stamping, or additional information being added by geo-tagging, such as the altitude or weather at the specific location, together with the information or content being viewed or interacted with by the individual (e.g., a particular advertisement, a movie trailer, a dating profile, etc.) on the Internet or a smart/enabled television, or on any other networked device incorporating a screen, and their interaction with that information or content being viewable or recorded by video, in conjunction with their biometric data, with all these sources of data being able to be synchronized for review, by virtue of each of these individual sources being time-stamped or the like (e.g., sampled, etc.). This would allow a third party (e.g., a service provider, an advertiser, a provider of advertisements, a movie production company/promoter, a poster of a dating profile, a dating site, etc.) to acquire, for analysis of the viewer’s response, the biometric data associated with the viewing of certain data, where either the viewer or their profile could optionally be identifiable by the third party’s system, or where only the identity of the viewer’s interacting device is known, or can be acquired from the biometric sending party’s GPS, or otherwise location-enabled, device.

For example, an advertiser or an advertisement provider could see how people are responding to an advertisement, or a movie production company/promoter could evaluate how people are responding to a movie trailer, or the poster of a dating profile, or the dating site itself, could see how people are responding to the dating profile. Alternatively, viewers of an online gaming or eSports broadcast service such as twitch.tv, or of a televised or streamed online poker game, could view the active participants’ biometric data simultaneously with the primary video source, as well as the participants’ visible reactions or performance. As with video/audio, this can either be synchronized in real-time, or synchronized later using the embedded time-stamp or the like (e.g., sample rate, etc.). Additionally, where facial expression analysis is being generated from the source video, for example in the context of measuring an individual’s response to advertising messages, since the video is already time-stamped (e.g., with a start time), the facial expression data can be synchronized and correlated to the physical biometric data of the individual, which has similarly been time-stamped and/or sampled.

As previously discussed, the host application may be configured to perform a plurality of functions. For example, the host application may be configured to synchronize video and/or audio data with biometric data. This would allow, for example, an individual watching a sporting event (e.g., on a TV, computer screen, etc.) to watch how each player’s biometric data changes during play of the sporting event, or also to map those biometric data changes to other players or other comparison models. Similarly, a doctor, nurse, or medical technician could record a person’s sleep habits, and watch, search or later review, the recording (e.g., on a TV, computer screen, etc.) while monitoring the person’s biometric data. The system could also use machine learning to build a profile for each patient, identifying certain characteristics of the patient (e.g., their heart rate rhythm, their breathing pattern, etc.) and notify a doctor, a nurse, or medical technician or trigger an alarm if the measured characteristics appear abnormal or irregular.
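The patient-profile idea above (learn what is normal for an individual, then notify staff when a reading looks abnormal) could be sketched as follows. The simple mean/standard-deviation baseline and the k-sigma rule are assumptions standing in for whatever machine-learning model an implementation might actually use.

```python
# Hypothetical sketch of a per-patient anomaly check. The baseline model
# (mean +/- k standard deviations) is an illustrative assumption.
import statistics

def learn_baseline(history):
    """Learn a simple per-patient baseline (mean, stdev) from past readings."""
    return statistics.mean(history), statistics.stdev(history)

def is_abnormal(reading, baseline, k=3.0):
    """Flag a reading more than k standard deviations from the baseline."""
    mean, sd = baseline
    return abs(reading - mean) > k * sd

resting_hr = [62, 64, 61, 63, 65, 62, 64, 63]  # past overnight readings
baseline = learn_baseline(resting_hr)
assert not is_abnormal(64, baseline)
assert is_abnormal(120, baseline)  # would notify staff / trigger an alarm
```

The same pattern extends to breathing rate or rhythm features; only the feature extraction changes, not the profile-then-compare loop.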

The host application could also be configured to provide biometric data to a remote user via a network, such as the Internet. For example, a biometric device (e.g., a smart phone with a blood-alcohol sensor) could be used to measure a person’s blood-alcohol level (e.g., while the person is talking to the remote user via the smart phone), and to provide the person’s blood-alcohol level to the remote user. By placing the sensor near, or incorporating it in, the microphone, such a system would allow a parent to determine whether their child has been drinking alcohol by participating in a telephone or video call with their child. Different sensors known in the art could be used to sense different chemicals in the person’s breath, or to detect people’s breathing patterns through analysis of sound and speed variations, allowing the monitoring party to determine whether the subject has been using alcohol or other controlled substances, or to conduct breath analysis for other diagnostic reasons.

The system could also be adapted with a so-called “lab on a chip” (LOC) integrated in the device itself, or with a suitable attachment added to it, for the remote testing, for example, of blood samples, where the smart phone is either used for the collection and sending of the sample to a testing laboratory for analysis, or is used to carry out the sample collection and analysis within the device itself. In either case the system is adapted such that the identity of the subject and their blood sample are cross-authenticated, for the purposes of sample and analysis integrity as well as patient identity certainty, through the simultaneous recording of the time-stamped video and the time and/or location (or GPS) stamping of the sample at the point of collection and/or submission. This confirmation of identity is particularly important for regulatory, record keeping and health insurance reasons in the context of telemedicine, since the individual will increasingly be performing functions which, until now, have typically been carried out on-site at the relevant facility by qualified and regulated medical or laboratory staff, rather than by the subject using a networked device, either for upload to the central analysis facility, or for remote analysis on the device itself.

This, or the collection of other biometric data such as heart rate or blood pressure, could also be applied in situations where it is critical for safety reasons to check, via regular remote video monitoring in real time, whether, say, a pilot of a plane or a truck or train driver is in a fit and sound condition to be in control of their vehicle or vessel, or whether, for example, they are experiencing a sudden incapacity or heart attack. Because the monitored person is being videoed at the same time as providing time-stamped, geo-stamped and/or sampled biometric data, there is less possibility for the monitored person, or a third party, to “trick”, “spoof” or bypass the system. In a patient/doctor remote consultation setting, the system could be used for secure video consults, where, from a regulatory or health insurance perspective, the consultation and its occurrence are also validated through the time and/or geo stamp validation. Furthermore, where there is a requirement for a higher level of authentication, the system could further be adapted to use facial recognition or biometric algorithms, to ensure that the correct person is being monitored, or facial expression analysis could be used for behavioral pattern assessment.

The concern that a monitored party would not wish to be permanently monitored (e.g., a senior citizen not wanting to have their every move and action continuously videoed) could be mitigated by the incorporation of various additional features. In one embodiment, the video would be permanently recorded in a loop system that uses a reserved memory space, recording for a predetermined period of n minutes and then automatically erasing the video, where n represents the selected minutes in the loop and E is the event which prevents the recorded loop of n minutes from being erased, and which triggers both the real-time transmission of the visible state or actions of the monitored person to the monitoring party, as well as the ability to rewind, in order for the monitoring party to be able to review the physical manifestation leading up to E. The trigger mechanism for E could be, for example, the occurrence of biometric data outside the predefined range, or the notification of another anomaly such as a fall alert, activated by movement or location sensors such as a gyroscope, accelerometer or magnetometer within the health band device worn by, say, the senior citizen, or on their mobile phone or other networked motion-sensing device in their proximity. The monitoring party would be able not only to view the physical state of the monitored party after E, whilst getting a simultaneous read-out of their relevant biometric data, but also to review the events and biometric data immediately leading up to the event trigger notification. Alternatively, the system could be further calibrated so that although video is recorded, as before, in the n-minute loop, no video from the loop will actually be transmitted to a monitoring party until the occurrence of E.
The advantages of this system include the respect of the privacy of the individual, where only the critical event and the time preceding the event would be available to a third party, resulting also in a desired optimization of both the necessary transmission bandwidth and the data storage requirements. It should be appreciated that the foregoing system could also be configured such that the E notification for remote senior, infant or patient monitoring is further adapted to include facial tracking and/or expression recognition features.
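The n-minute loop with trigger event E described above maps naturally onto a ring buffer. A minimal sketch, assuming frames arrive one at a time and measuring the loop in frames rather than minutes (class and method names are illustrative):

```python
# Hypothetical sketch of the privacy-preserving loop recorder: only the
# last n frames are retained, and nothing leaves the device until event E.
from collections import deque

class LoopRecorder:
    def __init__(self, n_frames):
        # deque with maxlen erases the oldest frame automatically,
        # implementing the self-erasing n-unit loop.
        self.buffer = deque(maxlen=n_frames)
        self.event_clip = None  # footage frozen for review after E

    def record(self, frame):
        self.buffer.append(frame)

    def trigger_event(self):
        """E occurred (biometric out of range, fall alert, etc.):
        preserve the loop so the lead-up to E can be reviewed."""
        self.event_clip = list(self.buffer)

rec = LoopRecorder(n_frames=3)
for frame in ["f1", "f2", "f3", "f4", "f5"]:
    rec.record(frame)
rec.trigger_event()
assert rec.event_clip == ["f3", "f4", "f5"]  # only the last n frames survive
```

Until `trigger_event` is called, `event_clip` stays `None` and no footage is transmitted, which is what yields the bandwidth and privacy benefits the text describes.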

Privacy could be further improved for the user if their video data and biometric data are stored by themselves, on their own device, on their own external storage, or on their own secure third-party “cloud” storage, but with the index metadata of the source material (which enables the sequencing, extrapolation, searching and general processing of the source data) remaining at a central server, such as, in the case of medical records for example, at a doctor’s office or other healthcare facility. Such a system would enable the monitoring party to have access to the video and other data at the time of consultation, but with the video etc. remaining in the possession of the subject. A further advantage of separating the hosting of the storage of the video and biometric source data from the treatment of the data, beyond enhancing the user’s privacy and their data security, is that, because the data is stored locally with the subject and does not have to be uploaded to the computational server, cost is reduced and the efficiency of storage and data bandwidth is increased. This would also be of benefit where such remote uploads of tests, for review by qualified medical staff at a different location from the subject, occur in areas of lower-bandwidth network coverage. A choice can also be made to lower the frame rate of the video material, provided that this is made consistent with the sampling rate to confirm the correct time stamp, as previously described.

It should be appreciated that with information being stored at the central server (or the host device), various techniques known in the art can be implemented to secure the information, and prevent unauthorized individuals or entities from accessing the information. Thus, for example, a user may be provided (or allowed to create) a user name, password, and/or any other identifying (or authenticating) information (e.g., a user biometric, a key fob, etc.), and the host device may be configured to use the identifying (or authenticating) information to grant access to the information (or a portion thereof). Similar security procedures can be implemented for third parties, such as medical providers, insurance companies, etc., to ensure that the information is only accessible by authorized individuals or entities. In certain embodiments, the authentication may allow access to all the stored data, or to only a portion of the stored data (e.g., a user authentication may allow access to personal information as well as stored video and/or biometric data, whereas a third party authentication may only allow access to stored video and/or biometric data). In other embodiments, the authentication is used to determine what services are available to an individual or entity logging into the host device, or the website. For example, visitors to the website (or non-subscribers) may only be able to synchronize video/audio data to biometric data and/or perform rudimentary searching or other processing, whereas a subscriber may be able to synchronize video/audio data to biometric data and/or perform more detailed searching or other processing (e.g., to create a highlight reel, etc.).

It should further be appreciated that while there are advantages to keeping just the index metadata at the central server, in the interests of storage and data upload efficiency as well as of providing a common platform for the interoperability of the different data types, and storing the video and/or audio data on the user’s own device (e.g., iCloud™, DropBox™, OneDrive™, etc.), the present invention is not so limited. Thus, in certain embodiments, where feasible, it may be beneficial to (1) store data (e.g., video, audio, biometric data, and metadata) on the user’s device (e.g., allowing the user device to operate independently of the host device), (2) store data (e.g., video, audio, biometric data, and metadata) on the central server (e.g., host device) (e.g., allowing the user to access the data from any network-enabled device), or (3) store a first portion (e.g., video and audio data) on the user’s device and a second portion (e.g., biometric data and metadata) on the central server (e.g., host device) (e.g., allowing the user to only view the synchronized video/audio/biometric data when the user device is in communication with the host device, or to only search the biometric data (e.g., to create a “highlight reel”) or rank the biometric data (to identify and/or list data chronologically, by magnitude (highest to lowest or lowest to highest), best reviewed, worst reviewed, most viewed, least viewed, etc.) when the user device is in communication with the host device, etc.).

In another embodiment of the present invention, the functionality of the system is further (or alternatively) limited by the software operating on the user device and/or the host device. For example, the software operating on the user device may allow the user to play the video and/or audio data, but not to synchronize the video and/or audio data to the biometric data. This may be because the central server is used to store data critical to synchronization (time-stamp index, metadata, biometric data, sample rate, etc.) and/or software operating on the host device is necessary for synchronization. By way of another example, the software operating on the user device may allow the user to play the video and/or audio data, either alone or synchronized with the biometric data, but may not allow the user device (or may limit the user device’s ability) to search or otherwise extrapolate from, or process the biometric data to identify relevant portions (e.g., which may be used to create a “highlight reel” of the synchronized video/audio/biometric data) or to rank the biometric and/or video data. This may be because the central server is used to store data critical to search and/or rank the biometric data (biometric data, biometric metadata, etc.), and/or software necessary for searching (or performing advanced searching of) and/or ranking (or performing advanced ranking of) the biometric data.

In any or all of the above embodiments, the system could be further adapted to include password or other forms of authentication to enable secured access (or deny unauthorized access) to the data in either of one or both directions, such that the user requires permission to access the host, or the host to access the user’s data. Where interaction between the user and the monitoring party or host is occurring in real time such as in a secure video consult between patient and their medical practitioner or other medical staff, data could be exchanged and viewed through the establishment of a Virtual Private Network (VPN). The actual data (biometric, video, metadata index, etc.) can alternatively or further be encrypted both at the data source, for example at the individual’s storage, whether local or cloud-based, and/or at the monitoring reviewing party, for example at patient records at the medical facility, or at the host administration level.

In the context of very young infant monitoring, a critical and often unexplained problem is Sudden Infant Death Syndrome (SIDS). Whilst the incidences of SIDS are often unexplained, various devices attempt to prevent its occurrence. However, by combining the elements of the current system to include sensor devices in or near the baby’s crib measuring relevant biometric data including heart rate, sleep pattern and breath analysis, together with other measures such as ambient temperature, and a recording device to capture movement, audible breathing, or lack thereof (i.e., silence) over a predefined period of time, the various parameters could be set in conjunction with the time-stamped video record, by the parent or other monitoring party, to provide a more comprehensive alert, to initiate a more timely action or intervention by the user, or indeed to decide that no response is in fact necessary. Additionally, in the case, for example, of a crib monitoring situation, the system could be so configured as to develop from previous observation, with or without input from a monitoring party, a learning algorithm to help in discerning what is “normal,” what is a false positive, and what might constitute an anomaly, and therefore a call to action.

The host application could also be configured to play video data that has been synchronized to biometric data, or to search for the existence of certain biometric data. For example, as previously discussed, by video recording (with sound) a person sleeping, and synchronizing the recording with biometric data (e.g., sleep patterns, brain activity, snoring, breathing patterns, etc.), the biometric data can be searched to identify where certain measures, such as sound levels as measured for example in decibels, or periods of silence, exceed or drop below a threshold value, allowing the doctor, nurse, or medical technician to view the corresponding video portion without having to watch the entire video of the person sleeping.

Such a method is shown in FIG. 6, starting at step 700, where biometric data and time stamp data (e.g., start time, sample rate) is received (or linked) at step 702. Audio/video data and time stamp data (e.g., start time, etc.) is then received (or linked) at step 704. The time stamp data (from steps 702 and 704) is then used at step 706 to synchronize the biometric data with the audio/video data. The user is then allowed to operate the audio/video at step 708. If the user selects play, then the audio/video is played at step 710. If the user selects search, then the user is allowed to search the biometric data at step 712. Finally, if the user selects stop, then the video is stopped at step 714.
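The search step can be illustrated with a short sketch: given biometric samples taken at a known rate from the shared start time, return the video offsets worth reviewing. The function name and the simple greater-than threshold rule are illustrative assumptions.

```python
# Hypothetical sketch of searching synchronized biometric data: because
# samples and video share a start time, a sample index converts directly
# into a playback offset.

def search_biometric(samples, sample_period_s, threshold):
    """Return video offsets (seconds from the shared start time) at which
    a biometric sample exceeds `threshold`."""
    return [i * sample_period_s
            for i, value in enumerate(samples)
            if value > threshold]

# Heart-rate samples taken every 2 s, aligned with the video start time.
hr = [60, 62, 61, 95, 97, 63]
offsets = search_biometric(hr, sample_period_s=2, threshold=90)
assert offsets == [6, 8]  # jump playback to 6 s into the video
```

A viewer can then be taken straight to those offsets instead of watching the whole recording, which is the point of the search step in the flow above.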

It should be appreciated that the present invention is not limited to the steps shown in FIG. 6. For example, a method that allows a user to search for biometric data that meets at least one condition, play the corresponding portion of the video (or a portion just before the condition), and stop the video from playing after the biometric data no longer meets the at least one condition (or just after the biometric data no longer meets the condition) is within the spirit and scope of the present invention. By way of another example, if the method involves interaction between the user device and the host device to synchronize the video/audio data and the biometric data and/or search the biometric data, then the method may further involve the steps of uploading the biometric data and/or metadata to the host device (e.g., in this embodiment the video/audio data may be stored on the user device), and using the biometric data and/or metadata to create a time-stamp index for synchronization and/or to search the biometric data for relevant or meaningful data (e.g., data that exceeds a threshold, etc.). By way of yet another example, the method may not require step 706 if the audio/video data and the biometric data are played together (synchronized) in real-time, or at the time the data is being played (e.g., at step 710).

In one embodiment of the present invention, as shown in FIG. 8, the video data 800, which may also include audio data, starts at a time “T” and continues for a duration of “n.” The video data is preferably stored in memory (locally and/or remotely) and linked to other data, such as an identifier 802, start time 804, and duration 806. Such data ties the video data to at least a particular session, a particular start time, and identifies the duration of the video included therein. In one embodiment of the present invention, each session can include different activities. For example, a trip to a destination in Berlin, or following a specific itinerary on a particular day (session) may involve a bike ride through the city (first activity) and a walk through a park (second activity). Thus, as shown in FIG. 9, the identifier 802 may include both a session identifier 902, uniquely identifying the session via a globally unique identifier (GUID), and an activity identifier 904, uniquely identifying the activity via a globally unique identifier (GUID), where the session/activity relationship is that of a parent/child.

In one embodiment of the present invention, as shown in FIG. 10, the biometric data 1000 is stored in memory and linked to the identifier 802 and a sample rate “m” 1104. This allows the biometric data to be linked to video data upon playback. For example, if identifier 802 is one, start time 804 is 1:00 PM, video duration is one minute, and the sample rate 1104 is 30 spm, then the playing of the video at 2:00 PM would result in the first biometric value (biometric (1)) being displayed (e.g., below the video, over the video, etc.) at 2:00 PM, the second biometric value (biometric (2)) being displayed (e.g., below the video, over the video, etc.) two seconds later, and so on until the video ends at 2:01 PM. While self-realization data can be stored like biometric data (e.g., linked to a sample rate), if such data is only received periodically, it may be more advantageous to store this data 1100 as shown in FIG. 11, i.e., linked to the identifier 802 and a time-stamp 1104, where “m” is either the time that the self-realization data 1100 was received or an offset between this time and the start time 804 (e.g., ten minutes and four seconds after the start time, etc.).
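The arithmetic in this example (a sample rate of 30 spm, i.e., samples per minute, yielding one value every two seconds from the start time) can be sketched as follows; the function name is an illustrative assumption.

```python
# Hypothetical sketch: compute when each stored biometric value should be
# displayed, given only the session start time and the sample rate.
from datetime import datetime, timedelta

def display_times(start, sample_rate_spm, n_samples):
    """Display time of each biometric value, for a sample rate given in
    samples per minute (spm)."""
    period = timedelta(seconds=60 / sample_rate_spm)
    return [start + i * period for i in range(n_samples)]

start = datetime(2020, 1, 1, 13, 0)  # session starts at 1:00 PM
times = display_times(start, sample_rate_spm=30, n_samples=3)
assert times[0] == datetime(2020, 1, 1, 13, 0, 0)
assert times[1] == datetime(2020, 1, 1, 13, 0, 2)  # 30 spm -> every 2 s
assert times[2] == datetime(2020, 1, 1, 13, 0, 4)
```

This is why no per-sample time stamp needs to be stored for biometric data: the identifier, start time, and sample rate reconstruct every display time, whereas periodically received self-realization data needs its own time stamp.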

This can be seen, for example, in FIG. 14, where video data starts at time T, biometric data is sampled every two seconds (30 spm), and self-realization data is received at time T+3 (or three units past the start time). While the video 1402 is playing, a first biometric value 1404 is displayed at time T+1, first self-realization data 1406 is displayed at time T+2, and a second biometric value 1406 is displayed at time T+4. By storing data in this fashion, both video and non-video data can be stored separately from one another and synchronized in real-time, or at the time the video is being played. It should be appreciated that while separate storage of data may be advantageous for devices having minimal memory and/or processing power, the client platform may be configured to create new video data, or data that includes both video and non-video data displayed synchronously. Such a feature may be advantageous in creating a highlight reel, which can then be shared using social media websites, such as Facebook™ or Youtube™, and played using standard playback software, such as Quicktime™. As discussed in greater detail below, a highlight reel may include various portions (or clips) of video data (e.g., when certain activity takes place, etc.) along with corresponding biometric data.

When sampled data is subsequently displayed, the client platform can be configured to display this data using certain extrapolation techniques. For example, in one embodiment of the present invention, as shown in FIG. 12, where a first biometric value 1202 is displayed at T+1, a second biometric value 1204 is displayed at T+2, and a third biometric value 1206 is displayed at T+3, biometric data can be displayed at non-sampled times using known extrapolation techniques, including linear and non-linear interpolation and all other extrapolation and/or interpolation techniques generally known to those skilled in the art. In another embodiment of the present invention, as shown in FIG. 13, the first biometric value 1202 remains on the display until the second biometric value 1204 is displayed, the second biometric value 1204 remains on the display until the third biometric value 1206 is displayed, and so on.
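The two display strategies just described (interpolating between samples as in FIG. 12, versus holding the last value on screen until the next sample as in FIG. 13) can be sketched together; the function name and call signature are illustrative assumptions.

```python
# Hypothetical sketch of the two display strategies for sampled biometric
# data: hold-last-value (FIG. 13 style) vs. linear interpolation (FIG. 12
# style, standing in for any extrapolation technique known in the art).

def value_at(t, sample_times, values, mode="hold"):
    """Biometric value to display at time t, given samples at sample_times."""
    for i in range(len(sample_times) - 1):
        t0, t1 = sample_times[i], sample_times[i + 1]
        if t0 <= t <= t1:
            if mode == "hold":
                return values[i]  # keep last sample on screen
            frac = (t - t0) / (t1 - t0)
            return values[i] + frac * (values[i + 1] - values[i])
    return values[-1]  # past the last sample

times, vals = [1, 2, 3], [60.0, 70.0, 65.0]
assert value_at(1.5, times, vals, mode="hold") == 60.0
assert value_at(1.5, times, vals, mode="linear") == 65.0  # halfway 60 -> 70
```

Hold-last-value is the cheaper option for low-power devices; interpolation gives a smoother read-out between samples at the cost of a little arithmetic per frame.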

With respect to linking data to an identifier, which may be linked to other data (e.g., start time, sample rate, etc.), if the data is received in real-time, the data can be linked to the identifier(s) for the current session (and/or activity). However, when data is received after the fact (e.g., after a session has ended), there are several ways in which the data can be linked to a particular session and/or activity (or the identifier(s) associated therewith). The data can be manually linked (e.g., by the user) or automatically linked via the application. With respect to the latter, this can be accomplished, for example, by comparing the duration of the received data (e.g., the video length) with the duration of the session and/or activity, by assuming that the received data is related to the most recent session and/or activity, or by analyzing data included within the received data. For example, in one embodiment, data included with the received data (e.g., metadata) may identify a time and/or location associated with the data, which can then be used to link the received data to the session and/or activity. In another embodiment, the computing device could display or play data (e.g., a barcode, such as a QR code, or a sound, such as a repeating sequence of notes) that identifies the session and/or activity. An external video/audio recorder could record the identifying data (as displayed or played by the computing device) along with (e.g., before, after, or during) the user and/or his/her surroundings. The application could then search the video/audio data for the identifying data, and use this data to link the video/audio data to a session and/or activity. The identifying portion of the video/audio data could then be deleted by the application if desired.
In an alternate embodiment, a barcode (e.g., a QR code) could be printed on a physical device (e.g., a medical testing module, which may allow communication of medical data over a network (e.g., via a smart phone)) and used (as previously described) to synchronize video of the user using the device to data provided by the device. In the case of a medical testing module, the barcode printed on the module could be used to synchronize video of the testing to the test result provided by the module. In yet another embodiment, both the computing device and the external video/audio recorder are used to record video and/or audio of the user (e.g., the user stating “begin Berlin biking session,” etc.) and to use the user-provided data to link the video/audio data to a session and/or activity. For example, the computing device may be configured to link the user-provided data with a particular session and/or activity (e.g., one that is started, one that is about to start, one that just ended, etc.), and to use the user-provided data in the video/audio data to link the video/audio data to the particular session and/or activity.
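One of the automatic linking strategies mentioned above (matching received data to a session by comparing times and durations) could look like the following sketch; the overlap heuristic and all field names (`start`, `duration_s`, `id`) are assumptions for illustration.

```python
# Hypothetical sketch: link an after-the-fact video to the session whose
# recorded window best overlaps it. Times are in arbitrary epoch seconds.

def link_to_session(video_start, video_duration_s, sessions):
    """Return the id of the session best overlapping the received video,
    or None if nothing overlaps."""
    def overlap(s):
        lo = max(s["start"], video_start)
        hi = min(s["start"] + s["duration_s"], video_start + video_duration_s)
        return max(0, hi - lo)  # overlapping seconds, 0 if disjoint
    best = max(sessions, key=overlap)
    return best["id"] if overlap(best) > 0 else None

sessions = [
    {"id": "berlin-bike", "start": 1000, "duration_s": 600},
    {"id": "park-walk",   "start": 2000, "duration_s": 900},
]
assert link_to_session(1100, 300, sessions) == "berlin-bike"
assert link_to_session(2500, 200, sessions) == "park-walk"
assert link_to_session(5000, 100, sessions) is None
```

In practice this heuristic would be one fallback among those the text lists: embedded identifying data (a QR code or audio tone captured in the recording itself) gives a more certain link when it is available.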

In one embodiment of the present invention, the client platform (or application) is configured to operate on a smart phone or a tablet. The platform (either alone or together with software operating on the host device) may be configured to create a session, receive video and non-video data during the session, and playback video data together (synchronized) with non-video data. The platform may also allow a user to search for a session, search for certain video and/or non-video events, and/or create a highlight reel. FIGS. 15-29 show exemplary screen shots of such a platform.

For example, FIG. 15 shows an exemplary “sign in” screen 1500, allowing a user to sign into the application and have access to application-related, user-specific data, as stored on the computing device and/or the host computing device. The login may involve a user ID and password unique to the application, the company cloud, or a social service website, such as Facebook™.

Once the user is signed in, the user may be allowed to create a session via an exemplary “create session” screen 1600, as shown in FIG. 16. In creating a session, the user may be allowed to select a camera (e.g., internal to the computing device, external to the computing device (e.g., accessible via the Internet, connected to the computing device via a wired or wireless connection), etc.) that will be providing video data. Once a camera is selected, video data 1602 from the camera may be displayed on the screen. The user may also be allowed to select a biometric device (e.g., internal to the computing device, external to the computing device (e.g., accessible via the Internet, connected to the computing device via a wired or wireless connection), etc.) that will be providing biometric data. Once a biometric device is selected, biometric data 1604 from the biometric device may be displayed on the screen. The user can then start the session by clicking the “start session” button 1608. While the selection process is preferably performed before the session is started, the user may defer selection of the camera and/or biometric device until after the session is over. This allows the application to receive data that is not available in real-time, or is being provided by a device that is not yet connected to the computing device (e.g., an external camera that will be plugged into the computing device once the session is over).

It should be appreciated that in a preferred embodiment of the present invention, clicking the “start session” button 1608 not only starts a timer 1606 that indicates a current length of the session, but it triggers a start time that is stored in memory and linked to a globally unique identifier (GUID) for the session. By linking the video and biometric data to the GUID, and linking the GUID to the start time, the video and biometric data is also (by definition) linked to the start time. Other data, such as sample rate, can also be linked to the biometric data, either by linking the data to the biometric data, or linking the data to the GUID, which is in turn linked to the biometric data.

Either before the session is started, or after the session is over, the user may be allowed to enter a session name via an exemplary “session name” screen 1700, as shown in FIG. 17. Similarly, the user may also be allowed to enter a session description via an exemplary “session description” screen 1800, as shown in FIG. 18.

FIG. 19 shows an exemplary “session started” screen 1900, which is a screen that the user might see while the session is running. On this screen, the user may see the video data 1902 (if provided in real-time), the biometric data 1904 (if provided in real-time), and the current running time of the session 1906. If the user wishes to pause the session, the user can press the “pause session” button 1908, or if the user wishes to stop the session, the user can press the “stop session” button (not shown). By pressing the “stop session” button (not shown), the session is ended, and a stop time is stored in memory and linked to the session GUID. Alternatively, by pressing the “pause session” button 1908, a pause time (first pause time) is stored in memory and linked to the session GUID. Once paused, the session can then be resumed (e.g., by pressing the “resume session” button, not shown), which will result in a resume time (first resume time) being stored in memory and linked to the session GUID. Regardless of whether a session is started and stopped (i.e., resulting in a single continuous video), or started, paused (any number of times), resumed (any number of times), and stopped (i.e., resulting in a plurality of video clips), for each start/pause time stored in memory, there should be a corresponding stop/resume time stored in memory.
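The pairing invariant stated at the end of this paragraph (every start/resume time must have a corresponding pause/stop time) is straightforward to model. The `SessionTimer` class below is a hypothetical sketch of that bookkeeping, not code from the patent:

```python
class SessionTimer:
    """Minimal sketch of the start/pause/resume/stop bookkeeping the
    text describes: each start or resume time is eventually paired
    with a pause or stop time, yielding a list of closed intervals."""

    def __init__(self):
        self.intervals = []   # list of (begin, end) pairs
        self._open = None     # begin time of the currently running interval

    def start(self, t: float):
        self._open = t

    def pause(self, t: float):
        self.intervals.append((self._open, t))
        self._open = None

    # resuming opens a new interval; stopping closes the current one
    resume = start
    stop = pause

    def active_seconds(self) -> float:
        # valid only once every begin has a matching end,
        # per the invariant in the text
        assert self._open is None, "unpaired start/resume time"
        return sum(end - begin for begin, end in self.intervals)
```

A session started at t=0, paused at 10, resumed at 20, and stopped at 35 yields two clips and 25 seconds of active time.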

Once a session has been stopped, it can be reviewed via an exemplary “review session” screen 2000, as shown in FIG. 20. In its simplest form, the review screen may playback video data linked to the session (e.g., either a single continuous video if the session does not include at least one pause/resume, multiple video clips played one after another if the session includes at least one pause/resume, or multiple video clips played together if the multiple video clips are related to one another (e.g., two videos (e.g., from different vantage points) of the user performing a particular activity, a first video of the user performing a particular activity while viewing a second video, such as a training video)). If the user wants to see non-video data displayed along with the video data, the user can press the “show graph options” button 2022. By pressing this button, the user is presented with an exemplary “graph display option” screen 2100, as shown in FIG. 21. Here, the user can select data that he/she would like to see along with the video data, such as biometric data (e.g., heart rate, heart rate variance, user speed, etc.), environmental data (e.g., temperature, altitude, GPS, etc.), or self-realization data (e.g., how the user felt during the session). FIG. 22 shows an exemplary “review session” screen 2000 that includes both video data 2202 and biometric data, which may be shown in graph form 2204 or written form 2206. If more than one individual can be seen in the video, the application may be configured to show biometric data on each individual, either at one time, or as selected by the user (e.g., allowing the user to view biometric data on a first individual by selecting the first individual, allowing the user to view biometric data on a second individual by selecting the second individual, etc.).
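Because the video and biometric streams share the session start time, overlaying a biometric value on the playback reduces to an index lookup: the playback offset times the sample rate gives the sample to display. A minimal sketch, with an assumed function name:

```python
def sample_at(playback_seconds: float, samples: list, sample_rate_hz: float):
    """Return the biometric value to overlay at a given playback position.

    Video and samples are synchronized via the shared session start time,
    so the sample index is simply offset * sample_rate (clamped to the
    last available sample)."""
    i = int(playback_seconds * sample_rate_hz)
    return samples[min(i, len(samples) - 1)]
```

For multiple individuals in frame, the same lookup would simply be run against each selected individual's sample stream.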

FIG. 23 shows an exemplary “map” screen 2300, which may be used to show GPS data to the user. Alternatively, GPS data can be presented together with the video data (e.g., below the video data, over the video data, etc.). An exemplary “summary” screen 2400 of the session may also be presented to the user (see FIG. 24), displaying session information such as session name, session description, various metrics, etc.

By storing video and non-video data separately, the data can easily be searched. For example, FIG. 25 shows an exemplary “biometric search” screen 2500, where a user can search for a particular biometric value or range (i.e., a biometric event). By way of example, the user may want to jump to a point in the session where their heart rate is between 95 and 105 beats-per-minute (bpm). FIG. 26 shows an exemplary “first result” screen 2600 where the user’s heart rate is at 100.46 bpm twenty minutes and forty-two seconds into the session (see, e.g., 2608). FIG. 27 shows an exemplary “second result” screen 2700 where the user’s heart rate is at 100.48 bpm twenty-three minutes and forty-eight seconds into the session (see, e.g., 2708). It should be appreciated that other events can be searched for in a session, including video events and self-realization events.
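The range search described here (jump to points where heart rate is 95–105 bpm) amounts to scanning the sample stream and converting matching indices back to session time via the sample rate. A hedged sketch with an assumed function name:

```python
def find_biometric_events(samples: list, sample_rate_hz: float,
                          low: float, high: float):
    """Return (seconds_into_session, value) pairs for every sample in
    [low, high] — e.g. the 95-105 bpm search described in the text.
    Seeking the video to each returned offset gives the 'jump to
    result' behavior."""
    return [(i / sample_rate_hz, v)
            for i, v in enumerate(samples)
            if low <= v <= high]
```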

Not only can data within a session be searched, but so too can data from multiple sessions. For example, FIG. 28 shows an exemplary “session search” screen 2800, where a user can enter particular search criteria, including session date, session length, biometric events, video event, self-realization event, etc. FIG. 29 shows an exemplary “list” screen 2900, showing sessions that meet the entered criteria.
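Cross-session search on the criteria named here (date, length, biometric events, etc.) is a filter over session summaries. The field names and function below are assumptions for illustration only:

```python
def search_sessions(sessions: list, min_length: float = None,
                    date: str = None, min_peak_hr: float = None) -> list:
    """Filter session summary records on the criteria named in the text;
    criteria left as None are ignored."""
    results = []
    for s in sessions:
        if min_length is not None and s["length"] < min_length:
            continue
        if date is not None and s["date"] != date:
            continue
        if min_peak_hr is not None and s["peak_hr"] < min_peak_hr:
            continue
        results.append(s)
    return results
```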

The foregoing description of a system and method for using, processing, and displaying biometric data, or a resultant thereof, has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed, and many modifications and variations are possible in light of the above teachings. Those skilled in the art will appreciate that there are a number of ways to implement the foregoing features, and that the present invention is not limited to any particular way of implementing these features. The invention is solely defined by the following claims.

More revelations soon!


We are funded solely by our most generous readers and we want to keep it this way. Help SILVIEW.media deliver more, better, faster: please donate here, anything helps. Thank you!

! Articles may always be subject to later editing as a way of perfecting them