In Healthcare, High Tech Gets a Taste of Its Own Disruption

Image: Kentoh/Shutterstock

Over the last five years, nearly all of the major information technology companies have launched some sort of healthcare initiative. The COVID pandemic, with its surging demand for telemedicine and rapid exchange of medical data, only intensified that trend.

But some tech giants have learned the hard way that healthcare is very different from other business sectors.

High-profile failures like the demise of Haven — a nonprofit healthcare organization formed by Amazon in partnership with JPMorgan Chase and Berkshire Hathaway — are cautionary tales, showing that the creative disruptions that proved so successful in retail, transportation, and financial services can crash and burn in the medical sphere.

Successful or not, Big Tech’s healthcare ventures raise profound ethical questions about the uses of personal medical data and the limits of privacy in the digital age.

Haven & Hell

Haven was founded on the lofty goals of improving healthcare services and lowering medical costs for employees of the three founding companies, an aggregate workforce of roughly one million people.

The venture started out strong in 2018, but proved to be short-lived. By early 2021, barely three years after its launch, Haven posted a vague statement on its now-defunct website announcing plans to disband.

Many healthcare analysts have opined on why Haven hit the rocks. CNBC reported that despite the outward appearance of collaboration, “each of the three founding companies executed their own projects separately with their own employees, obviating the need for the joint venture to begin with.”

Other speculations cite sluggish progress, high executive turnover, lack of “bold ideas,” absence of “strategic clarity,” and the company’s status as a not-for-profit rather than a profit-driven venture.

But it might also be that Haven’s leaders simply overestimated their own strengths and underestimated the depth of healthcare’s quicksand. Hiring a few medical superstars like surgeon and author Atul Gawande, MD, and former ZocDoc chief technology officer Serkan Kutan was not enough to transform an online retailer, an investment bank, and a corporate holding company into a healthcare business.

“A real possibility is that the entrenched complexity of the American healthcare business model proved too daunting to change. As large as these parent organizations are, they still don’t possess the economies of scale to tip the balance when it comes to healthcare,” said Lyndean Brick, president of healthcare consulting firm Advis, in an article published by Fierce Healthcare.

Moving forward, Amazon, Berkshire Hathaway, and JPMorgan Chase have stated — in perfect corporate-speak — that they will “leverage the insights” gained from Haven, and will “collaborate informally to design programs tailored to address the specific needs of their own employee populations.”

Healthcare on Demand

Amazon seems undaunted by the Haven debacle; the company has forged on with several independent healthcare ventures.

Earlier this year, it confirmed plans to broaden Amazon Care — a virtual healthcare service previously available only to Amazon employees — to other big employers.

Amazon Care was piloted in 2019 as a benefit for Amazon workers and their families in the company’s home state of Washington. This summer, it extended the virtual service to all of its US employees.

Through a smartphone app, Amazon Care offers employees “immediate access to high-quality medical care and advice,” including on-demand connections to primary and urgent care services, 24 hours a day, 365 days a year.

Aspects of Amazon Care resemble other well-established telehealth services. To begin using the app, patients answer questions about their physical and mental health. Users can then schedule medical visits and chat with nurses or doctors via video or messaging features. Amazon claims employees are typically able to connect with medical professionals in less than 60 seconds.

Image: Mundissima/Shutterstock

Along with virtual care, the system also provides options for in-person services. Patients can request home or office visits for COVID-19 and flu testing, vaccinations, illness and injury treatment, preventive care, routine blood draws, sexual health services, prescription requests, refills, deliveries, and more.

Amazon ambitiously aims to bring its virtual and in-home healthcare platforms to at least 20 major US cities in 2021 and 2022. The company also plans to expand Amazon Care beyond its own employee base by “supplying” it as a workplace benefit to other companies.

The paradox is that as Amazon strives to be a healthcare leader, it simultaneously faces accusations of unsafe working conditions and employee abuse.

During the “heat dome” weather event last summer, indoor temperatures at an Amazon warehouse in Washington state allegedly reached 90 degrees. Despite the extreme heat and lack of adequate climate control, workers were asked to move as quickly as possible to “boost productivity,” the Seattle Times reported. This follows many prior charges from Amazon employees who’ve described unhealthy working conditions while being treated “like robots.”

Beyond Amazon Care

Amazon Care is not the tech titan’s only foray into healthcare. Its most recent venture is a direct-to-consumer at-home COVID-19 test kit.

Last spring, the Food and Drug Administration issued an Emergency Use Authorization for Amazon’s real-time RT-PCR test for SARS-CoV-2. Customers can now order these Amazon COVID-19 test kits through the company’s diagnostics site, AmazonDx.com, using the same login credentials they use for its online shopping site. Amazon has hinted at further moves into diagnostics, including tests for respiratory infections and sexually transmitted diseases, Business Insider recently reported.

Last June, Amazon Web Services (AWS) unveiled the AWS Healthcare Accelerator, a four-week “technical, business, and mentorship accelerator opportunity” for fledgling digital health companies.

According to a blog post from Sandy Carter, AWS’ vice president of worldwide public sector partners and programs, the Accelerator is specifically focused on startups seeking to “accelerate growth in the cloud.” The program is now accepting applications from up-and-comers looking to leverage Amazon’s technical and commercial expertise “to help solve the biggest challenges in the healthcare industry.”

The Accelerator will focus on “solutions” — a favorite tech industry term — such as remote patient monitoring, data analytics, patient engagement, voice recognition, and virtual care. By using AWS, Amazon says “organizations can increase the pace of innovation, unlock the potential of data, and personalize the healthcare journey.”

Data Fuels Deep Learning

But what exactly is the “potential” of data? And who does it really benefit?

Among the more revolutionary developments in IT are data-driven “smart” machines, which use artificial intelligence to “think” and act like humans.

One branch of AI, called deep learning, relies on algorithms that mimic the neural networks of the human brain. Like the brain, AI-based machines continually receive and process large amounts of data, and teach themselves new tasks based on what they glean.

People across the globe generate huge amounts of data every day, all of which — when captured — fuels the deep learning process. The more information the algorithms take in, the better they become at predicting outcomes — including things like human behaviors, disease progression, and treatment responses.
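The “more data, better predictions” dynamic is easy to demonstrate in miniature. The sketch below is a deliberately simplified toy, not any company’s actual algorithm: it trains the same simple classifier on a small and a large synthetic dataset, and the version that has seen more examples classifies held-out cases more accurately.

```python
import random
from statistics import mean

random.seed(0)  # fixed seed so the illustration is reproducible

def make_data(n):
    # Synthetic records: two features in [0, 1]; the (hypothetical)
    # outcome is positive when the features jointly cross a threshold.
    data = []
    for _ in range(n):
        x0, x1 = random.random(), random.random()
        data.append(((x0, x1), 1 if x0 + x1 > 1.0 else 0))
    return data

def train(data):
    # Nearest-centroid "learning": remember the average example per class.
    def centroid(pts, fallback):
        return (mean(p[0] for p in pts), mean(p[1] for p in pts)) if pts else fallback
    c0 = centroid([x for x, y in data if y == 0], (0.25, 0.25))
    c1 = centroid([x for x, y in data if y == 1], (0.75, 0.75))
    return c0, c1

def predict(model, x):
    # Assign each new case to whichever class average it sits closer to.
    c0, c1 = model
    dist = lambda c: (x[0] - c[0]) ** 2 + (x[1] - c[1]) ** 2
    return 0 if dist(c0) < dist(c1) else 1

def accuracy(model, data):
    return sum(predict(model, x) == y for x, y in data) / len(data)

test_set = make_data(1000)
small_model = train(make_data(10))   # learner that has seen little data
large_model = train(make_data(500))  # the same learner, fed far more data
print("Trained on 10 examples:", accuracy(small_model, test_set))
print("Trained on 500 examples:", accuracy(large_model, test_set))
```

Real deep-learning systems replace the centroid rule with millions of learned parameters, but the pattern is the same: the more captured data the algorithm ingests, the sharper its predictions become.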

Image: Ryzhi/Shutterstock

Most of our digital gadgets now rely on machine learning to some extent. Virtual assistants like Amazon’s Alexa or Apple’s Siri use AI to decipher users’ unique speech and language patterns. Driverless AI-guided vehicles learn not only the mechanics of driving, but how to respond spontaneously to unexpected obstacles or occurrences.

Whether or not users realize it, social media platforms like Facebook, Twitter, and Instagram continuously collect and analyze personal behavioral data. Our data is, essentially, a form of payment to access these “free” online services. AI-based programs glean information about our favored activities, habits, aesthetic preferences, socioeconomic status, political inclinations, and our health.

Presently, the US does not have comprehensive consumer data privacy regulations governing what can and cannot be done with data that users enter into online apps. Instead, we have a patchwork of inconsistent state-level regulations.

Amazon, with its diverse healthcare operations, is no doubt amassing astonishing amounts of medical information. So are most of the other high-tech healthcare companies.

Big Promises, Big Problems

Proponents of AI present it as a powerful problem-solver that helps us overcome the limits of our own brains. Because machines can process far larger quantities of information than the human brain can, AI expands our ability to think creatively and develop novel solutions to serious problems.

Tech companies say they use AI to enhance the “user experience.” In healthcare, the rationale is usually “to improve clinical outcomes,” to “reduce complexity,” to “streamline the patient experience” or some similar pastel-toned platitude.

And there’s no question that AI does hold great promise for improved patient care. One of the most exciting medical applications is in the detection of cancer.

An article published last year in Nature describes an AI technology “capable of surpassing human experts in breast cancer prediction” (McKinney, S et al. Nature. 2020; 577: 89–94). In interpreting mammograms, the AI system outperformed human readers, reducing both false-positive and false-negative results.

Researchers have also used artificial neural networks to predict the likelihood of recurrence within 10 years following breast cancer surgery. One group posits that accurate predictions by machine learning algorithms “may improve precision in managing patients” post-surgery and increase the understanding of risk factors for breast cancer recurrence (Lou, S et al. Cancers (Basel). 2020; 12(12): 3817).

But the price for AI-enhanced healthcare is loss of privacy.

Flow of Information

Take, for instance, AI-driven menstrual trackers. The popular menstrual app Flo, used by an estimated 100 million women, claims to offer “the most precise AI-based period and ovulation predictions by tracking 70+ body signals like cramps, discharge, headaches and more.”

Flo users enter a host of health details related to their menstrual cycles and sexual activity, and the app tracks period symptoms and fertility patterns. The company says it helps users determine whether their cycles are “normal” or “irregular.”
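At its simplest, a period prediction is just arithmetic over a user’s logged history. The sketch below is a naive baseline, not Flo’s proprietary model: it averages the gaps between past period start dates to project the next one, using hypothetical dates.

```python
from datetime import date, timedelta
from statistics import mean

def predict_next_period(start_dates):
    # Naive baseline: average the gaps (in days) between logged period
    # start dates, then project that average forward from the last one.
    gaps = [(later - earlier).days
            for earlier, later in zip(start_dates, start_dates[1:])]
    return start_dates[-1] + timedelta(days=round(mean(gaps)))

# Hypothetical logged history of period start dates
history = [date(2021, 6, 1), date(2021, 6, 29), date(2021, 7, 28)]
print(predict_next_period(history))  # projects roughly four weeks ahead
```

Apps like Flo layer machine learning over this kind of baseline, folding in the dozens of additional body signals users log, which is precisely why the data they collect is so rich and so sensitive.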

Similarly, Apple iPhones contain a built-in app whose features include menstrual cycle tracking. Apple says the tracker will “improve predictions for your period and fertility window.”

While menstrual tracking apps might offer benefits like helping couples to either achieve or avoid pregnancy, they also raise significant safety and privacy concerns. In exchange for period-tracking services, patients provide the tech companies — and potentially others — with a wealth of very personal information.

The price for AI-enhanced healthcare is loss of privacy

Researchers have questioned both the reliability of fertility apps and the unregulated nature of the market (Ali, R et al. Reprod Biomed Online. 2021; 42(1): 273-281). One study from 2020 looked at 140 different menstrual apps and concluded that while some are accurate and useful, “many more are of low quality, and users should be wary of relying on their predictions to avoid pregnancy or to maximize chances of conception.”

Some menstrual trackers, including Flo, have significant data privacy flaws, and they have been called out for sharing user data despite promises to keep that information private. In 2019, a Wall Street Journal investigation revealed that the app shared users’ intimate reproductive and sexual health data with Facebook, Google, and other third parties that provided marketing and analytics services for Flo.

According to Consumer Reports, analysis of privacy and data security practices of Flo and four other period trackers showed that users receive no guarantee that their data would not be shared with third parties for marketing or other purposes.

Following a 2020 complaint, the Federal Trade Commission announced last January that it had reached a settlement with Flo. Among other requirements, the settlement mandates that the company obtain users’ consent before sharing their health information.

Lack of Oversight

There are data privacy laws governing specific sectors, including healthcare. The best known is the Health Insurance Portability and Accountability Act of 1996 (HIPAA). Other laws also regulate electronic health records systems and telehealth technologies.

But in the COVID era, many of HIPAA’s protections have been curtailed.

Early in the pandemic, the Department of Health and Human Services (HHS) — the entity charged with enforcing HIPAA — announced a temporary relaxing of regulations with which healthcare practitioners formerly had to comply. HHS saw the massive need for accessible telehealth services, and rightly recognized that in this public health emergency, HIPAA compliance was an obstacle to the speedy expansion of telemedicine.

The agency acknowledged that “some of these technologies, and the manner in which they are used by HIPAA-covered healthcare providers, may not fully comply with the requirements of the HIPAA Rules.” Therefore, HHS chose to “exercise its enforcement discretion” by refraining from “impos[ing] penalties for noncompliance with the regulatory requirements under the HIPAA Rules…in connection with the good faith provision of telehealth during the COVID-19…emergency.”

Many telehealth platforms, remote monitoring tools, and health apps are not really HIPAA-compliant. And nearly two years on, the “temporary” HIPAA suspensions are starting to feel permanent.

Hackers Target Health Data

There’s no question that sensitive medical data gathered by health apps, telemedicine platforms, and EMR systems is vulnerable to both marketing misuse and cyberattacks.

High-profile hacks targeting government agencies and private companies alike regularly make news headlines. Healthcare organizations — from hospitals to health plans to practitioners themselves — can be targets of cybercrime, a threat that has escalated significantly in the past year.

HHS reported a 25% increase in healthcare data breaches in 2020 over 2019, itself a record-breaking year. In 2020, the department recorded the highest total of large health data breaches — 1.76 incidents per day on average — in any year since it began tracking the problem in 2009. Officials have warned medical groups of a rise in hacking attempts during COVID, stemming in part from cybercriminals’ efforts to steal vaccine-related research and other medical data.

Aware of these threats, some politicians are advocating for tighter federal data privacy and security laws. US Representative Suzan DelBene (D-WA) introduced legislation last March that, if passed, would create the country’s first-ever national data privacy standards.

The Information Transparency and Personal Data Control Act seeks to protect individuals’ personal identifying information, including data relating to financial, health, genetic, biometric, geolocation, sexual orientation, citizenship and immigration status, Social Security numbers, and religious beliefs. It also protects all information pertaining to children under age 13. Further, it would strengthen the Federal Trade Commission’s capacity to enforce privacy rights and punish bad actors.

That’s good as far as it goes. But, notably, the bill does not mention either AI or facial recognition technologies, or the ways in which they capture consumer data.

Meanwhile, the use of these technologies — and in some cases, their exploitation — is only accelerating within and beyond the healthcare realm.

AI-driven systems do indeed hold tremendous potential to transform medical care for the better. But these advances do have downsides—both predictable and unexpected. It is naïve to think otherwise.
