
August 19, 2020

A Reemerging Threat in the Age of COVID-19 & Remote Healthcare

Hacking With Worldwide Implications

Written by Carl J. Byron, CCS, CHA, CIFHA, CPC, ICDT-CM, OHCC, ATC (Ret)


As HIPAA Privacy & Security Officers struggle to secure data amid the growth of our remote workforce, an important potential hack could be looming for unaware organizations.  Healthcare IT departments secure our computers and train us in the rules governing what can be accessed in an effort to enforce HIPAA security. Nevertheless, are you unintentionally posing a risk to the security of your company’s data?

This article is for all Covered Entities and Business Associates with a remote workforce, for providers accessing patient data remotely on mobile devices, and for smaller organizations new to remote work due to COVID-19.

As healthcare technology advances, so do the skills of hackers!  People are trying to find ways to stay healthy during the pandemic.  What is one likely hack that may escape our HIPAA Security Officer’s attention?  The wearing of activity trackers (Fitbits, smartwatches, and other wireless-enabled wearable technology devices). According to USA Today, smartwatch sales soared during the start of the coronavirus crisis, with Apple selling more than three times as many devices as its competitors (https://www.usatoday.com/story/tech/2020/05/07/smartwatch-shipments-rise-20-despite-pandemic-sales-led-apple/3086904001/).

Think I watch too many crime dramas?

Unfortunately, hackers are evolving at pace with technology, and even the most innocuous personal technology can be, and has been, hacked. This includes the immensely popular Fitbit. As with the personal cell phone, most employees believe these devices are harmless and will do no damage when plugged into a work computer. In this article I will cover a threat facing this immensely popular personal health monitoring device. But be aware: it applies to any wearable technology device.

The Fitbit continues to show how personal devices can be hacked, how proprietary medical information can be extracted, and how malicious software can be remotely downloaded. The reality is far more concerning than any crime drama.

In a November 14, 2018 article, Jennifer Falsetti wrote about how the Fitbit can be used, in this case by law enforcement!  That year, Iowa jogger Mollie Tibbetts went out for a jog and never returned. Law enforcement searched for her desperately, and Mollie was eventually found not far from the route she frequently ran. Among the tools investigators used to search for Mollie was the Fitbit tracker she was wearing at the time. Although cell phones had been used as location devices for years, the Fitbit added an element that gained rapid national attention. When interviewed by Falsetti, Midwest City (Oklahoma) Police Chief Brandon Clabes had this to say: "These things have grown in the past few years where it's something we use quite often in our investigations. Both criminally and missing persons, these Fitbits have tracking devices inside which really is a safety factor for the individual that wears them because it tells us exactly where you are, and what time, and what place and it gives us the information to determine, especially on a missing persons case trying to locate you like the Mollie Tibbetts case. It also helps us investigate crimes because people will tell us one thing but we can verify through GPS, through their Fitbit," said Clabes.

So how does this apply to HIPAA security?  Let me explain. The tracking capacity of the Fitbit answers many important and valuable questions for healthcare hackers, using the same technology that helped find Mollie Tibbetts: When do they see their doctor? Where do they see their doctor? When are they home? When are they not home? What pharmacy do they use? What are they doing? Who else is with them while they're not at home?

So, hackers can track information through your personal health device, but they can also feed malicious software into it; when the device is plugged into any computer, the malware will be downloaded.

The wearable device communicates directly with your phone. It's insecure, and if you're within range of the device (or multiple devices like it) and you know what you're doing, you can start surveying and see who is out there. Although the range of this info-grab is only 10-30 feet or so, think of how many times you are in an area with multiple people within those parameters. Even during these initial reopening phases of COVID-19, in a given day there can be many. How many of them do you know? And are those people with their faces buried in their phones as innocent as they seem?
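To make that "surveying" concrete: Bluetooth Low Energy devices broadcast advertising packets that anyone in range can receive and decode. The sketch below is illustrative only, using a hypothetical, hand-built payload rather than a real capture; it parses the standard BLE advertising data structures (each a length byte, a type byte, then data) to pull out a nearby device's name.

```python
def parse_ble_adv(payload: bytes) -> dict:
    """Parse BLE advertising AD structures: [length][type][data...] repeated."""
    fields = {}
    i = 0
    while i < len(payload):
        length = payload[i]
        if length == 0:          # zero-length structure terminates the packet
            break
        ad_type = payload[i + 1]
        data = payload[i + 2 : i + 1 + length]
        if ad_type == 0x09:      # 0x09 = Complete Local Name
            fields["name"] = data.decode("utf-8", errors="replace")
        elif ad_type == 0x01:    # 0x01 = Flags
            fields["flags"] = data[0]
        i += 1 + length
    return fields

# Hypothetical advertisement from a nearby wearable:
# a Flags structure followed by the Complete Local Name "Flex".
adv = bytes([0x02, 0x01, 0x06, 0x05, 0x09]) + b"Flex"
print(parse_ble_adv(adv))  # {'flags': 6, 'name': 'Flex'}
```

An eavesdropper running a scanner does nothing more exotic than this loop over every packet in range, which is why a crowded coffee shop yields such a rich survey.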

A real-life example of the ability to transfer malicious data was clearly displayed a few years back. Darlene Storm, a regular writer for Computerworld magazine, wrote in her October 26, 2015 "Security Is Sexy" column about an astounding piece of research that caused concern in the security community and drew an aggressive counterattack from Fitbit. The event took place in 2015:

  • At the Hack.Lu 2015 security conference in Luxembourg, Fortinet researcher Axelle Apvrille presented a proof-of-concept vulnerability in Fitbit Flex fitness trackers; an attacker in close range needed only 10 seconds to wirelessly inject malicious code into a Fitbit Flex wristband via a Bluetooth connection.
  • The foreign code could persist and later infect a PC or any other device to which the Fitbit Flex connects. Ten seconds. The portion that really arrested the attention of security-conscious audiences was Apvrille's demonstration of infecting the Flex via Bluetooth. Granted, the maximum foreign-code payload on the Fitbit is only 17 bytes, but she pointed out that the instruction sequence capable of crashing a Pentium in 1997 (the "F00F bug") was a mere four bytes, and the mini DOS virus was only 13 bytes.

So what if only the Bluetooth link is infected? The Fitbit is remote, and although it is hooked to the phone via Bluetooth, isn't the threat contained? No. One of the scariest extensions of Apvrille's results was that, after infection over Bluetooth, the computer gets infected too the moment the Fitbit is plugged into it. At this point many readers will assume their information is not worth hacking. In rare instances that may be true, but consider: what if you have a medical condition and you are reporting data to your doctor based on your Fitbit? To keep it current, you will have to plug it into your computer regularly. And maybe your information isn't worth stealing or selling, but what about the President of the United States? At the time of Storm's article, President Barack Obama had worn a Fitbit Surge for eight months (and he was noted to use his personal phone often). Fortinet, Apvrille's employer, described three steps that would take the problem from research to active attack, two of which were conclusively proven:

“There are three steps to seeing this go from ‘proof of concept’ to a problem in the wild:

  1. Upload malicious code to any Fitbit wristband in close range.
  2. Automatically transmit the code from the Fitbit wristband to any computer that connects to it (via the Fitbit dongle).
  3. Have the code be executed by the connected computer.

*Fortinet researchers demonstrated and verified steps 1 and 2. Step 3 would rely on exploiting a vulnerability in the computer to which the Fitbit wristband was synced, which was out of the scope of our research. To date, we are not aware of an exploit that would enable this third step, nor did we actively look for one. However, we would caution against working under the assumption there is no such exploit possible, now or in the future.”

Despite the evidence, Fitbit vehemently denied there was any vulnerability in its product, and my research to date has not located anything indicating Fitbit has changed its position. In addition, hackers have attacked Fitbit itself. Cybercriminals used email addresses and passwords leaked from third-party sites to log into the accounts of Fitbit wearable device users. Fitbit confirmed that once inside the accounts, the attackers changed details and attempted to defraud the company by ordering replacement items under the users' warranties. The attackers also reportedly had access to customer data, including GPS history showing where a person regularly runs or cycles, as well as data showing what time a person usually goes to sleep.

A January 2018 article in Hackaday reported how Strava, a well-known fitness data service, handled Fitbit users' data, in part by monitoring GPS data. Even though Strava had a sound privacy policy, the heatmap it built from more than 6 trillion GPS data points, while a fascinating gallery of visualizations, had a downside. The weekend the article appeared, an announcement on Twitter reported that Strava's heatmap also managed to highlight exercise activity by military and intelligence personnel around the world, including some suspected but unannounced facilities. Additionally, some mapped paths implied patrol and supply routes, knowledge security officers would prefer not be shared with the entire world. What this glaringly brought to light was that Strava's redacted data sharing did not identify any individuals, but it did not (and could not) do the same for groups of individuals, such as active-duty military personnel whose exercise regimens are clearly defined on these heatmaps.
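Why does stripping names not protect a group? A heatmap is just a count of visits per map cell, and a shared route lights up no matter whose data built it. A minimal sketch (hypothetical coordinates, not Strava's actual pipeline) of binning anonymous GPS points into grid cells:

```python
from collections import Counter

def heatmap(points, cell=0.01):
    """Bin (lat, lon) points into integer grid cells: counts only, no user IDs."""
    return Counter((round(lat / cell), round(lon / cell)) for lat, lon in points)

# Hypothetical runs by several different (anonymous) users along one patrol path:
runs = [(35.00, 45.00), (35.00, 45.01), (35.00, 45.00),
        (35.00, 45.01), (35.00, 45.00)]
hot = heatmap(runs)
# The most-visited cell stands out even though no individual is identified.
print(hot.most_common(1))  # [((3500, 4500), 3)]
```

Every name has been removed, yet the hottest cell still traces the group's route, which is exactly the failure mode the Twitter announcement exposed.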

The biggest contributor to this situation (besides wearing a tracking device at all) is that data sharing is enabled by default and must be opted out of. This finding and report stand in a class of their own. That Fitbits can be hacked has been recognized for some time, but to date the hacks had been localized. That this kind of data could be discovered globally caused enormous concern throughout every industry that demands privacy and confidentiality and seeks to safely compartmentalize proprietary data. I watched local news stations report on this event; the article can be found at https://hackaday.com/tag/fitbit/.
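The opt-out default is a design choice, not an inevitability. A hypothetical settings object (not Strava's or Fitbit's real code) makes the difference plain: with the default set to True, every user who never opens the settings screen is sharing.

```python
from dataclasses import dataclass

@dataclass
class SharingSettings:
    # Opt-out design, as reported in the Strava story: sharing is ON
    # unless the user explicitly turns it off. A privacy-by-default
    # design would simply flip this default to False.
    share_activity_data: bool = True

    def opt_out(self):
        self.share_activity_data = False

settings = SharingSettings()          # user never touched the setting
print(settings.share_activity_data)   # True: data flows by default
settings.opt_out()
print(settings.share_activity_data)   # False
```

One boolean default, multiplied across millions of accounts, is the gap between a privacy policy on paper and privacy in practice.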

So, as a loyal workforce member, think about your healthcare organization; think about the daily, weekly, and monthly volume and flow of electronic data. Now think of an organization like the Centers for Medicare and Medicaid Services (CMS), Blue Cross and Blue Shield, or even your state's Workers' Compensation groups. When an experienced hacker trawls the Internet for healthcare data and information, the "hook" is bound to find its mark somewhere, and a breach is inevitable. This is where your IT security teams truly start their work.

Security teams, as a rule, have far too many systems and too large a technology landscape to protect from hacks by hand. They use automated systems such as IPS (Intrusion Prevention Systems) and IDS (Intrusion Detection Systems) to limit the damage. Because attacks and probes happen at very high volumes, manual reaction and protection cannot keep up; human intervention typically comes after a threat has been identified. Determining the threat, the point of entry, and the potential or actual damage, performing a threat risk assessment, closing the vulnerability so it cannot be reused, and building effective firewalls are the tasks of the human side of the security equation.
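At its simplest, the automated half of that equation is pattern matching at volume: an IDS compares each event against a library of known attack signatures and flags the matches for human triage. A toy sketch, with made-up signatures and log lines, of that core loop:

```python
import re

# Hypothetical signatures an IDS might match against a web server log stream.
SIGNATURES = {
    "sql_injection": re.compile(r"union\s+select", re.IGNORECASE),
    "path_traversal": re.compile(r"\.\./\.\./"),
}

def detect(log_lines):
    """Return (line_number, threat_name) pairs; analysts investigate the rest."""
    hits = []
    for n, line in enumerate(log_lines, 1):
        for name, pattern in SIGNATURES.items():
            if pattern.search(line):
                hits.append((n, name))
    return hits

logs = [
    "GET /index.html 200",
    "GET /search?q=1 UNION SELECT password FROM users 500",
    "GET /../../etc/passwd 403",
]
print(detect(logs))  # [(2, 'sql_injection'), (3, 'path_traversal')]
```

Real products are enormously more sophisticated, but the division of labor is the same: machines do the high-volume matching, and people do the investigation and remediation afterward.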

The security team will spend hours poring over documents, programs, systems, protocols, laws and, especially in healthcare, security-breach reporting requirements. See why TV hates reality now? In reality, both sides spend hours on very labor-intensive work; the fastest part is done before and after the fact, automatically.

There are instances where hackers and security teams do battle in real time, and they are fascinating even if you do not understand everything the participants are doing. They are called Capture the Flag (CTF) games. Contrary to what television shows would have you believe, most attacks and defenses do not play out as live duels; where these real-time battles do happen, they are closely monitored and security is tight.

CTFs are conducted in one of two ways. In the first, a Red Team contest, the hackers are given systems with no active defenders; the Red Team works against a set of protections fixed before the contest. The more familiar format pits the Red Team (attackers) against a Blue Team (defenders). As one would anticipate, the Red Team scores points for successful hacks and penetrations, while the Blue Team scores points for successfully deflecting attacks and for securing or closing discovered vulnerabilities.
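The scoring in that second format can be sketched in a few lines. The event names and point values below are invented for illustration; real CTFs use their own scoreboards and rules.

```python
from collections import defaultdict

def score_ctf(events):
    """Tally points: Red scores on successful hacks, Blue on deflections and patches."""
    point_table = {
        "hack": ("red", 10),      # successful penetration
        "deflect": ("blue", 5),   # attack detected and blocked
        "patch": ("blue", 8),     # discovered vulnerability closed
    }
    board = defaultdict(int)
    for event in events:
        team, pts = point_table[event]
        board[team] += pts
    return dict(board)

events = ["hack", "deflect", "hack", "patch"]
print(score_ctf(events))  # {'red': 20, 'blue': 13}
```

The asymmetry in the table mirrors the contest itself: the Red Team only has to succeed occasionally, while the Blue Team earns its points one deflection and one patch at a time.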

Why add this last story? It may be interesting, but what does it have to do with hacking Fitbits? It demonstrates that both hackers and security forces are constantly evolving, and that the technology is there and has been successfully exploited for some time. When that hacker sits near you in the coffee shop, many person-hours have already gone into probing the Fitbit or smartwatch: its defaults, its capabilities, and its vulnerabilities. Although the articles cited here date back to 2015, be aware that these problems are still considered real threats today, and strict security precautions are the norm at most organizations.


Never plug your phone or personal device into your work computer to recharge.  Notify your organization's HIPAA Officer if you wear a Fitbit, smartwatch, or other personal device that could pose a threat to the security of your organization's systems, even if you are working remotely from home on your own computer over a VPN into the company's systems.

Follow your organization's HIPAA policies and procedures at work, at home, or anywhere you work remotely.

Still not convinced that hacking is a problem?  I encourage you, I dare you, to visit the U.S. Department of Health and Human Services Office for Civil Rights "Wall of Shame" at https://ocrportal.hhs.gov/ocr/breach/breach_report.jsf and look just at the cases under investigation: the volume listed under the cause "Hacking/IT Incident" would be difficult to tally, there are so many.

Look at the health plans, the small and large providers, and the Business Associates listed on the Wall of Shame due to large breaches under investigation.

The problem is current, it is relevant, and with the ever-increasing popularity of personal electronic devices, coupled with the great number of healthcare employees now working, basically unsupervised, from home, the vulnerabilities of these devices will continue to rise and be exploited.



