The Forescout Cyber Roundup is a weekly blog series highlighting some of the previous week’s cyber headlines and explaining why they matter. Each article includes a closer look at the potential implications of the news or event, predictions about what might happen next and suggestions for all readers, from the C-suite to end users. Articles are ordered by date, not necessarily priority.
- DHS Finds Encryption Vulnerabilities in Medtronic Programmers (December 17, 2018)
- CENTCOM Chief: The Future of Warfare Demands More Cyber Authorities (December 18, 2018)
- Memes Posted to Twitter Have Been Coded to Talk to Malware (December 17, 2018)
- New Cybersecurity Standard for Self-Driving Vehicles (December 19, 2018)
- Microsoft Issues Emergency Update to Fix Critical IE flaw under Active Exploit (December 19, 2018)
Summary: Three versions of the medical device vendor’s programmers fail to encrypt the protected health information or personally identifiable data stored on the device.
Why it matters: The Department of Homeland Security’s Industrial Control Systems Cyber Emergency Response Team (DHS ICS-CERT) recently discovered encryption vulnerabilities in three versions of medical device programmers—portable computer systems used to program and manage cardiac devices in clinics and during implantation. The medical device company was criticized earlier this year for a lax response to a similar vulnerability, and recently disabled pacemaker programmer updates for more than 34,000 programmers worldwide. Medical devices such as these programmers are often designed with a focus on efficiency, convenience and continuity of care, not necessarily security. When it comes to confidentiality, integrity and availability—the CIA triad—one ‘leg’ of the triangle may be compromised in favor of the others. In Financial Services, integrity is typically most important, while in Operational Technology (OT), availability usually takes priority. In this particular example, physicians could access the data from the device during a patient visit in seconds, as opposed to the minutes required by alternative methods such as saving to disk or manual entry—sacrificing confidentiality to improve availability. There have been a number of incredible advances in medical devices in recent years, but advances in healthcare security and the protection of protected health information (PHI) and personally identifiable information (PII) simply haven’t kept pace. We have predicted that major PHI breaches will not only continue, but that attacks will become more personal and creative. In this example, a malicious actor could gain access to the programmers, harvest the data from the device, then potentially gain access to the patient’s Electronic Health Record (EHR) system, siphoning off even more PHI. The U.S. Food and Drug Administration (FDA) recently updated its 2014 cybersecurity guidance for medical device premarket submissions, but as breaches linked to medical device vulnerabilities continue and 30% of online health databases expose patient data, patients may start asking more questions about the security of the devices than about the state of their physical health. Encrypting sensitive patient data is the most obvious way to improve medical device security, but poorly executed encryption is hardly more helpful than none at all. And many could argue that proper encryption is among the basics of cybersecurity best practice. There are many other vulnerabilities that pose a direct threat to human life—not just the compromise of data—such as the latest variation on pacemaker malware.
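To ground the point about encrypting data at rest, here is a deliberately tiny Python sketch of the idea, using a one-time-pad XOR only because it fits in a few lines. This is an illustration, not a recommendation: real devices should use vetted, standards-based ciphers such as AES-GCM, and the toy below ignores the hard problem of key management. The patient record is invented.

```python
import os

def otp_xor(data: bytes, key: bytes) -> bytes:
    """One-time-pad XOR: only sound if the key is random, secret,
    as long as the data, and never reused. XOR is its own inverse,
    so the same function both encrypts and decrypts."""
    if len(key) != len(data):
        raise ValueError("key must match data length")
    return bytes(d ^ k for d, k in zip(data, key))

# Hypothetical PHI record a programmer might otherwise store in the clear.
record = b"Patient: J. Doe, DOB 1954-03-12, device SN 8841"
key = os.urandom(len(record))      # random pad, kept separately from the data
stored = otp_xor(record, key)      # what should sit on the device's storage

recovered = otp_xor(stored, key)   # decrypt during a patient visit
print(recovered == record)         # True
```

Stored without the key, the ciphertext is useless to anyone who dumps the device’s flash; the cost is an extra key lookup at read time, which is exactly the confidentiality-versus-availability trade-off described above.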
Summary: The Pentagon has received more power to conduct cyber operations in the past 18 months. But for the top Army commander in the Middle East and Central Asia, the new authority is not enough.
Why it matters: Today’s warfare is complicated but—to an extent—can be summed up in the rooftop scene from ‘In the Line of Fire’ with Clint Eastwood, John Malkovich and Dylan McDermott—multiple players in theatre, each with their own allegiances and incentives. In terms of physical military action and response, it has historically been a matter of checks and balances and, ultimately, a phone call to the President for wartime permission to take physical action and cause physical damage. In an extreme case, like a nuclear attack, there’s an extensive process. However, day-to-day military operations have traditionally required rapid decision making and swift action; failure to act in a timely manner can mean the difference between victory and defeat, casualties and a safe stateside return. What we see in this article is that Pentagon officials have realized the need for an equally rapid response when it comes to cyber. The speed of cyber has dramatically altered the speed of war. The pace of change that we’ve seen in the consumer space is the same pace that’s needed for military advantage, both offensive and defensive. ‘Click and load’ has been replaced with ‘flick and scroll’ for consumers. For the military, and for our nation’s security, that same level of speed must be normalized in daily operations.
Summary: Memes—those funny images and text that spread like wildfire on social media—are now being used by hackers to execute commands on your computer.
Why it matters: Richard Dawkins is credited with coining the term ‘meme’ in the 1970s. Since then, the term has evolved to encapsulate, mostly, those square photos on various social media platforms. ‘Memeing’ has become so popular that some savvy social media enthusiasts have turned a fun hobby into a full-time job; some companies, such as Tumblr, even employ a full-time Meme Librarian, and Reddit has its own meme economy. Often categorized as humorous, memes can actually shed light on cultural, social and political trends. Memes aren’t just popular, they are seemingly endless in number. One meme-making site alone claimed its users cranked out 2,000 memes a day—and that’s just one site out of hundreds. While the meme craze is fairly new, the idea of steganography—the technique bad actors are using in memes—dates to ancient Greece around 440 B.C. The Greeks would shave the head of a messenger, tattoo a message on his scalp, then wait for the hair to grow back before sending him on his way. Invisible ink was another popular method, used during the American Revolutionary War. In the digital age, methods such as least significant bit (LSB) substitution are employed to discreetly add characters and messages to images. As we predicted, attacks are going to get more creative in 2019. Leveraging social media platforms and what is arguably the most popular type of content on those platforms is a creative method; and, when executed in tandem with social engineering tactics and a sophisticated phishing campaign, thoughtless social media scrolling may become a very effective vector for hackers. This story covers only one example of media-parsing command injection vulnerabilities—converting a stream that is supposed to be a data payload into something that is instead executed as code—but the reality is that this type of vulnerability spans all media formats: Flash, media players, browser extensions, plugins, even GIF and JPEG rendering engines.
It’s hard to build safe parsers when they implicitly trust their input, and dangerous habits—like exposing a browser to external media streams—make it that much easier for bad actors to leverage watering holes. Whether attackers utilize memes or another means, it’s important to realize that the vast majority of such attacks could be thwarted by focusing not just on the technical risks, but on human tendencies and weaknesses as well.
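To make LSB substitution concrete, here is a self-contained Python sketch that hides an ASCII payload in the least significant bits of a synthetic byte array standing in for image channel data. Real meme payloads ride inside actual image formats, and the ‘command’ below is invented for illustration, but the bit manipulation is the same idea.

```python
def embed(pixels: bytearray, message: bytes) -> bytearray:
    """Hide each bit of `message` in the least significant bit of one byte.
    Touching only the low bit changes each channel value by at most 1,
    which is invisible to the eye."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for message")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit
    return out

def extract(pixels: bytes, length: int) -> bytes:
    """Recover `length` bytes by reading back the low bit of each byte."""
    bits = [b & 1 for b in pixels[: length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[j : j + 8]))
        for j in range(0, len(bits), 8)
    )

# A flat gray "image": hiding data changes each byte by at most 1.
cover = bytearray([128] * 256)
stego = embed(cover, b"/run evil.exe")     # hypothetical C2 command
print(extract(stego, 13))                  # b'/run evil.exe'
```

The stego image is statistically almost identical to the cover, which is why malware on an infected machine can poll a public image feed and quietly read commands out of it.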
Summary: New guidance was released in the UK this week to help auto industry leaders strengthen cybersecurity and protect self-driving cars from hacking; the new standard is the first of its type.
Why it matters: The British government released key principles for automated vehicles last year, and now the British Standards Institute has released fundamental principles of automotive cybersecurity. Earlier this year, U.S. transportation regulators grappled with tough questions around self-driving vehicle security—should police be able to disable such vehicles? Would automatic accident alerts violate an occupant’s privacy? Though the concept is aimed at creating a safer and more convenient world, the questions far outweigh the answers at this point—and that’s mostly because of complex issues around the ability of self-driving cars to exercise judgement, not just logic. For example, what happens when two self-driving cars about to pass each other encounter an unforeseen obstacle—a flash flood, a deer darting across the road, or an older-model vehicle taking an illegal turn at the last second? If one vehicle has a family with three children onboard, but the other has a single driver, do the algorithms take passengers into account before taking evasive maneuvers? And, when there’s an accident, can the ‘driver’ of a self-driving vehicle ever really be at fault? The scenarios and possible questions are endless. But those are only the questions around logic and physical safety. There are also a host of questions that remain unanswered when it comes to the cyber resilience and security of self-driving vehicles. Some have claimed that the vehicles are less hackable than most think, despite cases where vehicles have been hacked and disabled on the highway. Tesla has led the charge in automated vehicles, and even made its patents publicly available to encourage healthy competition and innovation. We can expect a few things in the foreseeable future: the massive risk associated with an autonomous vehicle debacle will drive manufacturers to invest heavily in getting it right, and classic market competition will drive innovation.
Regulation will lag behind the first rollout and there will inevitably be an accident that results in major setbacks, rollbacks and recalls. Despite the fact that multiple manufacturers are pushing hard to get autonomous vehicles on the road, only 21% of adult Internet users said they are likely to ride in a self-driving car. The public sentiment toward autonomous vehicles doesn’t support the traditional IT adoption curve—early adopters, followed by early and late majority, and those who lag, finally ceding to the new tech only because their old tech has reached end of life. It’s likely that such a curve won’t apply to the new tech offered by autonomous vehicles, simply because there’s a lot more on the line for buyers and manufacturers alike.
Summary: Following last week’s Patch Tuesday, Microsoft issued an emergency update to address an Internet Explorer (IE) vulnerability.
Why it matters: In last week’s roundup, we explained that Patch Tuesday isn’t just an alert for users and system administrators—bad actors are also eager to act on the identified vulnerabilities before patches are deployed. The latest emergency patch addresses a browser flaw that impacts every version of Windows; if left unpatched, it could allow bad actors to take control of affected PCs. While Microsoft has yet to release further details, it appears that bad actors would leverage a phishing campaign to prompt targeted users to click on a link redirecting them to a fake website. Without a dedicated patch management process and team, patching can be overwhelming, tedious and difficult. Most importantly, vulnerabilities must be patched as rapidly as possible, because bad actors are quick to act. When the EternalBlue exploits were released in 2017, it took less than 60 days for malware leveraging those exploits to appear in the wild. And research shows that bad actors are often quicker to take action than security teams when it comes to zero-day exploits, targeted phishing campaigns and other methods. Here’s a look at some Forescout patching guidance as well as patch management guidance from NIST. This latest IE vulnerability is a reminder to use modern browsers with a built-in sandbox—a contained environment that keeps malware and other security threats from infecting your computer—and to leverage Continuous Diagnostics and Mitigation (CDM) to ensure devices are compliant and up to date the moment they join the network.
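The compliance idea behind CDM, checking a device’s state the moment it joins the network, can be sketched as a simple baseline comparison. The Python below is a toy: the component names and version numbers are invented, and a real deployment would pull them from an asset inventory and enforce network access based on the result.

```python
# Hypothetical minimum-version baseline; real values would come from policy.
REQUIRED = {"browser": (11, 0, 105), "os": (10, 0, 17763)}

def parse(version: str) -> tuple:
    """Turn a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))

def non_compliant(inventory: dict) -> list:
    """Return the names of components missing or older than the baseline."""
    return [name for name, minimum in REQUIRED.items()
            if parse(inventory.get(name, "0")) < minimum]

# A joining device reports its software versions (made-up example data).
device = {"browser": "11.0.90", "os": "10.0.17763"}
print(non_compliant(device))   # ['browser'] -> patch before granting access
```

The point of the sketch is the timing: the check runs at connection time, so an unpatched browser is flagged before it can be pointed at a malicious link, rather than weeks later in an audit.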