12 Biggest Cloud Threats And Vulnerabilities In 2020

From misconfigured storage buckets and excess privileges to Infrastructure as Code (IaC) templates and automated attacks, here’s a look at 12 of the biggest cloud threats technical experts are worried about this year.

Heat In The Clouds

Data breaches, cybercrime and targeted attacks in the cloud have driven demand for cloud security products and services in recent years. Numerous high-profile data breaches have occurred as polymorphic, self-mutating code and evasion techniques have rendered traditional security technologies and endpoint protection mechanisms obsolete.

The increased sophistication of hacking techniques and technological advancements in cyberespionage are expected to unleash new cloud threats and vulnerabilities such as ransomware, malicious insiders, DDoS and zero-day threats. The industry is also being shaped by geographic and industry-specific regulations that provide strict rules around data governance and privacy.

The cloud security market is expected to expand at a 13.9 percent compound annual growth rate and become a $12.63 billion market by 2024, according to Grand View Research. North America currently holds the largest share of revenue due to increased awareness of cyberattacks and corporate espionage.

From misconfigured storage buckets and excess privileges to automated attacks and Infrastructure as Code (IaC) templates, here’s a look at 12 of the biggest cloud threats and vulnerabilities technical experts are worried about this year.

12. Multitude Of Configuration Options

Organizations can navigate the shared responsibility model successfully if they use products and integrations properly, which really comes down to customer education, according to Matt Pley, Fortinet’s vice president of cloud and service providers.

Configurations are the most common source of errors given how many mechanisms and settings users need to understand inside the cloud, Pley said. A guided tour through the configuration is essential when building out cloud applications and infrastructure, according to Pley.

For organizations building out remote access during the coronavirus pandemic, Pley said it was important to provide a way for users to connect to the public cloud infrastructure. There are many different ways cloud infrastructure and authentication procedures can be configured, Pley said.

11. Lack Of Continuous Scanning

Clients often aren’t aware of new items in their environment since applications are constantly getting spun up and down, and rapid deployment could lead to the rapid introduction of problems, according to Onkar Birk, Alert Logic’s chief product officer.

The ease with which apps can be introduced into an environment has made it difficult for companies to detect and orchestrate security around them, Birk said. Clients often have a multitude of departments spinning up cloud applications, and Birk said it’s difficult for companies to centrally manage that if they aren’t fully aware of what’s going on.

Businesses should make sure they’re continuously scanning to ensure all data is encrypted and that there aren’t any backdoor versions of the server that are accessible, said Rohit Dhamankar, Alert Logic’s vice president of threat intelligence products. Too often, Dhamankar said the encryption algorithms in place are weak, which ends up leaving SSLs, servers and serverless environments vulnerable.
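As a rough illustration of that kind of continuous check, the Python sketch below walks an AWS account’s S3 buckets and flags any without default server-side encryption. It assumes boto3 with already-configured AWS credentials, and it is a minimal example rather than a full scanning product.

```python
# Minimal sketch: periodically inventory S3 buckets and flag any without
# default server-side encryption. Assumes AWS credentials are already
# configured for boto3; bucket names come from the account itself.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def unencrypted_buckets():
    """Return the names of buckets with no default encryption configured."""
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            s3.get_bucket_encryption(Bucket=name)
        except ClientError as err:
            if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
                flagged.append(name)
            else:
                raise
    return flagged

if __name__ == "__main__":
    for name in unencrypted_buckets():
        print(f"No default encryption: {name}")
```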

10. Interconnectivity Of Cloud Functions

Many organizations don’t understand the basics of how to configure and harden cloud technologies as well as how cloud services interact with one another, according to Sam Bisbee, chief security officer at Threat Stack. Virtual machines are increasingly considered users in the context of a cloud environment and are therefore leveraging APIs from public cloud vendors to request keys and change infrastructure.

Therefore, if an organization has a compromise, Bisbee said the attacker can make a network call and start controlling the infrastructure. The interplay between networked services is something that most teams aren’t prepared to deal with, according to Bisbee.
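A minimal sketch of why that matters, assuming AWS and boto3: any process running on a workload that carries an IAM role picks up that role’s credentials automatically, so a single API call from a compromised machine is enough to start enumerating, or changing, the surrounding infrastructure. The region shown is illustrative.

```python
# Minimal sketch: any process running on an instance (or in a container)
# that carries an IAM role inherits that role's credentials implicitly
# and can call the cloud control plane -- which is what an attacker does
# after a compromise. Assumes AWS and boto3; the region is illustrative.
import boto3

# boto3 resolves credentials from the instance profile automatically;
# no secrets need to be present on disk for this call to succeed.
ec2 = boto3.client("ec2", region_name="us-east-1")

# One API call is enough to start mapping (or modifying) the environment.
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])
```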

For businesses deploying Kubernetes for the first time, Bisbee recommended using Amazon Elastic Kubernetes Service (EKS) since it shifts responsibility for the integrity of the data to AWS and only requires the customer to learn a small amount about maintenance. Bisbee said securing a container deployment on one’s own is a non-trivial task.

9. Lack Of Adherence To Policies

Organizations will often write cloud security policies in a document and hand it over to the DevOps team without thoroughly considering how these policies will be put in place, according to Steve Quane, Trend Micro’s executive vice president of network defense and hybrid cloud security.

Security teams expect someone else in the organization to configure and implement the cloud security policies, while DevOps teams don’t do manual configuration or implementation and expect a Terraform script or something automated, Quane said. Account information is needed to pull up APIs, but given that most organizations have a bunch of different account owners, it isn’t clear who to ask, Quane said.

The gap between written policy and tools to automate the implementation process means organizations struggle to know what’s in the cloud and wrap their arms around it, according to Quane. Given how frequently misconfigurations occur in the cloud, Quane said the risk is that departments are spinning up insecure infrastructure in the cloud without anyone else in the organization even being aware.
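One way to narrow that gap is to express a written policy as an automated check over the same Terraform output DevOps teams already produce. The sketch below assumes a plan exported with "terraform show -json plan.out > plan.json" and Terraform AWS provider attribute names; it flags S3 buckets declared with a public ACL, and the single rule is illustrative rather than a complete policy engine.

```python
# Minimal sketch: turn one written policy ("no public S3 buckets") into an
# automated check against a Terraform plan exported as JSON. Attribute names
# follow the Terraform AWS provider; adjust for the provider version in use.
import json
import sys

def public_buckets(plan_path):
    with open(plan_path) as fh:
        plan = json.load(fh)
    violations = []
    for change in plan.get("resource_changes", []):
        if change.get("type") != "aws_s3_bucket":
            continue
        after = (change.get("change") or {}).get("after") or {}
        if after.get("acl") in ("public-read", "public-read-write"):
            violations.append(change.get("address"))
    return violations

if __name__ == "__main__":
    for address in public_buckets(sys.argv[1]):
        print(f"Policy violation (public bucket): {address}")
```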

8. Breakdown In Shared Responsibility Model

Adversaries are taking advantage of a breakdown in the shared responsibility model as it relates to data access rights and data standards, according to Stu Solomon, Recorded Future’s chief operating officer. In a traditional structured environment, he said users are granted access to data based on their job role or responsibility, and the administrator can monitor, maintain, control or manage their access.

And when migrating into a cloud environment, Solomon said data identification, data classification and a constant review and reconfirmation of an individual’s need to access that data must continue. The monitoring and enforcement of individual access rights can sometimes be overlooked during the migration process itself, according to Solomon.

Migrations are typically initiated and executed outside the security team by business and operational decision-makers, and security practitioners are often not involved in the process at all, Solomon said. As a result, Solomon said data access issues can crop up once the migration is complete and day-to-day operations have returned to normal.
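As a sketch of what a recurring access review could start from, assuming AWS and boto3, the snippet below simply inventories IAM users and their directly attached policies so someone can reconfirm whether each grant is still needed; group- and role-based access would need the same treatment.

```python
# Minimal sketch: inventory IAM users and their directly attached policies
# as the raw material for a periodic access review. Assumes AWS and boto3;
# grants inherited through groups and roles are not covered here.
import boto3

iam = boto3.client("iam")

paginator = iam.get_paginator("list_users")
for page in paginator.paginate():
    for user in page["Users"]:
        name = user["UserName"]
        attached = iam.list_attached_user_policies(UserName=name)["AttachedPolicies"]
        policy_names = [p["PolicyName"] for p in attached] or ["(no attached policies)"]
        print(f"{name}: {', '.join(policy_names)}")
```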

7. Misconfiguration Of Serverless, Container Environments

Moving to serverless and container environments has created a new perimeter and new types of workloads that organizations need to learn how to protect, according to Marina Segal, Check Point’s head of product management for cloud SecOps and compliance.

Since serverless environments don’t have underlying infrastructure, Segal said companies must ensure the function itself has the right set of definitions and policies in place that won’t allow for the execution of malicious activities. There must also be a presence in run time to analyze the behavior of functions and block anything that’s abnormal, according to Segal.

If unencrypted keys are left as plain text in a user’s code and that code ends up in a public repository or getting exposed, Segal said attackers can leverage those keys to get to many other places in the company’s environment. Businesses should leverage a cloud-native key management system and make sure keys are encrypted and rotated instead of leaving them in plain text as part of the code, Segal said.
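A minimal sketch of Segal’s suggestion, assuming AWS Secrets Manager as the cloud-native secret store and boto3 as the client: the application fetches the credential at run time instead of carrying it in source, and the secret name shown is purely illustrative.

```python
# Minimal sketch: fetch a credential from a managed secret store at run time
# instead of embedding it in source. Assumes AWS Secrets Manager via boto3;
# the secret name "prod/payments/api-key" is purely illustrative.
import boto3

secrets = boto3.client("secretsmanager")

def get_api_key():
    # The secret is stored encrypted with a KMS key and can be rotated
    # without any code change; nothing sensitive lives in the repository.
    response = secrets.get_secret_value(SecretId="prod/payments/api-key")
    return response["SecretString"]
```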

6. Lack Of Security Around Databases

Too many organizations are leaving default database configurations in place as they rush to market with tools that might be in more of a prototype than production state, according to Tim Mackey, principal security strategist at Synopsys. As companies fight for market share in the cloud, many don’t require third-party sign off on code changes and fail to examine deployments after the fact for any errors.

The database defaults, or the hardening efforts of the person creating the database, might not be sufficient, Mackey said. Companies therefore need expertise, either on staff or through a channel partner, around MongoDB, Microsoft SQL Server and Oracle databases to ensure that all database instances have been identified and locked down, according to Mackey.

From a regulatory standpoint, businesses must ensure that only users that need access to data in an unencrypted form are granted access in order to maintain compliance with PCI or CCPA, Mackey said. The biggest thing companies must do is ensure measures are in place to audit for expected behavior, which Mackey said will help with incident response by flagging potential asset breaches or data leakage.
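As an illustration of that kind of audit, the sketch below probes whether a MongoDB instance still accepts unauthenticated access, a common leftover from prototype-stage defaults. It assumes the pymongo driver, the hostname is illustrative, and a production check would cover far more than this single setting.

```python
# Minimal sketch: flag a MongoDB instance that still allows unauthenticated
# access -- a common leftover from prototype-stage defaults. Assumes pymongo;
# the hostname is illustrative, and a short timeout keeps the probe quick.
from pymongo import MongoClient
from pymongo.errors import OperationFailure, ServerSelectionTimeoutError

def accepts_anonymous_access(host):
    client = MongoClient(host, serverSelectionTimeoutMS=3000)
    try:
        # Listing databases requires authorization when access control is on,
        # so success here without credentials means the instance is wide open.
        client.list_database_names()
        return True
    except OperationFailure:
        return False
    except ServerSelectionTimeoutError:
        return False  # unreachable; not conclusive either way

if __name__ == "__main__":
    if accepts_anonymous_access("mongodb://db.example.internal:27017"):
        print("WARNING: database accepts unauthenticated connections")
```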

5. Infrastructure As Code Templates

Developers are increasingly using templates they’ve found in places like GitHub as the basic building blocks for their cloud infrastructure, but putting these templates right into the cloud often introduces misconfigurations in the environment, according to Matt Chiodi, Palo Alto Networks’ chief security officer of public cloud.

Given the massive number of cloud migrations happening, Chiodi said DevOps teams have begun using these templates over the past two years to build and scale quickly. Developers typically start by only using these templates in a dev environment, but they’ll often end up in production environments with cloud storage logging disabled, meaning that potential security events can’t be identified or attributed.

Most templates are created through a three-step design, code and deploy process that too often doesn’t include a fourth step of scanning these templates for security issues, according to Chiodi. As a result, Chiodi said nearly 200,000 Infrastructure as Code templates have high or medium security vulnerabilities, which can result in unnecessary exposure to attackers.
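A hedged sketch of what that missing fourth step can look like: the snippet below scans a JSON-format CloudFormation template and flags S3 buckets defined without access logging, echoing the logging gap Chiodi describes. The single rule is illustrative; real template scanners apply hundreds of checks.

```python
# Minimal sketch of a pre-deployment Infrastructure as Code scan: check a
# JSON-format CloudFormation template for S3 buckets with access logging
# disabled. The single rule here is illustrative, not a complete scanner.
import json
import sys

def buckets_without_logging(template_path):
    with open(template_path) as fh:
        template = json.load(fh)
    flagged = []
    for name, resource in template.get("Resources", {}).items():
        if resource.get("Type") != "AWS::S3::Bucket":
            continue
        properties = resource.get("Properties", {})
        if "LoggingConfiguration" not in properties:
            flagged.append(name)
    return flagged

if __name__ == "__main__":
    for name in buckets_without_logging(sys.argv[1]):
        print(f"Bucket '{name}' has no access logging configured")
```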

4. Excess Privileges

Too many organizations fail to operate by the principle of least privilege in the cloud, making exceptions that result in too many people being granted administrator access to services and accounts, said Alert Logic’s Dhamankar. Between 8 and 9 percent of Alert Logic’s customer base has excess privileges on their accounts in areas like databases, which Dhamankar said can be a source of major trouble.
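One concrete way to surface those excess privileges, assuming AWS and boto3, is sketched below: it lists every IAM user, group and role with the AWS-managed AdministratorAccess policy attached so the grants can be reviewed.

```python
# Minimal sketch: list every IAM user, group, and role that has the
# AWS-managed AdministratorAccess policy attached -- one concrete way to
# surface excess privileges for review. Assumes AWS and boto3.
import boto3

iam = boto3.client("iam")
ADMIN_POLICY_ARN = "arn:aws:iam::aws:policy/AdministratorAccess"

paginator = iam.get_paginator("list_entities_for_policy")
for page in paginator.paginate(PolicyArn=ADMIN_POLICY_ARN):
    for user in page["PolicyUsers"]:
        print(f"user:  {user['UserName']}")
    for group in page["PolicyGroups"]:
        print(f"group: {group['GroupName']}")
    for role in page["PolicyRoles"]:
        print(f"role:  {role['RoleName']}")
```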

A lot of services in the cloud are interconnected, meaning that one exploited password could provide access across the entire cloud network if the administrative rights allow for that, said Alert Logic’s Birk. As a result, Birk said a breach or attack on one individual account in the cloud could lead to an exploit of greater magnitude.

Organizations must examine user behavior for irregularities as compared with other users in a similar role, Birk said. Companies should also consider what privileges the user had in the past and leverage machine learning to analyze what constitutes normal behavior for that particular user, according to Dhamankar.

3. The Security Team Itself

The traditional approach to security directly conflicts with how the cloud is being used, and security teams can no longer expect to be the gatekeeper since departments can just go around them to get stuff done, said Mark Nunnikhoven, Trend Micro’s vice president of cloud research. Security teams tend to be too worried about zero-day threats even though errors and misconfigurations cause more trouble.

Security teams often come across to other departments as arrogant or excessively focused on attacks, especially when there’s no business context backing them up, Nunnikhoven said. And the security teams often find it easier to keep doing things the same way they’ve always done it since they’re constantly in firefighting mode and feel like they don’t have enough time to get their heads above water.

Security teams must become a trusted resource within their own organization by educating, training and informing other departments, Nunnikhoven said. They should also enable teams practicing a DevOps philosophy to move forward by delivering security in an automated fashion that checks for misconfigurations in a way that’s compatible with how the team operates, according to Nunnikhoven.

2. Low Barriers To Entry For Bad Actors

The ubiquitous and available nature of compute and storage capabilities in the cloud has resulted in low barriers to entry for those looking to leverage cloud environments for malicious or nefarious efforts, according to Recorded Future’s Solomon.

First off, Solomon said the cloud can be used as a launching point for attacks since it gives adversaries a relatively anonymous environment for organizing nefarious activity that can be easily set up or broken down. Second, bad actors can easily set up or stand down compute capability in the cloud, using it to host infrastructure or machine power for malicious activities ranging from DDoS attacks to phishing campaigns.

Employees with significant cloud responsibilities or key access also become a valuable attack target for threat actors, Solomon said. Finally, Solomon said hackers can hijack privileged access communications or flows in public or hybrid cloud scenarios that are less secure than intended.

1. Automated Attacks

Automated attacks in the cloud have become easier and more popular as software gets less expensive, the quality of attacker builds improves, customers put more systems online and environments become more complex, according to Threat Stack’s Bisbee.

The ability for any developer to put a system on the internet with just the swipe of a credit card has created an opportunity for highly leveraged automated attacks, Bisbee said. Software advancements have made large-scale data processing and collection easier for adversaries, Bisbee said, and there are often gaps between how a policy document says things work and how everything is actually running.

In order to defend against automated attacks, Bisbee said organizations must know where all their systems are, what’s running on them, and what normal is. Businesses must adopt automation in their defense as well, Bisbee said, leveraging software engineering capabilities to automate the isolation and containment of potential incidents so that humans have more time to investigate.
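As a minimal sketch of that kind of automated containment, assuming AWS and boto3, the snippet below moves a suspect EC2 instance onto a quarantine security group with no inbound or outbound rules, cutting it off from the network while preserving it for investigation; both IDs are illustrative.

```python
# Minimal sketch: automated containment by moving a suspect instance onto a
# "quarantine" security group with no inbound or outbound rules, preserving
# the host for investigation. Assumes AWS and boto3; both IDs are illustrative.
import boto3

ec2 = boto3.client("ec2")

def quarantine_instance(instance_id, quarantine_sg_id):
    # Replacing the instance's security groups cuts it off from the network
    # without terminating it, so forensic evidence stays intact.
    ec2.modify_instance_attribute(InstanceId=instance_id,
                                  Groups=[quarantine_sg_id])

if __name__ == "__main__":
    quarantine_instance("i-0123456789abcdef0", "sg-0fedcba9876543210")
```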