Seven years ago, I wrote about how cloud security configuration errors were putting enterprise data at risk. Amazon storage buckets were being left open to the public left and right, with millions of sensitive records exposed. Companies were new to the whole cloud thing, and cloud providers weren’t making it easy to lock everything down the way it should be.

You’d think that by now enterprises would have their cloud assets locked down. Wouldn’t they?

Unfortunately, they still don’t.

According to an April report by Qualys, 28% of organizations surveyed had a cloud- or SaaS-related breach in the past year. And 24% said that misconfigured services posed the biggest risk to their cloud environments, ranking misconfiguration third after human error and targeted cyberattacks.

Qualys also looked at 44 million virtual machines hosted in public clouds and found that 45% of those on AWS had misconfigured resources, as did 63% on Google Cloud Platform (GCP) and 70% on Azure.

The cybersecurity company also found that 63% of publicly accessible VMs had no encryption on attached EBS storage.

More tools, more opportunities for misconfiguration

The proliferation of SaaS tools that companies use just expands the opportunities to make configuration mistakes. Data on 4.7 million Blue Shield of California members was recently exposed due to a misconfiguration in Google Analytics.

“Microsoft, Google, and Amazon have handed us a problem,” says Andrew Wilder, CSO at Vetcor, a national network of more than 900 veterinary hospitals. “By default, everything is insecure, and you have to put security on top of it. It would be much better if they just gave us out-of-the-box secure stuff. Would you buy a car that doesn’t have locks? They wouldn’t even sell that car.”

This security gap is what allows third-party vendors to exist, he says. “You should be building products — and I’m talking to you, Google, Microsoft, and Amazon — that are secure by design, so you don’t have to get a third-party tool. They should be out of the box secure.”

Building an in-house security stack is not an option at Vetcor, so the company uses the native tools provided by the hyperscalers. “And for us, right now, that’s good enough.”

Stop. Reassess. Reconfigure.

Last year, according to Ayan Roy, EY Americas cybersecurity competency leader, the highest number of breaches were caused by shared cloud repositories. “That’s where we saw the maximum amount of data exfiltration,” he says. “A lot was from shared cloud stores and SaaS applications.” That happened despite clients having cloud security blueprints and governance processes in place, and he attributes it to shadow IT.

Organizations are turning services on without configuring all the security features: not turning on all the logging and monitoring, not turning on MFA, not locking down access. “Business wants to go quickly and time to value is absolutely important.” But business teams do not include cybersecurity teams in these decisions, and that is where the problem begins.

“Cyber becomes an afterthought. If they’re in the right conversation, they can do things more proactively instead of doing things retroactively.”

One proactive measure that any company can take is to drive more awareness of enterprise-approved, secure platforms. That might require better communication or additional training to ensure that employees don’t go off and use insecure systems when secure options are available.

Another blind spot comes up during mergers and acquisitions, he says. “Be proactive in acquisitions,” says Roy. “Do the due diligence and account for that, and make sure you have the right investment plan for cybersecurity.”

Top cloud configuration mistakes

According to Scott Wheeler, cloud practice lead at Asperitas, the bigger the organization or the more regulatory oversight it faces, the fewer cloud configuration errors it will see. “But if you get to the bottom of the Fortune 1000, say, if you’re a manufacturing company and the implications of making a mistake are perceived not to be so huge, and they don’t have regulators hovering over them, they might be fast and loose. That’s where you see errors happening.”

And the smallest organizations have tremendous problems, since they don’t have the staff or the tools to manage configuration risks. Those risks include exposed storage buckets and web services, as well as excessive privileges.

“The whole concept of zero trust is predicated on the fact that you can lock down access to the minimum of what you need,” says Wheeler. “But that’s hard to do.” People often increase permissions during development and don’t put them back when things go into production.

The biggest mistake that Wheeler sees is that databases or other cloud assets don’t communicate over secure private networks. “A lot of times, out of the box, those services don’t have that,” he says. “It takes some work to configure it so that it’s strictly private network traffic to my private cloud environment or my on-prem environment. That’s a huge one, and we see a lot of hacks with that — and a lot of compromises on internal environments.”

Lack of multi-factor authentication is another risk, as MFA helps protect cloud environments against threats such as leaked credentials. Lack of encryption is another concern: according to Thales, only 51% of sensitive cloud data is encrypted, and only 8% of enterprises encrypt 80% or more of their cloud data.

9 tips for cloud configurations to reduce data exposure

1. Implement multi-factor authentication everywhere

Require MFA for all cloud access, not just for some users. The reason so many companies are reporting data breaches involving their Salesforce instances is that they didn’t have MFA properly set up and configured, allowing adversaries to take over user accounts.
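As an illustration, here is a minimal boto3 sketch, not an official audit tool, that flags AWS IAM users with no MFA device enrolled. The same kind of check applies to any platform that exposes user security settings through an API.

```python
# Minimal sketch: flag AWS IAM users without an enrolled MFA device.
# Assumes AWS credentials are already configured for boto3.
import boto3

iam = boto3.client("iam")

# Paginate through every IAM user in the account.
for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        # An empty MFADevices list means no virtual or hardware MFA is enrolled.
        devices = iam.list_mfa_devices(UserName=user["UserName"])["MFADevices"]
        if not devices:
            print(f"MFA missing: {user['UserName']}")
```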

2. Default to private networks for all services

Configure databases and cloud services to communicate only over private networks, not the public internet. According to Scott Wheeler at Asperitas, many services don’t default to this option, and it’s the top misconfiguration he sees in data breach investigations.
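One common instance of this on AWS can be spotted quickly; the boto3 sketch below, assuming configured credentials, lists RDS databases that resolve to public IP addresses:

```python
# Hedged sketch: list RDS database instances reachable from the public internet.
import boto3

rds = boto3.client("rds")

for page in rds.get_paginator("describe_db_instances").paginate():
    for db in page["DBInstances"]:
        # PubliclyAccessible=True means the instance resolves to a public IP.
        if db.get("PubliclyAccessible"):
            print(f"Public database: {db['DBInstanceIdentifier']}")
```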

3. Encrypt sensitive data at rest and in transit

Only 51% of sensitive cloud data is encrypted, according to Thales. Today, encryption should be the default for all new and existing resources. And with quantum computing on the horizon, enterprises should already be using quantum-proof encryption algorithms to defend against harvest-now, decrypt-later attacks.
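On AWS, for example, encryption at rest for new EBS volumes can be made the account default, which addresses the unencrypted-EBS problem Qualys flagged above. A minimal boto3 sketch of that check and opt-in (it applies per region, so run it in each region you use):

```python
# Minimal sketch: check, and if needed enable, EBS encryption by default
# for the current AWS region. Existing unencrypted volumes are unaffected.
import boto3

ec2 = boto3.client("ec2")

# Is account-level default encryption for new EBS volumes turned on?
if not ec2.get_ebs_encryption_by_default()["EbsEncryptionByDefault"]:
    # Opt in so every newly created volume is encrypted at rest.
    ec2.enable_ebs_encryption_by_default()
    print("EBS encryption by default enabled for this region")
```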

4. Apply least-privilege access controls

Giving users and systems access to the minimum possible resources is a massive headache. And when additional privileges are granted for a particular reason, they’re often not downgraded again when the need passes. But least privilege is a cornerstone of modern zero trust security principles, and overprivileged accounts can easily lead to data losses.
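One place to start is hunting for obviously overprivileged policies. The boto3 sketch below is a rough heuristic, not a full policy analyzer: it flags customer-managed IAM policies that allow wildcard actions.

```python
# Rough heuristic sketch: flag customer-managed IAM policies that allow "*"
# actions, a common sign of over-privilege.
import boto3

iam = boto3.client("iam")

for page in iam.get_paginator("list_policies").paginate(Scope="Local"):
    for policy in page["Policies"]:
        # Fetch the default (active) version of the policy document.
        doc = iam.get_policy_version(
            PolicyArn=policy["Arn"],
            VersionId=policy["DefaultVersionId"],
        )["PolicyVersion"]["Document"]
        # Statement and Action can each be a single item or a list.
        stmts = doc.get("Statement", [])
        stmts = [stmts] if isinstance(stmts, dict) else stmts
        for stmt in stmts:
            actions = stmt.get("Action", [])
            actions = [actions] if isinstance(actions, str) else actions
            if stmt.get("Effect") == "Allow" and "*" in actions:
                print(f"Wildcard allow: {policy['PolicyName']}")
```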

5. Use infrastructure as code for all changes

When administrators or users make changes to cloud configurations in the cloud management consoles, it’s difficult to track those changes and to revert them if something goes wrong. Plus, humans can easily make mistakes. The solution experts advise is to adopt the principle of “infrastructure as code” and use configuration management tools so that all changes are checked against policies, tracked and audited, and can easily be rolled back.
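As a taste of what that looks like in practice, here is a minimal sketch using Pulumi’s Python SDK; Pulumi is one of several IaC options, and the resource names are illustrative. The bucket and its security posture are declared together in version-controlled code rather than clicked into place in a console:

```python
# Minimal IaC sketch with Pulumi's Python SDK (pulumi, pulumi_aws assumed
# installed). Resource names are placeholders.
import pulumi
import pulumi_aws as aws

# Declare the bucket in code so every change is reviewable and revertible.
logs_bucket = aws.s3.BucketV2("app-logs")

# Codify the security posture alongside the resource itself:
# public access is blocked by declaration, not by a console click.
aws.s3.BucketPublicAccessBlock(
    "app-logs-block-public",
    bucket=logs_bucket.id,
    block_public_acls=True,
    block_public_policy=True,
    ignore_public_acls=True,
    restrict_public_buckets=True,
)

pulumi.export("bucket_name", logs_bucket.id)
```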

6. Scan for misconfigurations continuously

And it’s not enough to check that configurations are set correctly when the cloud assets are first set up. Companies need to make sure that they don’t change. Cloud providers offer some native tools to help with this, and cloud security posture management tools can fill in the gaps or provide multi-cloud monitoring.
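A scheduled job can catch drift between full scans. This hedged boto3 sketch, meant to run on a cron or Lambda schedule rather than replace a CSPM tool, flags S3 buckets whose public-access block is missing or incomplete:

```python
# Recurring-check sketch: flag S3 buckets without a complete public-access
# block. Intended to run on a schedule; not a CSPM replacement.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    try:
        cfg = s3.get_public_access_block(Bucket=bucket["Name"])[
            "PublicAccessBlockConfiguration"
        ]
        # All four settings should be True for a fully locked-down bucket.
        if not all(cfg.values()):
            print(f"Partial public-access block: {bucket['Name']}")
    except ClientError:
        # No configuration at all: the bucket relies on ACLs/policies alone.
        print(f"No public-access block: {bucket['Name']}")
```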

7. Lock down storage buckets and disable public access

Unsecured Amazon S3 buckets were all the rage with hackers a few years ago — and they are still a problem for companies. According to Tenable’s analysis of cloud environments, 9% of publicly accessible cloud storage contains sensitive data. Use bucket policies and access controls to ensure storage is private by default, and if you change its availability to public for testing or some other reason, make sure to make it private again when you’re done.
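Remediation can be scripted, too. The snippet below is a sketch that restores the full public-access block on a single bucket; “my-bucket” is a placeholder name.

```python
# Remediation sketch: re-enable the full public-access block on one bucket.
# "my-bucket" is a placeholder.
import boto3

s3 = boto3.client("s3")

s3.put_public_access_block(
    Bucket="my-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```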

8. Enable comprehensive logging and monitoring for all deployments

Companies will often have monitoring for major cloud services, but shadow IT deployments are left in the dark. This is less a technology problem than a management one and can be addressed by better communications with business units and a more disciplined approach to deploying technology on an enterprise-wide level. With the rapid pace of technological change, this is harder to do but also more important, as AI and other new cloud services may touch sensitive data and processes.
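On the cloud side, a simple automated check can at least confirm that activity in every region is being recorded. This boto3 sketch, assuming an AWS environment, verifies that an actively logging multi-region CloudTrail trail exists:

```python
# Minimal sketch: verify at least one multi-region CloudTrail trail exists
# and is actively logging, so activity in every region is recorded.
import boto3

ct = boto3.client("cloudtrail")

trails = ct.describe_trails()["trailList"]
covered = [
    t for t in trails
    if t.get("IsMultiRegionTrail")
    and ct.get_trail_status(Name=t["TrailARN"])["IsLogging"]
]
print("Multi-region logging OK" if covered else "No active multi-region trail")
```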

9. Start secure from day one

Build security into your cloud architecture from the beginning — it’s much harder to retrofit later.

Will AI solve every problem?

Vendors are promising that AI will make security easier, cheaper, and more effective.

But will generative AI really solve the cloud configuration problem?

It could, says Vetcor’s Wilder. “But it’s not going to happen as quickly as people are saying. I really think agentic AI has the possibility to enhance what security teams are doing, to alleviate the lack of resources we have. But there’s a lot of organizational change management that goes into that as well.”
