Recently I drove a penetration test of several new cloud services. In one of the meetings, a stakeholder asked: “so, which network ranges are you scanning?” My reply was to the effect of: “This is Amazon. Public IPs are a bit all over the map.” Based on the concerned gaze I got in return, I knew this person needed to be educated on vulnerability assessments in the era of cloud computing. Here are the highlights of that conversation.
The Old Days
Prior to the proliferation of cloud, many companies ran their apps in-house. This meant finding a data center and subsequently buying your own servers, your own storage and so forth until you built your own little self-managed bubble of IT infrastructure. While you had complete control over your infrastructure, this approach was both costly and time-consuming.
In this type of configuration, the scope of a security assessment was relatively straightforward. You had a finite range of both internal and external IP addresses, and pointing a vulnerability scanner at those ranges was a sufficient means of performing asset discovery.
Cloud Changed Everything
Cloud computing changed the way assets are allocated. With software as a service (SaaS), the vendor may ensure that the platform and its code are secure, but the end customer (you) is typically on the hook for securing the app by configuring its built-in security controls. Once those controls are configured, testing the app is tricky. Even though you may know the URL of the application, the vendor may not be happy with you poking around it, as your testing could impact other tenants on the platform.
Moreover, each SaaS app is a little different and thus requires a highly customized assessment approach. For example, a common Salesforce.com threat model may include checks that API keys are regularly rotated, that there are only a few system administrators, that SSO is enabled, and that data loss analytics are applied to key system events, such as access by ex-employees or mass data dumps via the API.
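To make that concrete, here is a minimal sketch of what an automated version of such checks might look like. The configuration fields, the 90-day rotation window, and the admin-count threshold are all hypothetical policy assumptions for illustration, not Salesforce's actual API schema:

```python
from datetime import datetime, timedelta

def audit_saas_config(config):
    """Return a list of findings for a (hypothetical) SaaS tenant config."""
    findings = []
    # Assumed policy: API keys should be rotated at least every 90 days.
    if datetime.utcnow() - config["api_key_last_rotated"] > timedelta(days=90):
        findings.append("API key not rotated in the last 90 days")
    # Assumed policy: keep the number of system administrators small.
    if len(config["system_admins"]) > 5:
        findings.append("More than 5 system administrators")
    # Single sign-on should be enforced.
    if not config["sso_enabled"]:
        findings.append("SSO is not enabled")
    return findings
```

A script like this, fed from the vendor's admin API on a schedule, turns a one-time threat-model review into a recurring control check.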
Infrastructure as a service (think AWS, Azure, Google App Engine, Heroku) is equally challenging. IaaS has the same constraints around testing multi-tenant infrastructure, but additionally the scope, or landscape, of an audit is constantly changing. While most cloud providers let customers build “traditional” private networks within the public cloud, the external-facing services (public IPs and ports) can change very rapidly, and enumerating those service endpoints requires skillful ingestion of cloud logs by way of APIs.
The cloud hasn’t made every security tool obsolete, but it has necessitated the automation of such tooling. Simply asking the network administrator for the colo’s network ranges is no longer a sufficient means of enumerating your network. Using AWS as an example, one must routinely pull logs from AWS Config and/or CloudTrail to obtain a “current” view of the network landscape.
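As a sketch of that enumeration step, the function below pulls public IPs out of the response shape returned by EC2’s DescribeInstances API. In practice the response would come from boto3 or the AWS CLI rather than the hardcoded sample used here:

```python
def extract_public_endpoints(describe_instances_response):
    """Pull (instance-id, public IP) pairs out of an EC2
    DescribeInstances-style response dictionary."""
    endpoints = []
    for reservation in describe_instances_response.get("Reservations", []):
        for instance in reservation.get("Instances", []):
            public_ip = instance.get("PublicIpAddress")
            if public_ip:
                endpoints.append((instance["InstanceId"], public_ip))
    return endpoints

# In a real sweep the response would come from the AWS API, e.g.:
#   import boto3
#   response = boto3.client("ec2").describe_instances()
sample_response = {
    "Reservations": [
        {"Instances": [
            {"InstanceId": "i-0abc", "PublicIpAddress": "203.0.113.10"},
            {"InstanceId": "i-0def"},  # internal-only, no public IP
        ]}
    ]
}
print(extract_public_endpoints(sample_response))  # [('i-0abc', '203.0.113.10')]
```

Run on a schedule, the resulting endpoint list becomes the scan scope you hand to the vulnerability scanner, instead of a static network range.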
As for web apps: well, they’re still web apps. “Traditional” tools like AppScan, WebInspect, and Burp all still apply. But if you have a team of developers committing changes to Git every day, you’ve got a whole new slew of issues to address. Is your source code secure? Are security unit tests part of your CI/CD pipeline?
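One concrete example of a security unit test that can run on every commit is a check that no AWS access keys are hardcoded in the source tree. The file walk and pattern below are a simple sketch, not a substitute for a dedicated secret scanner:

```python
import re
from pathlib import Path

# AWS access key IDs follow a well-known pattern: "AKIA" followed by
# 16 uppercase alphanumeric characters.
AWS_KEY_PATTERN = re.compile(r"AKIA[0-9A-Z]{16}")

def find_hardcoded_keys(root):
    """Scan all .py files under `root` for AWS access key IDs."""
    hits = []
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for match in AWS_KEY_PATTERN.finditer(text):
            hits.append((str(path), match.group()))
    return hits

# Wired into CI/CD as a unit test, it fails the build on any hit:
#   def test_no_hardcoded_keys():
#       assert find_hardcoded_keys("src/") == []
```

The point isn’t this particular check; it’s that security assertions become code that gates every change, rather than findings in an annual report.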
Whether you consider cloud evolutionary or revolutionary, I posit that your security tooling and processes can evolve either way. However, traditional tooling must be automated to keep pace with an ever-changing landscape. On the SaaS front, security assessors must really understand the business flows within the context of the business application. Put simply, what’s dangerous in Workday.com may be commonplace in Google Apps. And both SaaS and IaaS require new skills in the arenas of software development and cloud computing. Does your company have what it takes to secure the cloud?