The cloud is a new playground that changes the rules and the organization of the IT department, including security. Public cloud providers operate a shared responsibility model: the provider is responsible for the security of the cloud, while the customer is responsible for security in the cloud. This is a very different security model from the on-premises one. We propose here to go through its principles and good practices.
On-premises security: what we should forget
Many on-premises security practices are made obsolete by the cloud and its native services. Continuing to apply them can be counter-productive, without providing the expected security.
We have often seen CIOs try to replicate internal security practices in the cloud, creating long procedures that are ill-suited to it, and losing the expected benefits of the cloud, such as faster time to market and scalability, in the process.
If security is too restrictive, users will find ways to bypass it! To be effective in the cloud, keep the fundamentals of security but change how they are implemented, to adapt to the new requirements of DevOps teams. This involves:
- Replacing traditional tools with the security features already embedded by default in the cloud
- Leveraging the flexibility of the cloud and DevOps & Agile principles to move from manual checks performed late in the cycle to a shift-left approach.
Secure access to the cloud environment
In the on-premises world, the trend is to control access to administration infrastructures by centralizing flows through a bastion, via Citrix for example. Replicating this as-is can have a big impact on productivity in a cloud environment, without the environment being really secure.
In a cloud environment, you will use a VPN, a bastion, or both, respecting the principle that machines must not be accessible from the outside. The bastion acts as a jump host: you give it the private IP address of the target machine and the SSH key, and it then allows the connection to that machine only.
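As an illustration, here is a minimal sketch of this jump-host pattern in Python with paramiko; the host names, IP address, user names and key paths are hypothetical placeholders.

```python
# Minimal sketch: reaching a private instance through a bastion (jump host) with paramiko.
# All host names, IPs, users and key paths below are placeholders.
import paramiko

bastion = paramiko.SSHClient()
bastion.set_missing_host_key_policy(paramiko.AutoAddPolicy())
bastion.connect("bastion.example.com", username="ec2-user",
                key_filename="/home/me/.ssh/bastion.pem")

# Open a tunnel from the bastion to the private instance (never exposed to the Internet).
channel = bastion.get_transport().open_channel(
    "direct-tcpip", dest_addr=("10.0.1.25", 22), src_addr=("127.0.0.1", 0))

target = paramiko.SSHClient()
target.set_missing_host_key_policy(paramiko.AutoAddPolicy())
target.connect("10.0.1.25", username="ec2-user",
               key_filename="/home/me/.ssh/app.pem", sock=channel)

_, stdout, _ = target.exec_command("hostname")
print(stdout.read().decode())
```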
Network filtering
Managing security on each on-premises machine was complex, which is why we set up a central access point with a DMZ.
The cloud, however, requires thinking about the security of every application upstream. Even before developing an application, you need to define what it will need to operate: which ports, which communications to which IPs, which openings to the internal/external network?
Since every project and every application is different, the security parameters adapt to the needs. It is therefore not possible to limit yourself to an infrastructure model with pre-established rules.
Watch out: Keep in mind that the cloud model requires you to set the security of each machine, including what it can access.
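To make this concrete, here is a minimal sketch (boto3 on AWS, with hypothetical VPC ID, CIDRs and names) of a per-machine security group that only accepts HTTPS from the load balancer subnet and SSH from the bastion:

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical VPC id; in practice it comes from your infrastructure definition.
sg = ec2.create_security_group(
    GroupName="webapp-sg",
    Description="Only HTTPS from the load balancer, SSH from the bastion",
    VpcId="vpc-0123456789abcdef0",
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        # HTTPS only, and only from the load balancer's subnet
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "10.0.0.0/24", "Description": "ALB subnet"}]},
        # SSH only from the bastion
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": "10.0.1.10/32", "Description": "Bastion host"}]},
    ],
)
```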
We recommend making maximum use of the cloud's native security features as the first layer of security.
We will also choose the cloud services best suited to the needs of the application, such as an ALB (Application Load Balancer) to filter traffic and accept only certain ports (and avoid exposing the machine on the Internet), with an SSL certificate to terminate HTTPS. This does not exempt you from fixing the application's own flaws, but it already protects against port scans or brute-force attempts.
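For example, an HTTPS listener on an ALB could be declared as below; this is a sketch with placeholder ARNs, assuming the load balancer, the ACM certificate and the target group already exist.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder ARNs: the load balancer, certificate and target group are assumed to exist.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/app/webapp/EXAMPLE",
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:eu-west-1:123456789012:certificate/EXAMPLE"}],
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/webapp/EXAMPLE",
    }],
)
```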
Point of vigilance: outside of experimental phases or POCs, we also recommend using a tool that scans your configuration and checks that nothing has been forgotten. Despite good intentions, it is always possible to forget to close ports that were opened as part of a test.
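Such a check does not need to be sophisticated to be useful; a minimal sketch with boto3 could simply flag every security group rule left open to the whole Internet:

```python
import boto3

ec2 = boto3.client("ec2")

# Flag any security group rule that exposes a port to the whole Internet.
for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg["IpPermissions"]:
        for ip_range in rule.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                print(f"{sg['GroupId']} ({sg['GroupName']}): "
                      f"port {rule.get('FromPort', 'all')} open to the Internet")
```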
Access management
The management of identities and access remains a fundamental security practice to protect against the risk of mishandling or oversight.
In an on-premises environment, the aim was to deploy management based on central, unique repositories. In the cloud world, a new layer must be considered: the management of access to the cloud infrastructure itself.
As good practice, you should therefore define in advance the access architecture best suited to the ways the cloud will be used and to its users, and ensure consistency between:
- cloud accounts, cloud users and application access (creation and configuration);
- master and secondary accounts, accounts dedicated to transverse activities such as log management or auditing, and shared accounts;
- the logic for assigning access policies to users, roles and groups, and the use of the permissions boundaries principle.
Plan to revise this architecture regularly to remain consistent with the organization's new uses.
Attention: rights-allocation procedures must be adapted from the outset to keep the uncontrolled expansion of accounts in check, and to ensure that the most critical actions can only be performed by a small group of administrators with the appropriate knowledge.
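The boundaries principle mentioned above can be illustrated with a sketch: creating a role whose permissions are capped by a permissions boundary (the account ID, policy and role names are hypothetical).

```python
import json
import boto3

iam = boto3.client("iam")

# Hypothetical account id and boundary policy; adapt to your own account layout.
boundary_arn = "arn:aws:iam::123456789012:policy/project-a-boundary"

assume_role_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# The permissions boundary caps what the role can ever do,
# even if an over-broad policy is attached to it later.
iam.create_role(
    RoleName="project-a-app-role",
    AssumeRolePolicyDocument=json.dumps(assume_role_policy),
    PermissionsBoundary=boundary_arn,
)
```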
Encryption
As Werner Vogels would say: "Dance like no one is watching. Encrypt like everyone is."
In an environment that is mostly accessible only internally, the requirement for encryption seems less important. Especially since not all organizations have a suitable infrastructure, and many are reluctant to take on the costs and operating rules that come with a PKI: purchase and distribution of certificates and keys, frequent rotation, etc.
The cloud has default features that make encryption easy to practice. As a result, there is no excuse: we must encrypt everything... or almost: databases, S3 buckets, communications between instances, including log shipping, since the information could be exploited if intercepted.
Data must be encrypted at rest and in transit, but still think about what actually needs to be protected: for example, we will encrypt the database of a website, but not its logo. Also note that encrypted communication takes longer and therefore has an impact on performance.
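As an example of these default features, here is a sketch that turns on default server-side encryption for an S3 bucket (bucket name and KMS key alias are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and KMS key alias: every new object in this data bucket
# will be encrypted server-side by default.
s3.put_bucket_encryption(
    Bucket="my-app-data",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "alias/app-data-key",
            }
        }]
    },
)
```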
Securing the CI/CD chain internally
While we must obviously secure the front end and the network, we must not forget to secure each of the machines: we have often seen a lot of security on the front end, with insecure machines holding powerful roles behind it. This is a strongly discouraged practice.
Attention: do not focus solely on the external danger and forget the one coming from inside.
In the case of the CI/CD pipeline, for example, the machines that run the pipelines hold strong rights, such as updating a service (ECS, EC2, Lambda, etc.), or hold important information such as database credentials.
It is therefore important to detect any illegitimate action on these machines (spawning a shell, etc.), and just as important to partition projects to avoid information leaking from one project to another, especially when freelance providers have access to the CI/CD.
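One way to partition projects is to scope the rights of each CI runner to its own project's resources. The sketch below assumes resources are named per project (here a hypothetical "shop" project) and uses boto3:

```python
import json
import boto3

iam = boto3.client("iam")

# Sketch of a policy for the CI runner of a single project ("shop"):
# it may update that project's Lambda functions and read that project's secrets,
# and nothing belonging to other projects.
ci_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["lambda:UpdateFunctionCode"],
            "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:shop-*",
        },
        {
            "Effect": "Allow",
            "Action": ["secretsmanager:GetSecretValue"],
            "Resource": "arn:aws:secretsmanager:eu-west-1:123456789012:secret:shop/*",
        },
    ],
}

iam.create_policy(PolicyName="ci-shop-deploy", PolicyDocument=json.dumps(ci_policy))
```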
This problem did not arise before CI/CD became widespread: the Ops team handled the move to production. In this new paradigm, the challenge of cloud security is to segment applications and adapt to the new development and deployment methods. It also requires supporting a change of mindset, since the tendency to "do as before" remains very strong.
The case of new applications
For cloud-native applications, security is included when designing the application. It must also be considered throughout the application's life cycle, including in the CI/CD, where putting passwords in clear text must be avoided.
Among the important points to consider:
- Identify the ports used by the application (internal/external), the applications with which it interacts, and choose which ports and IPs to open accordingly.
- Configure the load balancer as a first filter on ports and URLs.
- Precisely define the roles to assign to each type of machine, and associate them with standard predefined security parameters.
- Use a secrets manager to handle passwords and tokens and avoid exposing sensitive information. Developers can then deploy to production without knowing the database passwords. In the same way, it is possible to generate time-limited tokens, valid only for the duration of the deployment: if a token is compromised, it only lives for a very limited time (a minimal sketch follows this list).
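Here is the sketch announced above: fetching a database secret at deploy time and obtaining short-lived credentials for the deployment, with hypothetical secret and role names (AWS Secrets Manager and STS are used here as one example of a secrets manager and token service).

```python
import boto3

# The pipeline fetches the database credentials at deploy time from the secrets manager
# (secret name hypothetical), so developers never see them.
secrets = boto3.client("secretsmanager")
db_secret = secrets.get_secret_value(SecretId="prod/shop/db")["SecretString"]

# Short-lived credentials for the deployment itself, valid 15 minutes,
# assuming a dedicated deploy role exists.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/shop-deploy",
    RoleSessionName="ci-deploy",
    DurationSeconds=900,
)["Credentials"]
# creds["AccessKeyId"], creds["SecretAccessKey"] and creds["SessionToken"] expire automatically.
```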
The controls
In a relatively linear development environment that stretches over several months, the general trend is to position security checkpoints at a single point in time, at the end of the phase (before production), and often manually. In an agile environment that relies on a CI/CD chain, this is no longer possible.
General control rules must be defined, whatever the project, supplemented where necessary with rules specific to each project, with warning thresholds set according to the criticality of the environment and the application. These controls apply to the cloud infrastructure, the CI/CD chain and the developed code alike.
Attention: set up controls that are either continuous or in "spot check" mode, and automate monitoring, alerting and, where relevant, remediation. As a first step, use the compliance features built into the cloud environment, before acquiring or developing in-house control tools with wider coverage.
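As an example of such built-in compliance features, the sketch below enables an AWS-managed Config rule (rule identifier assumed) that continuously checks that SSH is not open to the whole Internet:

```python
import boto3

config = boto3.client("config")

# Sketch: enable an AWS-managed compliance rule (identifier assumed) that continuously
# checks that no security group leaves SSH open to the world.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ssh-restricted",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "INCOMING_SSH_DISABLED",
        },
    }
)
```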
Cooperation and training for security
With the rise of the cloud, the main principles of security have not fundamentally changed, but they now apply differently. Security, which used to be the business of a few specialists and was concentrated at single points of the infrastructure, has become distributed.
Identifying and deploying security measures and carrying out controls are no longer the preserve of experts. Security specialists will still be needed, but there has been a transfer of skills to DevOps teams: some security tasks must now be handled by them. This requires training security teams in the cloud and DevOps, and vice versa, so that best practices spread within teams, security teams understand the needs and ways of working of IT teams, and the latter can integrate security concepts as early as possible, in partnership with the security teams.