Should your organization move your database to the cloud? How well can cloud security concerns be mitigated?

Organizations often use on-premise databases to store sensitive data, as these are perceived to be more secure than databases hosted at a third-party data center or in the cloud.

However, a recent study by Imperva shows that nearly half (46%) of all on-premise databases globally are vulnerable to attack, with countries such as Australia (65%) and Singapore (64%) showing a much higher incidence of insecure databases.

With an increase in security vulnerabilities found in on-premise databases, should organizations migrate their core systems to the cloud? 

CybersecAsia tapped the experience and expertise of Dave Page, Vice President and Chief Architect, Database Infrastructure, EDB, for some answers.

What are the prerequisites for a secure database?

Page: Above all else, do not ever expose a database port directly to the internet. Doing so is almost never necessary and gives an attacker direct access to your most critical asset.

Assuming database access is limited to only the required application servers within an environment (such as a virtual private cloud), make sure you have designed your applications to make full use of the security features offered by your database server.

With PostgreSQL, these will include:

  • Choose a suitable authentication method. Certificate-based authentication is a good option for an application server.
  • Limit access via each authentication method to only the IP addresses from which it is expected. For example, database administrators (DBAs) might log in using Kerberos authentication; configure the server to allow Kerberos-authenticated connections only from the machines DBAs will connect from, and certificate-authenticated connections only from your application servers.
  • Make use of roles within the database, and limit the capabilities of each login role to the minimum required. For example, a group role can be used to define the minimal permissions DBAs need, and the login role for each DBA can be made a member of that group (see the sketch after this list).
  • Don’t ever share login roles. Each DBA or application should connect using their own role.
  • Make sure that application roles cannot modify the database schema in any way, unless absolutely necessary.
  • Make use of access control lists (ACLs) to minimise the changes each role can make. For example, an application role should probably never have “update” or “delete” permission on an audit table.
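
The following is a minimal SQL sketch of the role and permission points above, assuming PostgreSQL; all role and table names are hypothetical. The per-address restrictions described in the second bullet are configured separately, in pg_hba.conf.

```sql
-- Minimal sketch of role separation and least-privilege grants.
-- All role and table names here are hypothetical.

-- A group role holding the minimal permissions DBAs need.
CREATE ROLE dba_group NOLOGIN;

-- Individual login roles; never share a single login between people.
CREATE ROLE alice LOGIN IN ROLE dba_group;
CREATE ROLE bob   LOGIN IN ROLE dba_group;

-- A login role for the application, with no ability to modify the schema.
CREATE ROLE app_svc LOGIN;
REVOKE CREATE ON SCHEMA public FROM PUBLIC;

-- The application may read and write its own tables...
GRANT SELECT, INSERT, UPDATE, DELETE ON orders, customers TO app_svc;

-- ...but may only append to the audit table, never update or delete from it.
GRANT INSERT ON audit_log TO app_svc;
```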

There are many other options available as well, such as row-level security and SE-PostgreSQL (the sepgsql extension), which integrates with Security-Enhanced Linux (SELinux).
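
As a rough illustration of row-level security (the table, role, and setting names below are hypothetical), a policy can restrict which rows a role may see; the application would set app.tenant_id on each connection:

```sql
-- Hypothetical multi-tenant table: the application role may only read
-- rows belonging to its own tenant.
ALTER TABLE customer_orders ENABLE ROW LEVEL SECURITY;

CREATE POLICY tenant_isolation ON customer_orders
    FOR SELECT
    TO app_svc
    USING (tenant_id = current_setting('app.tenant_id')::int);
```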

The key prerequisite is to understand all the options available, and make use of them to enforce the principle of least privilege. This means only allowing each incoming network connection or database role to access or modify the data that is required for proper operation of the system, and nothing more.

It’s also worth mentioning development practices. While it is extremely important to ensure database access is minimized as much as possible, it is also essential to ensure that the application itself doesn’t inadvertently expose data or the ability to modify the database in undesirable ways.

Ensure your developers follow good practice; enforce the use of parameterised queries, for example, to prevent SQL injection attacks. Ensure that changes to the code are peer-reviewed to minimise bugs that might allow an attacker to gain information about the application or access data they should not. Ensure that developers have a good understanding of secure coding principles, and follow them at all times.
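
To illustrate the idea behind parameterised queries (table and column names here are hypothetical), the sketch below uses PostgreSQL's own PREPARE/EXECUTE; application drivers expose the same mechanism through bind parameters, so user input is passed as data rather than spliced into the SQL text:

```sql
-- The query structure is fixed; $1 is a placeholder for the supplied value.
PREPARE find_customer (text) AS
    SELECT id, name, email
    FROM customers
    WHERE email = $1;

-- The argument is always treated as data, never interpreted as SQL.
EXECUTE find_customer('alice@example.com');
```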

On-premise vs cloud database infrastructures – which is more secure?

Page: Most cloud environments have the potential to be far more secure than on-premise deployments, because there are various standards and accreditations for security, which are often not undertaken for on-premise data centers.

The major cloud providers take physical security very seriously, typically undergoing comprehensive audits such as System and Organization Controls (SOC) 1, 2 and 3, which is not always the case with private data centers. However, whilst you can be assured of the physical security of your data in the cloud (having, of course, reviewed your chosen provider’s audit reports), you still need to ensure the non-physical security of the systems and services in use.

While there are parallels to running your own network and data center, cloud providers offer a myriad of options for managing the security of the services you choose to run. It is essential that organizations ensure their staff are proficient in the proper use and configuration of these services, rather than relying on prior knowledge of traditional infrastructure.

For example, while a traditional deployment might employ a private network, DMZ (or demilitarized zone), and public network for segregating services, with firewall rules to ensure that only permitted traffic flows between those networks, in a cloud environment you might make use of multiple subnets within a virtual private cloud (VPC).

Use of services such as elastic IP addresses, load balancers, security groups for hosts, security groups for VPCs, and more, may affect how traffic is allowed or prevented from reaching servers hosting data. It is critical to understand how these services work and how they should be configured to implement “the principle of least privilege”.

This is just one example – other services, such as object storage, may not have been present in a self-hosted infrastructure, so they will have to be learned from scratch, including the finer details of how to secure them.

Assuming that cloud services are properly understood and deployments are appropriately designed to take advantage of the security features offered, I don’t believe there is any reason to think that a cloud deployment of an internet-exposed application need be any less secure than an on-premise deployment of the same application at the non-physical level.

It is worth noting that, in my experience, the weakest link is typically in the application code itself. Developers are human and make mistakes. Precautions such as code review and standards for the way databases are accessed for example, are essential, whether in the cloud or on-premise.

What should organizations bear in mind when migrating their core systems to the cloud?

Page: Migration of existing systems to the cloud can happen in a variety of ways. At the simplest, an on-premise application hosted on bare metal or a virtual machine can simply be moved to a virtual server in the cloud. This is typically easy to do, and may be appropriate for smaller internal systems, provided proper precautions are taken to ensure the virtual server remains secure. This could be achieved, for example, through the use of appropriate security groups to limit access, and hosting it within a virtual private cloud, access to which is limited to a virtual private network (VPN) connection from the corporate network.

At the other end of the spectrum, a full application redesign may be required to fully make use of the services offered by your cloud provider. This may involve, for example, redesigning a web application previously hosted on a single server to run on multiple servers with a load balancer, giving the ability to have auto-scaling and auto-healing in the event that one of those servers fails. Doing this may involve redesigning the way the application implements session and database connection management to ensure that HTTP requests can be properly handled if they get routed to different servers. It may also be desirable to break up complex applications into microservices.

A more complex application redesign like this may lead to a complex virtual infrastructure. Consider using tools such as Terraform and Ansible to define and manage that infrastructure, rather than trying to do it manually, which can be much more difficult. Doing so also gives the added advantage that development and QA environments can easily be created and destroyed as needed.

Above all else, ensure that staff are properly trained in the features offered by your cloud vendor and are well positioned to plan the migration to make the most effective use of the services available. This will ensure that applications have the availability and resiliency required, and are easy to maintain and develop further to meet new and changing business needs.

What are some key considerations for organizations to ensure that their cloud systems stay secure, available and reliable?

Page: Organizations need to train their staff and re-architect their existing applications to properly harness the cloud. In my experience, ensuring staff are properly trained is the biggest challenge when migrating to the cloud.

It’s very easy to start small with cloud services and see organic growth over time to significant amounts of usage. Often, users self-train. They start with small projects or development work because the cloud makes it easy, but the nature of what they’re doing doesn’t necessarily require them to pay a great deal of attention to security.

Then when larger projects begin, which will include hosting critical data or infrastructure in the cloud, the security aspects end up becoming a secondary concern or are simply not properly considered or configured due to the lack of proper training. That’s not to say that every self-taught user will be lax on security, but it is essential to ensure that all users have appropriate training to ensure they know how to configure the services they use in a secure manner, and that they keep security considerations foremost when designing their deployments.

Whether the platform is AWS, Google Cloud, Azure, or another cloud, staff will have to learn how security is implemented and managed in that environment, as opposed to the way it was previously done. For instance: how to manage firewalls, define access control, set up VPNs, and so on.

To take full advantage of the cloud, organizations must also cast aside the old on-premises mindset, understand and consider the different options for running each application in the cloud, and re-architect it in a way that is truly cloud-native.

Organizations can develop enterprise applications to run on more than one cloud provider, or to leverage multiple availability zones and regions within a single provider. The former offers the added advantage of not being reliant on a single cloud provider, though it is far more challenging technically, and more expensive.

It can be really difficult to build a truly cloud-agnostic application that also makes use of all the facilities available on AWS, Google Cloud, or Azure. If you look beyond the virtual machines and storage systems to things like load balancers, serverless functions, and machine learning offerings, these are all quite different in the way they are implemented on each of the providers.

So having one application that can run anywhere and fully make use of cloud-native functionality is really hard. Redundancy of providers increases the technical challenge significantly, which makes it much more expensive. In most cases, using cloud availability zones, or independent segments of a single cloud provider, is far more economical.

Open source databases can provide considerable technical and security advantages as compared to closed source databases. The open source code is almost certainly subjected to far greater scrutiny than closed source products, because anyone can inspect it. This increases (but does not guarantee) the chances that the code is well written and secure. Moreover, there is less “lock in” – you’re not dependent on any one company to support your databases.

For instance, the PostgreSQL open source project is a truly independent project, with numerous companies around the world working on features and providing support and services, all of which have access to the source code and can support it and help users just as well as any other company.

With proprietary databases, typically only the vendor has any access to the source code, which means that only they can fix bugs or truly understand what’s happening “under the hood”. With an open source database there is no limit to what you can do, whether by building or using extensions such as foreign data wrappers, or even rewriting parts of the database server to meet your specific needs.
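
As a rough sketch of what such an extension looks like in practice (all connection details and names below are hypothetical), postgres_fdw lets one PostgreSQL server query tables that live in another:

```sql
-- Query a table in a separate reporting database through a foreign data wrapper.
CREATE EXTENSION postgres_fdw;

CREATE SERVER reporting_db
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'reporting.internal', dbname 'reports');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER reporting_db
    OPTIONS (user 'report_reader', password 'example-only');

CREATE FOREIGN TABLE monthly_totals (
    month date,
    total numeric
) SERVER reporting_db OPTIONS (table_name 'monthly_totals');

-- Queries against the foreign table are executed on the remote server.
SELECT * FROM monthly_totals WHERE month >= date '2021-01-01';
```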