Top Four Reasons for Data Loss from Databases (and what to do about it)

By Bob Bentley, Imperva

The need for data loss prevention (DLP) is well understood by IT security practitioners. As organizations embrace cloud-based managed database services such as Amazon RDS and Amazon Redshift, the risk of data loss doesn’t go away, and in many ways it becomes more serious. Although AWS takes the security of its infrastructure very seriously, it is up to individual customers to secure their own data and access to it.

Usually, the term DLP brings to mind protecting files containing confidential or proprietary information such as contracts, product designs, and internal financial analyses. Protecting this unstructured data stored in file systems is a critical security and compliance objective.

However, for a complete DLP strategy, it’s equally important to protect sensitive structured information stored in databases, such as personally identifiable information (PII), credit card information, customer data, and medical records. Databases are arguably the most valuable targets to attack because they hold so much critical information concentrated in a single repository.

Data loss incidents can often be traced to attacks from outside the organization – most often when criminals obtain legitimate users’ logins through phishing attacks. These can wreak havoc, as we recently witnessed with the Colonial Pipeline attack. Data loss due to insider attacks by current or former employees, contractors, or business partners may be less dramatic, but it is no less devastating. Just ask Facebook, Marriott, Coca-Cola, Tesla, or Microsoft.

Here are four of the most common danger areas for data loss from databases (not including well-understood IT basics such as ongoing maintenance and upgrading/patching the infrastructure and the databases themselves).

1. Database misconfiguration

Databases with a weak security posture are shockingly prevalent; they provide a bonanza for attackers and pose huge risks for organizations.

What should you do?

  • Make sure your databases are accessible only to users with appropriate credentials and from expected locations (see the posture-check sketch after this list).
  • Change the default access credentials.
  • Encrypt the data.
  • Set up a good backup plan (including protection of backups).
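As a concrete starting point, here is a minimal sketch of the kind of check worth automating. It assumes Python with the boto3 AWS SDK and credentials that allow rds:DescribeDBInstances; the “default-looking username” list is only an illustrative heuristic, not a complete posture assessment.

```python
# Sketch: flag common Amazon RDS misconfigurations.
# Assumes boto3 and AWS credentials permitted to call rds:DescribeDBInstances.
import boto3

rds = boto3.client("rds")

for page in rds.get_paginator("describe_db_instances").paginate():
    for db in page["DBInstances"]:
        name = db["DBInstanceIdentifier"]
        findings = []
        if db.get("PubliclyAccessible"):
            findings.append("publicly accessible")
        if not db.get("StorageEncrypted"):
            findings.append("storage not encrypted")
        if db.get("BackupRetentionPeriod", 0) == 0:
            findings.append("automated backups disabled")
        # Heuristic only: master usernames that look like unchanged defaults.
        if db.get("MasterUsername") in ("admin", "postgres", "root", "awsuser"):
            findings.append("default-looking master username")
        if findings:
            print(f"{name}: {', '.join(findings)}")
```

Running a script like this on a schedule turns the checklist above into something your team can review in minutes rather than hours.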

2. Inappropriate user access privileges

Another major source of data loss is improper user access privileges. High-privilege accounts have dangerous power that can be misused to steal data, and they are a vector for both insider and external attacks, so it is extremely important to account for and keep tight control over every account with elevated privileges.

What should you do?

  • Review access privileges regularly to ensure they are accurate and appropriate.
  • Keep a close watch on user accounts that have high-privilege access (see the sketch after this list).
  • Review user accounts regularly and disable unused service accounts and orphaned accounts.
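For example, on a PostgreSQL-compatible engine (such as Amazon RDS for PostgreSQL) you might periodically enumerate roles with elevated privileges so they can be reviewed against what’s actually needed. A minimal sketch, assuming the psycopg2 driver; the endpoint, auditing account, and environment variable are placeholders, not real values.

```python
# Sketch: list database roles with elevated privileges for periodic review.
# Assumes a PostgreSQL-compatible engine and the psycopg2 driver; the
# connection details below are placeholders.
import os
import psycopg2

conn = psycopg2.connect(
    host="mydb.example.rds.amazonaws.com",   # placeholder endpoint
    dbname="postgres",
    user="security_auditor",                 # placeholder read-only account
    password=os.environ["DB_AUDIT_PASSWORD"],  # placeholder env variable
)

with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT rolname, rolsuper, rolcreaterole, rolcreatedb, rolcanlogin
        FROM pg_roles
        WHERE rolsuper OR rolcreaterole OR rolcreatedb
        ORDER BY rolname;
        """
    )
    for name, is_super, can_create_role, can_create_db, can_login in cur.fetchall():
        print(f"{name}: super={is_super} createrole={can_create_role} "
              f"createdb={can_create_db} login={can_login}")
```

The same idea applies to other engines; only the system catalog or privilege query changes.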

3. Incomplete data inventory

Security teams must know where sensitive data is stored to have any chance of protecting it. An organization’s IT landscape is constantly changing, especially given the trend toward cloud-based managed database services.

What should you do?

  • Establish a data inventory tracking mechanism if you don’t already have one.
  • Regularly look for new or changed data repositories (see the enumeration sketch after this list).
  • Define classes of data by sensitivity level and track where the most sensitive data resides.
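One way to keep the inventory current is to enumerate your managed databases on a schedule and compare the result against what you have already classified. A minimal sketch, assuming boto3, a single AWS region, and permissions to describe RDS instances and Redshift clusters; the diff against your existing inventory is left as a placeholder comment.

```python
# Sketch: enumerate Amazon RDS instances and Amazon Redshift clusters so new
# or changed repositories can be spotted and classified. Assumes boto3 and
# credentials allowing rds:DescribeDBInstances and redshift:DescribeClusters.
import boto3

inventory = []

rds = boto3.client("rds")
for page in rds.get_paginator("describe_db_instances").paginate():
    for db in page["DBInstances"]:
        inventory.append(("rds", db["DBInstanceIdentifier"], db["Engine"]))

redshift = boto3.client("redshift")
for page in redshift.get_paginator("describe_clusters").paginate():
    for cluster in page["Clusters"]:
        inventory.append(("redshift", cluster["ClusterIdentifier"], "redshift"))

for service, identifier, engine in sorted(inventory):
    # In practice, diff this list against your recorded data inventory and
    # trigger discovery/classification for anything new or changed.
    print(f"{service}\t{identifier}\t{engine}")
```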

4. Undetected security incidents

To respond effectively, security teams must be aware of event clues that could indicate a possible data loss incident. Even with the best tools, it’s a daunting task, like finding the proverbial needle in a haystack, and SOC operators also struggle with alert fatigue.

What should you do?

  • Define clear access security policies – who can do what, when, in what way, etc.
  • Set up a mechanism to receive alerts quickly if a security policy is violated, and define a clear resolution process (a simplified policy-check sketch follows this list).
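As a simplified illustration of the idea, the sketch below checks database connection records against an “expected locations and hours” policy and prints an alert for anything outside it. The log records, allowed networks, and business hours are assumptions for the example; in practice you would feed real audit logs or activity-monitoring events into this kind of check and route the alerts into your SOC tooling.

```python
# Sketch: flag database connections that violate a simple access policy
# (unexpected source network, or privileged access outside business hours).
# The log records and allowed networks below are illustrative assumptions.
from datetime import datetime
from ipaddress import ip_address, ip_network

ALLOWED_NETWORKS = [ip_network("10.0.0.0/8"), ip_network("192.168.10.0/24")]
BUSINESS_HOURS = range(7, 20)  # 07:00-19:59 local time

connection_log = [
    {"user": "app_service", "source_ip": "10.0.4.17",
     "time": "2021-06-01T14:03:00", "privileged": False},
    {"user": "dba_admin", "source_ip": "203.0.113.50",
     "time": "2021-06-01T02:41:00", "privileged": True},
]

for event in connection_log:
    source = ip_address(event["source_ip"])
    hour = datetime.fromisoformat(event["time"]).hour
    violations = []
    if not any(source in net for net in ALLOWED_NETWORKS):
        violations.append("connection from unexpected network")
    if event["privileged"] and hour not in BUSINESS_HOURS:
        violations.append("privileged access outside business hours")
    if violations:
        print(f"ALERT {event['user']} @ {event['source_ip']}: {', '.join(violations)}")
```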

Best practices for preventing data loss

You undoubtedly noticed that many of the “should do” tasks listed above place a big strain on your security team. In addition to specialized domain knowledge, the team must devote substantial time and resources to carrying out repetitive, time-consuming routines.

The best way to address these challenges is by using specialized database security tools that can assist your team as much as possible, helping them deal with scale by automating repetitive tasks and supplying industry-leading domain expertise out-of-the-box.

Imperva’s Cloud Data Security (CDS) delivers these benefits and more. Labour-intensive tasks such as discovering Amazon RDS and Amazon Redshift databases and classifying sensitive data are automated and operate continuously. It automatically checks security posture for all your discovered databases and helps guide users in remedying any issues. It also comes out-of-the-box with security policies that will help protect against data loss, notifying you of violations or risky behaviours, especially by high-privilege users.

Explore how Imperva can help your organization protect against data loss with a free 30-day trial.