As well as automatically scanning columns within databases and using intelligent rules to recommend how they should be classified, SQL Data Catalog v2 auto-generates static data masking sets from the classification metadata, which can then be used to protect those databases.
This is a timely move because many organizations, such as those in the financial services and healthcare sectors, are now legally obliged to ensure that all sensitive or personal data is identified and either removed or protected before databases are made available for development, testing, analysis, training or other activities.
This is not a one-off exercise but an ongoing effort that requires a continuous approach to data protection, typically involving three steps. First, organizations need a data protection plan that identifies and classifies which databases hold data that needs protecting, and how it should be protected. They then need to implement the plan in a way that guarantees sensitive and personal data is always removed or obfuscated, by a method such as masking, in any database copies used outside secure production environments. Thirdly, the plan has to be maintained on a rolling basis as databases and their data expand and change.
As Bloor states in its Sensitive Data Management Spotlight paper: “This must all be done continuously. When new data enters your system, you should be automatically determining if it is sensitive, anonymising it if it is, and applying access rules as appropriate. This is most easily done via (automated) policy management, thus allowing you to manage incoming sensitive data indefinitely and thereby make sure that your organisation doesn’t relapse into noncompliance.”
This is a big challenge for many organizations, reflected in the scale of the research, resources, planning and time it requires. It’s a hard process to get right first time and impossible, without automation, to get it right every time data is created or refreshed in a non-production system.
SQL Data Catalog v2 marks a step change in this process by significantly reducing the time it takes to go from identification and classification to protection, and making maintenance far simpler. When connected to a SQL Server instance, it automatically examines both the schema and data of each database to determine where personal or sensitive data is stored.
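To make the scanning step concrete, the sketch below shows the kind of name-based check an automated scanner applies to column metadata. The table names, column names and patterns are hypothetical, and real tools such as SQL Data Catalog combine checks like these with data sampling and type analysis; this is only an illustration of the principle.

```python
import re

# Hypothetical column metadata as (table, column, sql_type) tuples.
# In practice this would come from querying the database's
# INFORMATION_SCHEMA.COLUMNS view.
columns = [
    ("dbo.Customers", "Email", "nvarchar"),
    ("dbo.Customers", "DateOfBirth", "date"),
    ("dbo.Orders", "OrderTotal", "money"),
]

# Simple name-matching rules of the kind an automated scanner uses
# to suggest classifications (illustrative patterns, not a real rule set).
SENSITIVE_PATTERNS = {
    "Email address": re.compile(r"e[-_]?mail", re.IGNORECASE),
    "Date of birth": re.compile(r"dob|date[-_]?of[-_]?birth|birth", re.IGNORECASE),
}

def classify(cols):
    """Return (table, column, classification) for every matching rule."""
    findings = []
    for table, column, sql_type in cols:
        for label, pattern in SENSITIVE_PATTERNS.items():
            if pattern.search(column):
                findings.append((table, column, label))
    return findings

for table, column, label in classify(columns):
    print(f"{table}.{column} -> {label}")
```

Here `dbo.Customers.Email` and `dbo.Customers.DateOfBirth` are flagged as candidates for protection, while `dbo.Orders.OrderTotal` matches no rule and is left alone.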
An extensive set of built-in classification rules, which can be customized to align with particular regulatory requirements, then speeds up data classification by making automatic suggestions based on the automated scan. This identifies which columns need to be masked, either manually or with a tool like Redgate Data Masker, which can sanitize the data using the auto-generated data masking sets provided.
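The step from classification to protection can be pictured as a simple mapping from classification labels to masking rules. The rule descriptions and data structures below are hypothetical, not Redgate's actual masking-set format; they only illustrate how classification metadata can drive the generation of a static masking set.

```python
# Hypothetical mapping from a classification label to a masking rule
# (illustrative only -- not the format used by Data Masker).
MASKING_RULES = {
    "Email address": "substitute with a randomised but valid email",
    "Date of birth": "shuffle dates within the column",
}

# Classification metadata of the kind produced by an automated scan.
classified = [
    ("dbo.Customers", "Email", "Email address"),
    ("dbo.Customers", "DateOfBirth", "Date of birth"),
]

def build_masking_set(classified_columns):
    """Turn classification metadata into a list of masking instructions."""
    return [
        {"table": table, "column": column, "rule": MASKING_RULES[label]}
        for table, column, label in classified_columns
        if label in MASKING_RULES
    ]

for entry in build_masking_set(classified):
    print(f"{entry['table']}.{entry['column']}: {entry['rule']}")
```

Because the masking set is derived from the classification metadata rather than written by hand, re-running the generation step after the classifications change keeps the two in sync, which is the basis of the maintenance step described above.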
Importantly, as databases are added, and existing databases modified, the data classifications are automatically maintained in SQL Data Catalog, and the data masking sets it creates can be updated on demand.
This policy-driven approach enables organizations to streamline their data management processes by automating the maintenance of their security posture. It protects their sensitive data, puts an auditable workflow in place, and ensures they stay compliant with regulatory requirements.