SQL Server – Making Backup Compression work with Transparent Data Encryption by using Page Compression

15th July 2016

Encrypted data does not compress well, if at all, so the BACKUP WITH COMPRESSION feature is ineffective on a database encrypted with Transparent Data Encryption (TDE). This post describes a method of combining Page Compression with TDE to get the best of both worlds.
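You can check this for yourself from the backup history: on a TDE database backed up WITH COMPRESSION the ratio will typically sit close to 1. A sketch against the standard msdb catalog (the database name is the one used in the example later in this post):

```sql
--	Compare raw vs compressed backup sizes from msdb history;
--	a ratio near 1.0 means compression achieved nothing.
SELECT  database_name,
        backup_finish_date,
        backup_size / 1048576.0            AS backup_size_mb,
        compressed_backup_size / 1048576.0 AS compressed_mb,
        backup_size * 1.0 / NULLIF( compressed_backup_size, 0 ) AS compression_ratio
FROM    msdb.dbo.backupset
WHERE   database_name = N'TEST_PAGECOMP'
ORDER BY backup_finish_date DESC;
```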

The Transparent Data Encryption (TDE) feature encrypts data at rest, i.e. the SQL Server Storage Engine encrypts pages on write to storage and decrypts them on read – data resides in the buffer pool decrypted.

Page compression is a table-level feature that provides page-level dictionary compression plus row/column-level data-type compression. Pages read from storage reside in the buffer pool in their compressed state and are only expanded at the point a query reads them, which gives better memory utilisation and reduces IO.

Be aware that both encryption and Page Compression add to the CPU load on the box; the additional load depends on your access patterns, so you need to test and base your decision on that evidence. Without turning this into a discussion on storage tuning, you tend to find that Page Compression moves a query bottleneck away from storage and onto CPU, simply because less data is transferred to/from storage so latency is dramatically reduced. Don’t be put off if your box regularly consumes large amounts of CPU – it’s better to have a query return in 10 seconds at 100% CPU than in 10 minutes at 10% CPU with storage as the bottleneck!
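Before committing to Page Compression on a table you can estimate the gain with the built-in procedure. A sketch, using the table name from the example later in this post:

```sql
--	Estimate how much space PAGE compression would save on dbo.test
--	without rebuilding anything (current vs estimated sizes are in KB).
EXEC sp_estimate_data_compression_savings
     @schema_name      = N'dbo',
     @object_name      = N'test',
     @index_id         = NULL,   -- all indexes
     @partition_number = NULL,   -- all partitions
     @data_compression = N'PAGE';
```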

Why is data not held encrypted in the Buffer Pool? TDE encrypts the entire page of data at rest on storage and decrypts it on load into memory (https://msdn.microsoft.com/en-us/library/bb934049.aspx). If pages were to reside in the Buffer Pool encrypted, the page header would still need to be held in a decrypted state so that things like checkpoint and inter-page linkage work, so there would be a security risk regardless. Page Compression, by contrast, does not compress the page header, which is why compressed pages can reside in the Buffer Pool in their compressed state (see https://msdn.microsoft.com/en-us/library/cc280464.aspx).

Coupling Page Compression with TDE gives you the benefit of encryption (because that phase is done on write/read to/from storage) and of compression (the page is compressed or decompressed as it passes through the Storage Engine, before the TDE component encrypts it) – basically, your data is always decrypted by the time it comes to compressing/decompressing.

The table below shows a space and timing comparison between normal (no TDE or compression), Page Compression on its own, and TDE coupled with Page Compression. You will see that only the data in the database is compressed – log writes are not compressed, so Page Compression gives only a marginal improvement to the transaction log, but equally TDE has no adverse effect there either.

[Table image: PageCompressionWithTDE – space and timings for no compression, Page Compression only, and TDE with Page Compression]

If you use this approach then compress all the database tables with Page Compression. Also, stop using the COMPRESSION option of BACKUP – that saves resource, because SQL Server won’t be trying to compress something that is already compressed!
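To apply this across a whole database, one approach (a sketch – review the generated statements before running them, since rebuilds are fully logged and can be heavy on a large database) is to generate an ALTER per user table from the catalog views:

```sql
--	Generate (not execute) a PAGE-compression rebuild statement
--	for every user table in the current database.
SELECT 'ALTER TABLE ' + QUOTENAME( s.name ) + '.' + QUOTENAME( t.name )
     + ' REBUILD WITH ( DATA_COMPRESSION = PAGE );'
FROM sys.tables  AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
ORDER BY s.name, t.name;
```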

Example

Prepare the test database; we use FULL recovery to show the performance and space impact on the transaction log:


CREATE DATABASE [TEST_PAGECOMP]
 CONTAINMENT = NONE
 ON  PRIMARY 
( NAME = N'TEST_PAGECOMP', 
  FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL13.S2016\MSSQL\DATA\TEST_PAGECOMP.mdf' , 
  SIZE = 11534336KB , 
  MAXSIZE = UNLIMITED, FILEGROWTH = 65536KB )
 LOG ON 
( NAME = N'TEST_PAGECOMP_log', 
  FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL13.S2016\MSSQL\DATA\TEST_PAGECOMP_log.ldf' , 
  SIZE = 10GB , FILEGROWTH = 65536KB )
GO

alter database TEST_PAGECOMP set recovery FULL
go

backup database TEST_PAGECOMP to disk = 'd:\temp\INITIAL.bak' with init;
go


Test with the SQL Server defaults of no compression and no TDE.

use TEST_PAGECOMP
go


--
--	No compression
--
create table test (
	id int not null identity primary key clustered,

	spacer varchar(1024) not null
) ;

set nocount on;

declare @i int = 0;

begin tran;

while @i <= 5 * 1000000
begin
	insert test ( spacer ) values( replicate( ' a', 512 ) )

	set @i = @i + 1;

	if @i % 1000 = 0
	begin
		commit tran;
		begin tran;

	end
	
end

if @@TRANCOUNT > 0
	commit tran;
go


dbcc sqlperf(logspace) 
--	Size		Percent Full
-- 6407.992		99.49185

backup log TEST_PAGECOMP to disk = 'd:\temp\TEST_PAGECOMP_NOComp.trn'
--Processed 811215 pages for database 'TEST_PAGECOMP', file 'TEST_PAGECOMP_log' on file 3.
--BACKUP LOG successfully processed 811215 pages in 15.170 seconds (417.772 MB/sec).

checkpoint
go

backup database TEST_PAGECOMP to disk = 'd:\temp\TEST_PAGECOMP_NOComp.bak' with init;
go

--Processed 718128 pages for database 'TEST_PAGECOMP', file 'TEST_PAGECOMP' on file 1.
--Processed 4 pages for database 'TEST_PAGECOMP', file 'TEST_PAGECOMP_log' on file 1.
--BACKUP DATABASE successfully processed 718132 pages in 12.157 seconds (461.495 MB/sec).

Test with Page Compression only.

--
--	Page compression uses a per-page dictionary, so this highly
--	repetitive data compresses very strongly
--
drop table test
go

create table test (
	id int not null identity primary key clustered,

	spacer varchar(1024) not null
) with ( data_compression = page );

set nocount on;

declare @i int = 0;

begin tran;

while @i <= 5 * 1000000
begin
	insert test ( spacer ) values( replicate( ' a', 512 ) )

	set @i = @i + 1;

	if @i % 1000 = 0
	begin
		commit tran;
		begin tran;

	end
	
end

if @@TRANCOUNT > 0
	commit tran;
go

backup log TEST_PAGECOMP to disk = 'd:\temp\TEST_PAGECOMP_Comp.trn' with init;
--Processed 706477 pages for database 'TEST_PAGECOMP', file 'TEST_PAGECOMP_log' on file 1.
--BACKUP LOG successfully processed 706477 pages in 12.035 seconds (458.608 MB/sec).

checkpoint
go

backup database TEST_PAGECOMP to disk = 'd:\temp\TEST_PAGECOMP_Comp.bak' with init;
--Processed 9624 pages for database 'TEST_PAGECOMP', file 'TEST_PAGECOMP' on file 1.
--Processed 2 pages for database 'TEST_PAGECOMP', file 'TEST_PAGECOMP_log' on file 1.
--BACKUP DATABASE successfully processed 9626 pages in 0.207 seconds (363.274 MB/sec).

go

Test with both Page Compression and TDE:

drop table test
go

checkpoint
go


USE master;  
GO  
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '**************';  
go  
CREATE CERTIFICATE MyServerCert WITH SUBJECT = 'My DEK Certificate';  
go  

USE TEST_PAGECOMP;  
GO  
CREATE DATABASE ENCRYPTION KEY  
WITH ALGORITHM = AES_128  
ENCRYPTION BY SERVER CERTIFICATE MyServerCert;  
GO  

ALTER DATABASE TEST_PAGECOMP  
SET ENCRYPTION ON;  
GO  

/* The value 3 represents an encrypted state   
   on the database and transaction logs. */  
SELECT *  
FROM sys.dm_database_encryption_keys  
WHERE encryption_state = 3;  
GO  
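One step worth adding to the walkthrough above: back up the certificate and its private key as soon as the database encryption key is created, because without them the database (and all its backups) cannot be restored or attached on another instance. A sketch – the file paths and password are placeholders:

```sql
USE master;
GO
--	Back up the TDE certificate and its private key to files;
--	store these somewhere safe, separate from the database backups.
BACKUP CERTIFICATE MyServerCert
TO FILE = 'd:\temp\MyServerCert.cer'
WITH PRIVATE KEY (
    FILE = 'd:\temp\MyServerCert.pvk',
    ENCRYPTION BY PASSWORD = '**************'
);
GO
```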



--
--	Now repeat the page compression version
--

create table test (
	id int not null identity primary key clustered,

	spacer varchar(1024) not null
) with ( data_compression = page );

set nocount on;

declare @i int = 0;

begin tran;

while @i <= 5 * 1000000
begin
	insert test ( spacer ) values( replicate( ' a', 512 ) )

	set @i = @i + 1;

	if @i % 1000 = 0
	begin
		commit tran;
		begin tran;

	end
	
end

if @@TRANCOUNT > 0
	commit tran;
go

backup log TEST_PAGECOMP to disk = 'd:\temp\TEST_PAGECOMP_CompTDE.trn' with init;
--Processed 722500 pages for database 'TEST_PAGECOMP', file 'TEST_PAGECOMP_log' on file 1.
--BACKUP LOG successfully processed 722500 pages in 12.284 seconds (459.502 MB/sec).

checkpoint
go

backup database TEST_PAGECOMP to disk = 'd:\temp\TEST_PAGECOMP_CompTDE.bak' with init;
--Processed 55296 pages for database 'TEST_PAGECOMP', file 'TEST_PAGECOMP' on file 1.
--Processed 7 pages for database 'TEST_PAGECOMP', file 'TEST_PAGECOMP_log' on file 1.
--BACKUP DATABASE successfully processed 55303 pages in 1.118 seconds (386.450 MB/sec).

go
