Last Updated on May 20, 2015 by John Morehouse

Last week I got into a conversation with a colleague about how SQL Server grows the database.  We were both fairly certain that the database would grow in specific sized increments.  We further assumed that the increment size would match the growth setting on the database.  However, we wanted to see it for ourselves.  Thus an experiment is born!

Hypothesis

When a database is forced to grow within SQL Server, it will do so in predefined increments rather than one large growth.  For example, if a transaction requires 100MB of additional disk space and the growth setting is equal to 1MB, the database would grow 100 times at 1MB each time rather than 100MB for a single time.

To test out the hypothesis we will do the following:

  1. Create a dummy database
  2. Evaluate the current grow setting of the database
  3. Evaluate the current size of the database
  4. Generate a transaction that will force the database to grow
  5. Evaluate the number of growth events
  6. Evaluate the final size of the database
  7. Compare the results

For simplicity, we will only examine the data file of the database.  The log file exhibits the same growth behavior; however, to keep things simple, we will leave it out of the equation.

Let’s begin!

The Experiment

I will first create a database and then a single table.

USE tempdb
GO
-- Drop the demo database if it already exists
IF DB_ID('DBFileGrowthDemo') IS NOT NULL
	BEGIN
		ALTER DATABASE DBFileGrowthDemo SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
		DROP DATABASE DBFileGrowthDemo
	END
GO
CREATE DATABASE DBFileGrowthDemo
GO
USE DBFileGrowthDemo
GO
-- A single NVARCHAR(4000) column: one row will fill an entire 8KB page
CREATE TABLE SizeTest (Name NVARCHAR(4000))
GO

Note that the table has only a single column.  The column was defined as NVARCHAR(4000) simply because I wanted to be able to carefully calculate growth.  If I insert rows where Name is always 4,000 characters (8,000 bytes, since NVARCHAR stores 2 bytes per character), it is guaranteed that only one row will fit on a single page.  Remember that a single page is 8,192 bytes in size.
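If you want to verify the storage size for yourself, DATALENGTH reports the number of bytes a value occupies; here is a quick check you can run anywhere:

-- NVARCHAR stores 2 bytes per character, so 4,000 characters occupy 8,000 bytes,
-- which is the most that fits on one 8,192-byte page
SELECT DATALENGTH(REPLICATE(N'a', 4000)) AS BytesStored; -- returns 8000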

After the database is created, let’s take a look at the growth settings.  This will help us confirm our hypothesis.  We can see below that the data file auto-growth is set to 1MB.

[Screenshot: database properties showing the data file auto-growth setting of 1MB]

You can also see that the data file currently has a size of 2,240KB as shown below.

[Screenshot: database properties showing the data file size of 2,240KB]
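If you prefer a query to the SSMS dialogs, both the size and the growth setting can be read from sys.database_files; note that the catalog view stores both values as counts of 8KB pages:

USE DBFileGrowthDemo;
GO
-- size and growth are stored in 8KB pages; multiply by 8 for KB
SELECT name,
       size * 8 AS size_kb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS VARCHAR(10)) + '%'
            ELSE CAST(growth * 8 AS VARCHAR(10)) + 'KB'
       END AS growth_setting
FROM sys.database_files;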

Assume that we want the data file to grow ten times at 1,024KB (1MB) intervals, or 10MB in total.  Since 1MB is equal to 1,024KB, 10 x 1,024KB = 10,240KB.  This is also equivalent to 10,485,760 bytes, i.e. 10 x 1,024 x 1,024.

If the original size is 2,240KB (as shown above), the math dictates that after growing by 10MB our final data file size would be 10,240KB + 2,240KB = 12,480KB.

If I know that I want to grow by 10,240KB and a single page is 8,192 bytes, then I need to insert 10,485,760B / 8,192B = 1,280 rows in total.  Note that in order to make the math easier I used bytes on both sides of the equation.
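Here is the same arithmetic expressed as a throwaway query, if you would rather have SQL Server check the numbers:

-- 10MB in bytes, divided by the 8,192-byte page size, gives the number of rows
SELECT (10 * 1024 * 1024) / 8192 AS RowsNeeded; -- returns 1280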

Let’s try it and see if the math is correct.

USE DBFileGrowthDemo
GO
-- Capture the current time; we will use it later to filter the growth events
SELECT GETDATE()
GO
-- Each 8,000-byte row fills a page, so 1,280 inserts force 10MB of growth
INSERT SizeTest (Name)
    SELECT REPLICATE('a', 4000)
-- GO <n> runs the preceding batch n times
GO 1280

Notice that the current date and time is returned.  This will be used later to determine how many times the database actually grew.  Now that the rows have been inserted, what is the resulting database size?

[Screenshot: database properties showing the data file size of 12,480KB]

It is shown above that the math is correct.  Inserting 1,280 rows into the table grew the database by 10MB, to the predicted 12,480KB.
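As an extra check on the one-row-per-page assumption, sys.dm_db_partition_stats can compare the table’s row count against its data page count; if the assumption holds, the two numbers come back equal:

-- With one 8,000-byte row per 8,192-byte page, rows and data pages should match
SELECT SUM(row_count) AS rows_in_table,
       SUM(in_row_data_page_count) AS data_pages
FROM sys.dm_db_partition_stats
WHERE object_id = OBJECT_ID('SizeTest');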

Results

What about the original hypothesis?  Did the database indeed grow 10 times?  We can find this out by using the default trace, assuming that it has not yet been turned off.

Note that using the default trace is just one option; we could also use Extended Events.  I do not believe, however, that database growth is tracked by the default system_health Extended Events session, so you would need to configure your own session to capture it.
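For reference, here is a minimal sketch of such a session, assuming SQL Server 2012 or later where the sqlserver.database_file_size_change event is available; the session and target file names are my own:

-- Capture every data and log file size change, instance-wide
CREATE EVENT SESSION TrackFileGrowth ON SERVER
ADD EVENT sqlserver.database_file_size_change
    (ACTION (sqlserver.database_name, sqlserver.sql_text))
ADD TARGET package0.event_file (SET filename = N'TrackFileGrowth');
GO
ALTER EVENT SESSION TrackFileGrowth ON SERVER STATE = START;
GO

For this experiment, though, the default trace already has everything we need.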

DECLARE @file VARCHAR(1000)

-- Locate the current default trace file
SELECT @file = path
FROM sys.traces
WHERE is_default = 1

-- List each auto-growth event; the growth amount is derived from the file's
-- current growth setting in sys.master_files (stored in 8KB pages)
SELECT tg.DatabaseID, tg.EventClass, tg.DatabaseName, tg.FileName, tg.LoginName, tg.StartTime, tg.EndTime,
    CASE
          WHEN mf.is_percent_growth = 1 THEN (CAST(mf.size AS BIGINT) * 8192 / 1048576) * (mf.growth / 100.0)
          ELSE CAST(mf.growth AS BIGINT) * 8192 / 1048576
    END AS 'Growth Rate(MB)'
FROM sys.fn_trace_gettable(@file, 1) tg
       INNER JOIN sys.trace_events te ON tg.EventClass = te.trace_event_id
       INNER JOIN sys.trace_categories tc ON te.category_id = tc.category_id
       INNER JOIN sys.master_files mf ON tg.DatabaseID = mf.database_id AND tg.FileName = mf.name
WHERE te.name IN ('Data File Auto Grow')
	AND tg.FileName = 'DBFileGrowthDemo'
	AND tg.StartTime > '2015-05-09 16:50:50.240'  -- the time stamp from when the rows were first inserted
GO

What did this return?

[Screenshot: query results showing ten Data File Auto Grow events of 1MB each]

10 rows!!  This means that the database did grow in ten 1MB increments, just as we expected!  You can find the above script that I used here.

The hypothesis is confirmed!

Summary

While this experiment was fun to put together, it also showcases how important database growth management is.  The results shown here are just a small example of what really happens.  What if you had a large database with many transactions running through it, triggering many growth events?  If the growth settings are not managed properly, there could be unwarranted overhead placed on the server.

Take a look at the databases within your servers.  The growth settings just might surprise you.
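If you want a starting point for that review, here is a sketch against sys.master_files that flags percent-based growth and small fixed increments; the 64MB threshold is an arbitrary choice of mine:

-- Flag files that grow by a percentage, or by less than 64MB at a time
SELECT DB_NAME(database_id) AS database_name,
       name AS logical_file_name,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS VARCHAR(10)) + '%'
            ELSE CAST(growth * 8 / 1024 AS VARCHAR(10)) + 'MB'
       END AS growth_setting
FROM sys.master_files
WHERE is_percent_growth = 1
   OR (growth > 0 AND growth * 8 / 1024 < 64); -- growth = 0 means autogrowth is disabled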

© 2015, John Morehouse. All rights reserved.
