Quick Script: Dropping Indexes

If Only My Doctor Ordered This…

Sometimes a quick script is just what the doctor ordered.

I recently had the task of packaging up a database to ship offsite.  This database had some size to it: approximately 200 GB across 6 tables.  While looking at the tables, I noticed that although they were relatively narrow, each had roughly 22 million rows.  In my environment that wasn't wild or out of the ordinary; however, the disk space usage did seem a little out of whack given what I was seeing.  So, I set out to see if I could make the database smaller so that it was easier to transport.

I discovered that each table had anywhere from 7 to 15 non-clustered indexes on it.  As we know, a non-clustered index is a partial copy of the table's data, so these were extra bloat that we didn't need.  I also knew that the consumers of the data would pull in every table in its entirety, so I wasn't overly concerned about them getting index scans instead of index seeks.

Each table did, however, have a clustered index, which I wanted to leave in place.  It was only the non-clustered indexes adding extra weight that I didn't need.  So I wrote a quick T-SQL script to help drop that weight.  While this script isn't overly fancy or complex, it turned out to be handy and gave me the ability to manually control dropping all of the non-clustered indexes in the database.

DISCLAIMER: Do not run code you find on the internet in your production environment without testing it very thoroughly.  Period.  

It is worth noting that if an index was created by a PRIMARY KEY or UNIQUE constraint, it cannot be dropped with a simple DROP INDEX statement.  You have to use the 'ALTER TABLE … DROP CONSTRAINT' method to drop the index.  The script I have provided will handle that for you, and it also handles XML indexes and spatial indexes.
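As a quick illustration of the difference (the table and constraint names here are hypothetical):

```sql
-- Fails: an explicit DROP INDEX is not allowed on an index
-- that is enforcing a PRIMARY KEY or UNIQUE constraint
DROP INDEX UQ_Orders_OrderNumber ON dbo.Orders;

-- Dropping the constraint removes its backing index as well
ALTER TABLE dbo.Orders DROP CONSTRAINT UQ_Orders_OrderNumber;
```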

For a given database, it generates the 'DROP' statements for all of the non-clustered indexes.  It's that simple.  It also includes some additional metadata about each index, such as the schema.  Once the query completes, all you have to do is copy out the command in the first column, labeled 'Drop Command'.  Paste it into another SSMS window and execute it as you wish, either all at once or line by line.
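The full script lives in the post's download, but a minimal sketch of the same idea, built on the `sys.indexes`, `sys.tables`, and `sys.schemas` catalog views (column output simplified, edge cases ignored), could look something like this:

```sql
SELECT
    CASE
        -- constraint-backed indexes need ALTER TABLE ... DROP CONSTRAINT
        WHEN i.is_primary_key = 1 OR i.is_unique_constraint = 1
            THEN 'ALTER TABLE ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name)
                 + ' DROP CONSTRAINT ' + QUOTENAME(i.name) + ';'
        -- everything else (including XML and spatial) takes a plain DROP INDEX
        ELSE 'DROP INDEX ' + QUOTENAME(i.name) + ' ON '
             + QUOTENAME(s.name) + '.' + QUOTENAME(t.name) + ';'
    END AS [Drop Command],
    s.name AS [Schema],
    t.name AS [Table],
    i.name AS [Index],
    i.type_desc AS [Index Type]
FROM sys.indexes i
    INNER JOIN sys.tables t ON i.object_id = t.object_id
    INNER JOIN sys.schemas s ON t.schema_id = s.schema_id
WHERE i.type > 1;  -- type 0 = heap, type 1 = clustered; keep both
```

Nothing here executes a drop; the query only generates the statements so you stay in manual control of what actually runs.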

Of course, I'd be remiss if I didn't say it again: please DO NOT execute the resulting commands against a production database.  They could have a severe impact on your production systems.

Yes, I could probably have written a PoSH script to do this work.  Well, OK.  I could have asked my good friend Mike Fal (B|T) to write a PoSH script for me.  He's a wizard at it.  Yes, I could also have written a cursor/loop to do the work for me, but I didn't want to.  I like quick and dirty scripts such as this one; they let me do what I need to do with a relatively small amount of work.  This script helped get the database size down significantly (of course I had to shrink afterward *GASP*), and the database was much easier to transport once that weight was dropped.  The consumers of the data were also very happy, as this helped them save space on their servers.




© 2015, John Morehouse. All rights reserved.

4 Replies to “Quick Script: Dropping Indexes”

  1. Thanks for the script. Note that it attempts to drop unique constraints on FILESTREAM-enabled tables, which is not permitted. I added the following:

    INNER JOIN sys.tables st ON si.object_id = st.object_id

    AND st.filestream_data_space_id IS NULL -- under the WHERE clause

    Also, I removed the ORDER BY on KeyConstraintID, as it was throwing an error.

  2. John – thank you very much for the FILESTREAM find. I didn't even think about that one. I've updated the script to reflect the changes that you made. Hope it worked well for you! Thanks for visiting!

  3. You're welcome. Don't forget to add the part for the WHERE clause, otherwise the FILESTREAM tables won't get filtered out of the query. 🙂
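Putting the commenter's two fragments in place (using the `si`/`st` aliases from the comment; the surrounding query is paraphrased, not the original script), the FILESTREAM fix sits roughly here:

```sql
FROM sys.indexes si
    INNER JOIN sys.tables st ON si.object_id = st.object_id
WHERE si.type > 1
    AND st.filestream_data_space_id IS NULL  -- skip FILESTREAM-enabled tables
```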
