Quick Script: Finding Database Owners & Security Admins

[Image: locked lock, with the caption "Security Is Everybody's Responsibility" (http://www.flickr.com/photos/mag3737/1420554128/)]

Recently, while diagnosing a permissions issue on a Production database, I realized that I didn’t really have any idea who had elevated permissions to any of the databases on the instance. In particular, I was interested in who the members of two specific database roles were, if any: database owner and security admin.

Database Roles

Let’s start with the Database Owner (db_owner) role. Members of this role can do pretty much whatever they want to the database itself as well as all of the contents within it: they can drop tables, read data, add views, delete data, or even drop the entire database, to name just a few things. It is a definite security concern, so be aware of it and grant it minimally.

Security Admin (db_securityadmin) is another database role that you should be cautious of.  Members of this role have the ability to elevate themselves or others to the database owner role.  Given that the database owner can do essentially anything to the database, you definitely want to verify who is listed as a Security Admin.
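For a single database, the check boils down to joining sys.database_role_members back to sys.database_principals twice: once for the member and once for the role. Here is a minimal sketch of the query that the full script below runs against each database:

-- List members of db_owner and db_securityadmin in the current database
SELECT dp.name AS principalName
     , dp.type_desc AS principalType
     , dpr.name AS roleName
FROM sys.database_role_members drm
     INNER JOIN sys.database_principals dp ON drm.member_principal_id = dp.principal_id
     INNER JOIN sys.database_principals dpr ON drm.role_principal_id = dpr.principal_id
WHERE dpr.name IN ('db_owner', 'db_securityadmin')
     AND dp.name <> 'dbo'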

Use the script below to discover, across every database on the instance, which accounts have been granted these roles.

DISCLAIMER:  While I’ve been able to run it across multiple servers, over several hundred databases without an issue, your mileage will vary.  Please test accordingly.   NEVER run code that you find on the internet against your Production systems without testing first.  You have been warned!

/***************************************************************
-- Author: John Morehouse
-- Date: April 2015
-- T: @SQLRUS
-- E: john@jmorehouse.com
-- B: http://sqlrus.com

--THIS CODE AND INFORMATION ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE.

--IN OTHER WORDS: USE AT YOUR OWN RISK.

--AUTHOR ASSUMES ZERO LIABILITY OR RESPONSIBILITY.

--You may alter this code for your own purposes.
--You may republish altered code as long as you give due credit.
***************************************************************/
DECLARE @dbs AS TABLE ( name sysname, processed BIT)
DECLARE @x INT = 1
DECLARE @sql VARCHAR(2000)
DECLARE @dbName sysname -- sysname avoids truncating long database names

IF OBJECT_ID('tempdb..#results') IS NOT NULL
BEGIN
    DROP TABLE #results
END

CREATE TABLE #results ( dbName sysname
                        , principalName VARCHAR(100)
                        , principalType VARCHAR(100)
                        , RoleMembership VARCHAR(100)
                        , defaultSchema VARCHAR(100)
                        , principalSID VARBINARY(85)
                       )

INSERT INTO @dbs ( name, processed )
    SELECT name, 0 FROM sys.databases
    WHERE database_id > 6
        AND [state] = 0 --online

-- The count of unprocessed rows shrinks each pass, so the loop ends once every database is flagged
WHILE @x <= (SELECT COUNT(1) FROM @dbs WHERE processed = 0)
BEGIN
    SELECT TOP 1 @dbName = name FROM @dbs WHERE processed = 0
    SET @sql =
        'INSERT #results (dbName, principalName, principalType, RoleMembership, defaultSchema, principalSID)
            SELECT ''' + @dbName + ''', dp.name, dp.type_desc, dpr.name, dp.default_schema_name, dp.sid
            FROM [' + @dbName + '].sys.database_role_members drm
                INNER JOIN [' + @dbName + '].sys.database_principals dp ON drm.member_principal_id = dp.principal_id
                INNER JOIN [' + @dbName + '].sys.database_principals dpr ON drm.role_principal_id = dpr.principal_id
            WHERE dpr.name IN (''db_owner'', ''db_securityadmin'')
                AND dp.name <> ''dbo'''
    BEGIN TRY
        EXEC (@sql)
    END TRY
    BEGIN CATCH
        SELECT ERROR_LINE() AS 'Error Line'
            , ERROR_MESSAGE() AS 'Error Message'
            , ERROR_NUMBER() AS 'Error Number'
            , @dbName AS 'Database'
    END CATCH

    UPDATE @dbs
    SET processed = 1
    WHERE name = @dbName
END
GO
SELECT * FROM #results
GO

Using this script in conjunction with our Central Management Server, I was able to run it on all Production servers.  It quickly allowed me to see where I might have a security concern in each database on each server.

Once I determined which permissions needed to be altered, we could address them accordingly.  Identifying these and making the appropriate changes helped make our production environment that much more secure.
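If you do find an account that shouldn’t be a member, the cleanup is a one-liner per database. Here is a hedged sketch, assuming SQL Server 2012 or later and a hypothetical account name (older versions would use sp_droprolemember instead):

USE [YourDatabase];
GO
-- Remove a hypothetical account from the db_owner role
ALTER ROLE db_owner DROP MEMBER [CORP\SomeUser];
GO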

Enjoy!


2015 Program Committee

Last year, I was very much honored to be selected to deliver a pre-con with my friend and brother Chris Shaw (B|T).   I was both excited and nervous at the same time. Regrettably, there were some rough moments in the days that followed the selection announcements, when people found out that their abstracts weren’t selected.   Comments were made, tweets were tweeted and blog posts were written.   Regardless of the kerfuffle in the blogosphere, the experience was amazing and I’m glad that I had the opportunity.

Be Part of The Process

This year I volunteered to be on the Program Committee for the 2015 PASS Summit.  I thought that being a part of the process would give me better insight into how things really work.  It also gives me a chance to offer my feedback and help improve the process.

I wasn’t sure what to expect, so I told the committee to put me wherever they thought best.   There are a number of things committee members can volunteer for, such as reviewing abstracts, speakers, or slide decks, or helping with the event itself. It really didn’t matter to me which team I was on, as long as I was able to help.

They ended up placing me on an abstract review committee where I have been tasked, along with several others, with reviewing 200+ abstracts for the BI Platform: Architecture, Development & Administration track.  They gave us a deadline of getting all of these reviewed by the end of April.   I like the fact that, with multiple people on the team, each abstract is reviewed by several people, which allows for a fair and decent review process.

As the review process was spinning up, there was a lot of shifting of people in and out of review teams.  For fairness, if you submitted an abstract for the Summit and are on the Program Committee, you are not allowed to be on the review team for the track you submitted to.  So many members, like me, were moved around.  The Program Committee is very keen on keeping the process as clean as possible.  I do think there is some room for improvement in how these shifts are communicated, and since I got involved to help better the process, I sent feedback up to the appropriate people.

Remember, We Are Family

Last year, like in years past, I saw a number of comments made in various media outlets about who was or wasn’t selected to speak.  In reading these comments, I noticed that some came off as potentially very hurtful and damaging to those who were selected.   Given how proudly we tout our SQL Family, I was taken aback.  This year, I ask that our family please take a moment to think about how your message might be conveyed before communicating it.

It has become very clear to me through this Program Committee process that just because you are an MVP, MCM, MD, DDS, MCSE, MCSA, MCP, XYZ or your name is “John Doe”, it does not guarantee you a slot to speak.  The PASS Summit is a community event, and as such all community speakers should be treated identically.

This process also ensures that no one can “own” a particular subject.  If someone else wishes to do a session on “ABC” and you are known as the subject matter expert on “ABC”, tough.  Someone other than you can present on the same topic and/or concepts.  Of course, while they can speak on the same subject, that does not give them the right to plagiarize content from you or from anybody else.

The Program Committee is striving very hard to make the process as fair and balanced as possible.

You Didn’t Make the Cut

If you submitted for the 2015 PASS Summit and your abstract(s) were not accepted, here’s how I would handle this:

  1. Congratulate those that did make it.  They worked hard on their abstracts and it showed.  Take the higher road.
  2. Thank the Program Committee, the review teams and PASS for doing the best that they could.  In my opinion, as far as abstract reviews go, the process is pretty fair and equitable.
  3. Ask for the comments on your abstracts.  Last year, the Program Committee would send you the comments from the abstract review process.  I am hoping that they will do this again for 2015.
  4. If you have feedback, offer it in a constructive manner to the Program Committee.  They cannot fix things they don’t know about.  I would suggest doing this privately at first and then publicly if you think it’s warranted.  Starting a flame war on a blog post or Twitter is NOT the way to handle it.
  5. As Gunny Highway says, “You improvise. You adapt. You overcome”.   Re-work your abstracts, get another set of eyes on them, and submit again for 2016.  If it’s a session that you’ve never presented before, get it out to your local user group or a SQL Saturday.  Those are great avenues to test the waters for a new presentation. If you want someone to review it, feel free to email it to me at john at jmorehouse dot com.  I’d be more than happy to review it.
  6. Let it go.  Life is way too short to dwell on this.   There will be other days and other conferences.  Keep at it.

Summary

I, myself, submitted three abstracts. I am going into this knowing I may or may not get selected. If it turns out that I am, it will be grand and I’ll do my best for the community.  I know there are some really fantastic speakers out there who also submitted, and it will not be the end of the world if I am not chosen.  Tomorrow will come and the sun will rise.

If and when a rejection does come to PASS (pun intended), I’ll publicly offer up the comments from the review committee on my blog.  Then I will work on improving because whether or not I’m selected does NOT change my passion for teaching and giving back.

I, like you, trust the process to be fair and honest.  I put my faith into the reviewers to do the best that they can.

You should do the same.


T-SQL Tuesday #65 – Failing Over with PowerShell

[Image: T-SQL Tuesday logo]

This month’s T-SQL Tuesday is being hosted by Mike Donnelly (B|T).  The topic of choice is a fantastic one: it’s simply to teach something new.

So let’s do this.

Our current Windows patching process is manual.  On a given weekend every month, we log into the inactive node of the production cluster, install the Windows patches, and then fail over the cluster, making the newly patched server the active node.

With any type of manual process, I always look for ways to automate it and make life easier.  In this case I not only wanted to automate it, but also take this off my plate. So I spoke with the Server team, and they agreed to take over the task, but only if we automated deployment of the patches. They also requested that we first ensure that the inactive node was the same server for every patching cycle.  That’s where PowerShell came in: I used cmdlets to perform the needed actions on the cluster.

I envisioned the script working like this:

  1. Prior to patching, the script would execute.
  2. The script would determine the owner of an existing cluster resource.  The owner represents which server the resource is residing on.
  3. If the resource owner isn’t correct, fail over the cluster automatically.  This would put the resource on the correct node of the cluster.
  4. The now correct node is subsequently patched.
  5. The script is called again after patching to move the active node onto the newly patched server.

Sounds straightforward, right? Let’s hope so.

A quick Google search turned up a number of PowerShell cmdlets for managing a Windows cluster.   You can find them here.  After looking at the list, I figured that I needed to look at:

Get-ClusterGroup

Microsoft tells us that this cmdlet is used to “Get information about one or more clustered services or applications (resource groups) in a failover cluster.”  Specifically, we can use this cmdlet to find out who currently owns a particular cluster resource, such as SQL Server.

Here is the basic syntax:
Get-ClusterGroup [-InputObject <PSObject>] [[-Name] <StringCollection>] [-Cluster <String>] [<CommonParameters>]

The output of this command will tell me the resource name, the owner node, and the state of the resource (online or offline).

Move-ClusterGroup

Microsoft tells us that this cmdlet is used to “Move a clustered service or application (a resource group) from one node to another in a failover cluster.”  This is exactly what I need.  I can use this cmdlet to “move” the entire group to another node.

Here is the basic syntax:
Move-ClusterGroup [-InputObject <PSObject>] [[-Name] <String>] [[-Node] <String>] [-Cluster <String>] [-Wait <Int32>] [<CommonParameters>]

Given that I’ll have the cluster information from the Get-ClusterGroup cmdlet, I can call Move-ClusterGroup accordingly and tell it which node to move the group onto.

Development

Before getting started, it’s worth noting that if your local machine doesn’t have the failover clustering tools already installed, you’ll need to install them so that you can access the FailoverClusters PowerShell module.  You can find documentation on how to add a role to your machine here.  Your mileage may vary depending on which operating system you are using.

Now that my machine is set up correctly with the appropriate role, I started to test out the cmdlets that I had found.   First I had to determine how to acquire the cluster information.   For purposes of this post, I am going to reference a cluster named “Cluster1”; however, the example screenshots will be censored for security reasons.  I am also interested in the “SQL Server (MSSQLSERVER)” resource group, which is the group the SQL Server cluster resources belong to.  Given those two pieces of information, this is the resulting query:

Get-ClusterGroup -Cluster "Cluster1" -Name "SQL Server (MSSQLSERVER)" | fl *

[Screenshot: Get-ClusterGroup output]

The “|” means that we pipe the output of the Get-ClusterGroup command into the Format-List (fl) cmdlet.   The asterisk tells Format-List to display all of the properties of the Get-ClusterGroup output.

We can then set the output of this cmdlet to a variable like this:

$obj = get-clustergroup -cluster "Cluster1" -name "SQL Server (MSSQLSERVER)"

With the output of the cmdlet in a variable, we can now interrogate the variable for the name of the owner of the cluster group.    We use an IF statement within PowerShell to do the checking.  If the owner is not the node we want, we use the Move-ClusterGroup cmdlet to tell the cluster to move the group to another node.

Given that we only have 2-node clusters, this fails the group over to the other node.  If you had multiple nodes, you can specify which node to move to by including the “-Node [nodename]” switch.
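For example, here is a hedged one-liner using the hypothetical cluster and resource names from this post:

# Move the SQL Server group to a specific node (names are hypothetical)
Move-ClusterGroup -Cluster "Cluster1" -Name "SQL Server (MSSQLSERVER)" -Node "Server2"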

#************************************************************
# Purpose: Failover cluster to ensure inactive node is correct
# Author: MorehouseJ
# Created: 4/10/2015
#***********************************************************
import-module failoverclusters
 
$obj = get-clustergroup -cluster Cluster1 -name "Cluster Group"
 
#flip the group if the wrong node currently owns it
If ($obj.ownerNode.Name -ne "server1"){
    get-clustergroup -cluster Cluster1 | move-clustergroup
}

Peer Review

Once I had a basic script constructed, I thought that it would be best to get someone much more versed in PowerShell to look things over.  So I turned to my friend Mike Fal (B|T) and asked his advice.

He had some excellent pointers for me.  Namely, if I were to use parameters, the script could be easily reused.  Mike also pointed out that I was making two separate calls to the Get-ClusterGroup cmdlet, which is a waste of resources; I can call it once, set it to a variable (I was already doing this), and then reuse that variable where appropriate.  Mike also suggested putting in some logging so that some level of tracking was going on.

Based on Mike’s excellent suggestions, this is what the final script looks like:

#************************************************************
# Purpose: Failover cluster to ensure inactive node is correct
# Author: MorehouseJ
# Created: 4/10/2015
#************************************************************
param([string] $cluster
    ,[string] $node
    ,[string] $resource)

$obj = get-clustergroup -cluster $cluster -name $resource
 
#flip 
If ($obj.ownerNode.Name -ne $node){
    Write-Warning "Incorrect node for '$resource', failing over to $node."
    $obj | move-clustergroup -Node $node
    }
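Calling it is then just a matter of passing the right names. Here is a hedged example, assuming the script is saved as Ensure-ClusterNode.ps1 (a file name I made up for illustration):

# Ensure the SQL Server group is owned by Server1 before patching the other node
.\Ensure-ClusterNode.ps1 -cluster "Cluster1" -node "Server1" -resource "SQL Server (MSSQLSERVER)"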

Summary

Now we have a complete script that can be easily reused, will move the group between nodes as needed, and will help reduce the manual effort in the patching process.   This, to me, demonstrates the power of PowerShell (no pun intended).  I can’t wait to introduce more of these types of things to our team.


Detaching a Database – File Security Gotcha

I’ll admit it, there are times when detaching a database is the best way to get something accomplished, but it is my least favorite way of doing anything.  Using this method can leave you open to issues, such as security problems, if things go wrong.  I think it’s an important aspect that people miss when detaching a database.

When you detach a database, SQL Server essentially loses all knowledge of it.  If SQL Server is no longer aware of the database, what SQL Server based scripts or tools can you use against that database?  The answer…  None.  Zip.  Nada.  Zero.  In other words, you can’t.  All of those handy DMV or DBCC scripts will not work.

If you find yourself needing to detach a database, one of the things you need to be aware of is how SQL Server adjusts the security of the physical database files.  

While this might not seem like a large issue, trust me, it can be.  When the database is detached, the physical files carry permissions only for the individual who detached it.  Therefore, in theory, only that individual has rights to the files and can reattach the database.  SQL Server does this on purpose to ensure the security of the database files.

To show this, we will:

  1. Create a dummy database
  2. Look at the permissions on the physical files
  3. Detach the database
  4. Re-examine the permissions.

Let’s take a look!

CREATE DATABASE DetachDBDemo
ON PRIMARY
( NAME = N'DetachDBDemo', FILENAME = N'C:\Data\DetachDBDemo.mdf' , SIZE = 4096KB , FILEGROWTH = 1024KB )
LOG ON
( NAME = N'DetachDBDemo_log', FILENAME = N'C:\Data\DetachDBDemo_log.ldf' , SIZE = 1024KB , FILEGROWTH = 10%)
GO

Now that the database has been created, let’s take a look at those permissions.  Remember that we are talking about the file level permissions, so you’ll need to do this through Windows Explorer.

[Screenshot: file permissions on the MDF file]

In looking at the MDF file, we can easily see that there are a couple of accounts that have permissions to this file.

  • Owner Rights
  • Administrators
  • MSSQLSERVER

Now let’s detach the database and see what happens.  You can do this through the GUI or through code.   If you want to use the GUI, simply right-click the database, select Tasks, and then Detach Database.   In the resulting dialog window, check Drop Connections and click OK.

[Screenshots: the Detach Database menu item and dialog]

I prefer to do most things using T-SQL scripts, so here is the code to accomplish the same thing.

USE [master]
GO
ALTER DATABASE [DetachDBDemo] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
USE [master]
GO
EXEC master.dbo.sp_detach_db @dbname = N'DetachDBDemo'
GO

Now that the database is detached, what happened with the permissions?

[Screenshot: file permissions after the detach]

Now the only account that has access to the file is my local user account.  What happens if we attempt to reattach the database with another account? Let’s create another machine account, called DetachDBUserAD.  We will also make it a local administrator as well as add it to the sysadmin server role in SQL Server.

[Screenshot: the DetachDBUserAD account as a member of the local Administrators group]

Here you can see I’ve created the new AD account and I’ve placed it within the Administrators group.  This group should give it unlimited access to the server.

Let us try to attach the database using this new AD account and see what the results are.

I’ve logged into the server as the DetachDBUserAD Windows account. Remember that this account is an administrator for both Windows and SQL Server.

CREATE DATABASE DetachDBDemo
	ON PRIMARY 
		(FILENAME = 'C:\Data\DetachDBDemo.MDF'),
		(FILENAME = 'C:\Data\DetachDBDemo_log.ldf')
	FOR ATTACH
GO

[Screenshot: the attach attempt fails with an access denied error]

Even though the Windows account is both a local administrator as well as a SQL Server administrator, it still does not have the appropriate permissions to attach the files.

As you can see, this issue can be problematic and a nuisance.  There are three ways to get around it:

  1. Have the user who detached the database reattach the files.
  2. Have an administrator adjust the permissions on the files so that someone else can attach them (see the sketch after this list).
  3. Do not detach in the first place.
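For the second option, an administrator can re-grant access to the files from an elevated PowerShell (or command) prompt using icacls. A minimal sketch, assuming the file paths from this demo and a hypothetical domain account:

# Grant full control on the detached files (the domain\account name is hypothetical)
icacls "C:\Data\DetachDBDemo.mdf" /grant "CORP\DetachDBUserAD:F"
icacls "C:\Data\DetachDBDemo_log.ldf" /grant "CORP\DetachDBUserAD:F"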

It’s also worth noting that if you detach the database with a SQL login (not a machine account), the permissions on the physical files are not altered.  However, since using SQL logins is not a security best practice, I would not recommend doing this.

If you find yourself in a situation where you have to detach a database remember what happens with file permissions and please make sure that your backup/recovery strategy has been tested.

Enjoy!


Procedure Creation – Correct Syntax, Wrong Database

Recently, on a Friday evening, we released an update to one of our core applications.   As expected, it went smoothly and no issues were noted.  On the following Monday morning, however, the story took a different turn when end users started reporting issues.  One of the issues encountered was related to a new stored procedure.

In looking at the procedure, we found references to a database that existed within the QA environment but did not exist in Production.   It was assumed that when the procedure was created, it would throw an error because the referenced database didn’t exist.

Unfortunately, it does not work like this.

When the procedure is created, only the syntax of the code is parsed for validity.  As long as the syntax is correct, the procedure will be created without an issue.  This can cause problems because invalid references can slip by.

The real issue manifests when the procedure is actually called.  The syntax will be checked again by the parser and then sent to the algebrizer for binding.  Because the referenced object doesn’t physically exist, the algebrizer cannot bind to it and an error is thrown.

This is also true with altering a procedure that already exists.

We can prove this fairly easily just by creating and/or modifying a procedure with a reference to an object that does not exist.

-- We all have a Scratch Database right?
USE Scratch
GO
-- Create a procedure with invalid reference
CREATE PROCEDURE dbo.ProcCreateDemo
AS
BEGIN
   SET NOCOUNT ON;
   SELECT * FROM DBThatDoesntExist.sys.objects
END
GO
-- Does it exist?
SELECT name, object_id, type, type_desc, create_date from sys.procedures
SELECT OBJECT_DEFINITION(object_id) from sys.procedures

[Screenshot: the procedure exists and its definition still contains the invalid reference]

We can see that the procedure, even with the invalid reference, was created without issue.  If we look at the object definition of the procedure, we clearly see the invalid reference.

When the procedure is executed, the error will manifest.

[Screenshot: executing the procedure raises an error]

What if we alter an existing procedure?

In this example, I’ll adjust the procedure to take out the invalid reference.

[Screenshot: altering the procedure to remove the invalid reference succeeds]

Then I’ll alter it again to put the invalid reference back in.

[Screenshot: altering the procedure to restore the invalid reference also succeeds]

Note that the procedure was altered successfully both times.  Even though the second ALTER has an invalid reference, the syntax of the command is correct, thus the procedure is altered successfully.

We are currently in the process of making adjustments to our release process to ensure that issues like this are caught in the lower environments rather than in Production.
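One check that can be scripted into such a process is a sweep for cross-database references after a deployment. Here is a minimal sketch using sys.sql_expression_dependencies, which records a referenced database name for any three-part-name reference:

-- Find objects in the current database that reference another database by name
SELECT OBJECT_SCHEMA_NAME(sed.referencing_id) AS SchemaName
     , OBJECT_NAME(sed.referencing_id) AS ObjectName
     , sed.referenced_database_name AS ReferencedDatabase
     , sed.referenced_entity_name AS ReferencedEntity
FROM sys.sql_expression_dependencies sed
WHERE sed.referenced_database_name IS NOT NULL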

References

If you want to learn more about the query optimizer, I would suggest reading this post on how the optimizer works.

I would also suggest SQL Server guru Paul White, who has a deep-dive blog series on the query optimizer.  You can find the first post in his series here.

Enjoy!


Database Restores – Zeroing Out Log Files

I recently came across a question on the #sqlhelp Twitter feed that I thought was interesting.  The question pertains to whether or not SQL Server zeroes out the log file when doing a restore over an existing database.

We already know that even with instant file initialization, the log file is always zeroed out upon creation or file growth.  But what about a restore?  Given that the restore is a page-by-page copy of the data, shouldn’t it just overlay the existing file?

Let’s find out!

In order to test this, we will:

  1. Create a new database and a new table
  2. Insert some data into the new table
  3. Take a backup of the database
  4. Truncate the table
  5. Restore the database
  6. Look at the logs to see what is happening

Let’s get started!

-- Create our database 
CREATE DATABASE LogFileRestoreDemo
GO
USE LogFileRestoreDemo
GO
CREATE TABLE Table1 (id int identity(1,1), number INT)
GO
-- in order to increase the log file, take a backup first to set the log chain.  This will prevent the log file from rolling around.
BACKUP DATABASE LogFileRestoreDemo 
TO DISK = 'C:\Backups\LogFileRestoreDemo_FULL_20150318.bak'
WITH COMPRESSION, STATS = 5, INIT
GO
-- Insert 100K random numbers
INSERT INTO dbo.Table1 (number)
    SELECT ABS(CHECKSUM(NEWID()))
GO 100000
-- Let's check, we have data right?
SELECT TOP 100 ID from dbo.Table1

Now that we have inserted some data, the log file should be a decent size.   You can confirm the file size by looking at the physical LDF file within Windows, or you can use this script:

-- Do we have some size to the log file?
SELECT size, size*8 as 'Size (KB)', name, physical_name, type_desc FROM sys.master_files -- size is in 8KB pages
WHERE database_ID = db_ID('LogFileRestoreDemo')
GO

[Screenshot: the log file size for LogFileRestoreDemo]

Now that we know the log file has some size to it, let’s take another backup and then truncate the table.

BACKUP DATABASE LogFileRestoreDemo 
TO DISK = 'C:\Backups\LogFileRestoreDemo_FULL.bak'
WITH COMPRESSION, STATS = 5, INIT
GO
TRUNCATE TABLE dbo.Table1
GO

Turning on two specific trace flags will help us see some of the magic that takes place during the restore.  Keep in mind that truncating the table is just a way to confirm that the restore worked as expected.  

Trace Flag 3004: this flag allows you to see when SQL Server performs zeroing operations.  This works for both data files and log files.

Trace Flag 3605: this flag will output the results of certain DBCC commands to the SQL Server error log.  It’s very similar to trace flag 3604, which outputs the results of certain DBCC commands to the results window.

Note that both of these trace flags are undocumented so you won’t find specific information about them directly from Microsoft.  However, there are several blogs on the internet that discuss them.

USE [tempdb]
GO
DBCC TRACEON(3004,3605)
GO
RESTORE DATABASE LogFileRestoreDemo
FROM DISK = 'C:\Backups\LogFileRestoreDemo_FULL.bak'
WITH RECOVERY, REPLACE, STATS= 5
GO
DBCC TRACEOFF(3004,3605)
GO

With those trace flags on, we can look in the SQL Server error log and see what the restore is doing.

[Screenshot: SQL Server error log entries showing the log file being zeroed during the restore]

We can clearly see that SQL Server zeroed out the pages required for the log file. This shows that even if the database is already present on the server, the log file gets zeroed out when doing a restore.

As we have shown here, if you have to restore a database whose log file has grown to a significant size, SQL Server will have to zero out all of those pages, which could take a significant amount of time.  If you find yourself in this situation, please make sure that you plan accordingly.
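If you want to estimate that cost before you kick off the restore, the backup itself will tell you how large the log file will be. Here is a minimal sketch against the backup taken above; the Size column is reported in bytes:

-- List the files contained in the backup, including their sizes
RESTORE FILELISTONLY
FROM DISK = 'C:\Backups\LogFileRestoreDemo_FULL.bak'
GO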

As a final clean up, make sure that the trace flags are turned off.  If they are left on, you could end up writing a lot of restore information into the SQL Server error log.  Let’s also drop the database to clean everything up.

-- This will show you all of the trace flags currently enabled
DBCC TRACESTATUS(-1)
GO
USE tempdb
go
-- Drop the database
DROP DATABASE LogFileRestoreDemo
GO

Enjoy!

Quick Script: Finding Columns

[Image: Locked Lock, by Tom Magliery (http://www.flickr.com/photos/mag3737/1420554128/), with the caption "Is this secured enough?"]

For any data professional, data security should be an utmost priority.  Having unsecured data lying about can cause major issues if it is discovered.  I’m speaking of software security, more specifically data encryption or data masking.

Finding instances of unsecured data in an environment can be a pain.  Last year, I had to look for unencrypted data within a non-production environment.  In this case, I was looking for Social Security Numbers, also known as Tax Identification Numbers.

Red Gate offers a tool, SQL Search, which is completely free and completely awesome. I’ve used this tool on many occasions.   The problem I had was that I wanted to look across all servers within an environment utilizing a Central Management Server.  I haven’t been able to find a way to accomplish that with SQL Search, so I set out to create my own script.

There are some things to keep in mind about this script:

  1. It makes assumptions that the column is named in a certain pattern.
  2. It uses dynamic SQL.
  3. It uses a WHILE loop.

However, even with these nuances, the script works pretty well to help find potentially sensitive data that might not be encrypted, across multiple servers.   Of course, I’m also assuming that encryption is intended to be handled by the database and not some other process.

DISCLAIMER:  While I’ve been able to run it across multiple servers, over several hundred databases without an issue, your mileage will vary.  Please test accordingly.   NEVER run code that you find on the internet against your Production systems without testing first.  You have been warned!

/***************************************************************
-- Author: John Morehouse
-- Date: March 2015
-- T: @SQLRUS
-- E: john@jmorehouse.com
-- B: http://sqlrus.com

--THIS CODE AND INFORMATION ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE.

--IN OTHER WORDS: USE AT YOUR OWN RISK.

--AUTHOR ASSUMES ZERO LIABILITY OR RESPONSIBILITY.

--You may alter this code for your own purposes.
--You may republish altered code as long as you give due credit.
***************************************************************/
DECLARE @dbs AS TABLE ( name sysname, processed BIT)
DECLARE @x INT = 1
DECLARE @sql NVARCHAR(2000)
DECLARE @dbName sysname -- sysname avoids truncating long database names

-- check for the table and drop it if it exists
IF OBJECT_ID('tempdb..#results') IS NOT NULL
BEGIN
    DROP TABLE #results
END

-- Create our results table
CREATE TABLE #results ( dbName sysname
                        , tableName VARCHAR(100)
                        , schemaName VARCHAR(100)
                        , columnName VARCHAR(100)
                        , DataType VARCHAR(100)
                        , MaxDataLength INT
                        , theRowCount INT )

-- Fetch a list of databases
INSERT INTO @dbs ( name, processed )
    SELECT name, 0 FROM sys.databases
    WHERE database_id > 4
        AND [state] = 0 --online

-- Loop through the list of databases; use dynamic SQL to look for specific columns
WHILE @x <= (SELECT COUNT(1) FROM @dbs WHERE processed = 0)
BEGIN
    SELECT TOP 1 @dbName = name FROM @dbs WHERE processed = 0

    SET @sql =
        'INSERT #results (dbName, tableName, schemaName, columnName, DataType, MaxDataLength, theRowCount)
            SELECT ''' + @dbName + ''', t.name, s.name, c.name, st.name, c.max_length, p.rows
            FROM [' + @dbName + '].sys.columns c
                INNER JOIN [' + @dbName + '].[sys].[tables] t ON c.object_id = t.object_id
                INNER JOIN [' + @dbName + '].[sys].[schemas] s ON t.schema_id = s.schema_id
                INNER JOIN [' + @dbName + '].[sys].[systypes] st ON c.user_type_ID = st.xusertype
                INNER JOIN [' + @dbName + '].[sys].[partitions] p ON t.object_id = p.object_ID
                INNER JOIN [' + @dbName + '].[sys].[allocation_units] au ON au.container_ID =
                    CASE
                        WHEN au.TYPE IN (1,3) THEN p.hobt_id
                        WHEN au.type IN (2) THEN p.partition_id
                    END
            WHERE (c.name LIKE ''TIN%''
                OR c.name LIKE ''TIN_TX%'') -- looking for a specific column name pattern
                AND c.OBJECT_ID > 100 -- exclude system objects
                AND p.index_id IN (0,1) -- Heap or Clustered Index
                AND au.type = 1 --IN_ROW_DATA only'

    -- used for debugging
    --PRINT @sql

    BEGIN TRY
        EXEC sp_executesql @sql
    END TRY
    BEGIN CATCH
        SELECT ERROR_LINE() AS 'Error Line'
            , ERROR_MESSAGE() AS 'Error Message'
            , ERROR_NUMBER() AS 'Error Number'
            , @dbName AS 'Database'
    END CATCH

    UPDATE @dbs
    SET processed = 1
    WHERE name = @dbName
END

-- show us the goodness
SELECT dbName, tableName, schemaName, columnName, DataType, MaxDataLength, theRowCount FROM #results
GO

The script will return a number of things:

  • Database Name
  • Table Name
  • Schema Name
  • Column Name
  • Column Data Type
  • Size of the Column
  • Number of rows

Using this script in conjunction with a Central Management Server, I’ve been able to find out where potentially sensitive data might be residing.  I then used this list of columns to investigate whether they actually contained sensitive data.  If they did, I would either mask or encrypt the data, depending on our company policies.
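The investigation step can also be partially scripted. Here is a hedged sketch that samples a candidate column for values shaped like Social Security Numbers; the table and column names are hypothetical:

-- Sample a candidate column for SSN-shaped values (XXX-XX-XXXX or nine straight digits)
SELECT TOP 100 c.TIN_TX
FROM dbo.Customers c
WHERE c.TIN_TX LIKE '[0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9][0-9][0-9]'
   OR c.TIN_TX LIKE '[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'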

I haven’t run into any issues running the script in Production; however, I would again advise caution.  Your mileage will vary.  Test accordingly!

Enjoy!


SQL Saturday 381 – Richmond, VA

I love presenting and I love visiting new places, so on March 21st, 2015 I’ll be speaking at SQL Saturday #381 in Richmond, Virginia.  This will be my first time presenting at their SQL Saturday and I’m really looking forward to the event.  I’ll get to hang out with some awesome #sqlfamily members, and hopefully I’ll get to see some of the surroundings as well.

One really cool thing about this event is that they are offering you the chance to get a Microsoft Technology Associate (MTA) certification for FREE!  On Friday March 20th, 2015 they will be hosting a half-day lab & testing session.  If you are interested in obtaining a certification, this would be a fantastic way to get one at no cost to you.  Make sure to check out the registration page for further details.

If you have nothing better to do on Saturday, March 21, 2015, come on out and get your learn on.  The schedule for this event is quite impressive and there will be a TON of things to learn.  Come, join us and learn something.

Oh yeah, I’m speaking!

Social Networking Is Alive!

Social media is everywhere these days, and if you’re not on the train, you could be missing out.  This session is about how you can use social media to improve not only your professional life, but your personal one as well.  It isn’t geared specifically to geeks (it’s totally non-technical), but to anybody who wishes to improve their online presence through the use of social networking.  It’s a fun session and I hope the attendees enjoy it!!

HINT: I will show you how to get free SQL Server help from one of the popular social networks.  How would you like to tell your management that you solved a real world enterprise level issue with the help of social media?  For free?

Yes, that has actually happened.

You might even see reference to a popular YouTube video.

Free Training

Can you really pass up free training?  Ok, even with lunch it’s only $10!  So if you’re still on the fence about attending, get off the fence, register, come on out and learn something.  Meet some new people.  Say hello to the speakers (we don’t bite).  Grab some swag and a fantastic lunch.   Register now!!

Get your learn on!  Expand your horizons and break out of the ordinary.  You know you want to.

Why are you still here?  Go.  Register.  Do it now! See you in Richmond!!

Seriously.  GO!


Quick Script: Dropping Indexes

[Image: a pint of Guinness, with the caption "If Only My Doctor Ordered This.."]

Sometimes a quick script is just what the doctor ordered.

I recently had the task of packaging up a database to ship offsite.  This database had some size to it, approximately 200GB across 6 tables.  While looking at the tables, I noticed that though they were relatively narrow, each had ~22 million rows in it. In my environment this wasn’t wild or out of the ordinary; however, I did think that the disk space usage was a little out of whack given what I was seeing.  So I set out to see if I could make the database smaller so that it would be easier to transport.

I discovered that each table had anywhere from 7 to 15 non-clustered indexes on it.  As we know, a non-clustered index is a partial copy of the data, so it was extra bloat that we didn’t need.  I knew that the consumers of the data would pull in every table in its entirety, so I wasn’t overly concerned about them getting index scans rather than index seeks on the data.

Each table did have a clustered index, which I knew I wanted to leave in place.  It was only the non-clustered indexes causing extra weight that I didn’t need, so I wrote a quick T-SQL script to help drop that weight.  While this script isn’t overly fancy or complex, it turned out to be handy and gave me the ability to manually control dropping all of the non-clustered indexes in the database.

DISCLAIMER: Do not run code you find on the internet in your production environment without testing it very thoroughly.  Period.  

/***************************************************************
-- Author: John Morehouse
-- Date: February 2015
-- T: @SQLRUS
-- E: john@jmorehouse.com
-- B: http://sqlrus.com

--THIS CODE AND INFORMATION ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE.

--IN OTHER WORDS: USE AT YOUR OWN RISK.

--AUTHOR ASSUMES ZERO LIABILITY OR RESPONSIBILITY. 

--You may alter this code for your own purposes. 
--You may republish altered code as long as you give due credit.

--Updated:
--20150316: MorehouseJ: Added in join to sys.tables based off feedback from JohnW to account for FILESTREAM tables.  Also removed ORDER BY for the KeyConstraintID column.  
***************************************************************/
USE [AdventureWorks];
GO
SELECT CASE 
    WHEN kc.object_id IS NOT NULL THEN 'ALTER TABLE ' + QUOTENAME(ss.name) + '.' + QUOTENAME(so.name) + ' DROP CONSTRAINT ' + si.name + ';'
    ELSE 'DROP INDEX ' + si.name + ' ON ' + QUOTENAME(ss.name) + '.' + QUOTENAME(so.name) + ';'
    END as 'Drop Command'
    , si.object_id as 'Object ID'
    , si.name as 'Index Name'
    , si.index_id as 'Index ID'
    , si.type_desc as 'Index Type'
    , object_name(so.object_id) as 'Table Name'
    , si.is_unique
    , si.is_primary_key
 FROM sys.indexes si
    INNER JOIN sys.objects so ON si.object_id = so.object_ID
    INNER JOIN sys.schemas ss ON so.schema_ID = ss.schema_id
    LEFT JOIN sys.key_constraints kc on si.name = kc.name
    INNER JOIN sys.tables st ON si.object_id = st.object_ID
 WHERE si.object_id > 100             -- system objects
    AND si.type >= 2                  -- anything not a clustered index
    AND so.is_ms_shipped = 0         -- ignore MS specific objects
    AND NOT EXISTS (SELECT name FROM sys.xml_indexes WHERE xml_index_type > 0 AND name = si.name) 
    AND st.filestream_data_space_id IS NULL
 ORDER BY [Index Type]
GO

It is worth noting that if an index was created by a PRIMARY KEY or UNIQUE constraint, it cannot be dropped using a simple DROP INDEX statement; you have to use the ‘ALTER TABLE … DROP CONSTRAINT’ method instead.  The script I have provided will handle that for you, as well as XML and spatial indexes.
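The two forms the script generates look like this (the object names here are hypothetical):

-- Index backing a PRIMARY KEY or UNIQUE constraint: drop the constraint
ALTER TABLE [dbo].[MyTable] DROP CONSTRAINT PK_MyTable;
-- Plain non-clustered index: a simple DROP INDEX works
DROP INDEX IX_MyTable_Column1 ON [dbo].[MyTable];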

For a given database, it generates the ‘DROP’ statements for all of the non-clustered indexes.   It’s that simple.  It also includes some additional metadata about each index, such as the schema.  Once the query is complete, all you have to do is copy out the command in the first column, labeled ‘Drop Command’, paste it into another SSMS window, and execute it as you wish.  You can execute the entire thing or run it line by line.

Of course, I’d be remiss if I didn’t say it again: please DO NOT execute the resulting commands on a production database.  They could have a severe impact on your production systems.

Yes, I probably could have written a PoSH script to do this work.  Ok, I could have asked my good friend Mike Fal (B|T) to write a PoSH script for me; he’s a wizard at it.   Yes, I could also have written a cursor/loop to do the work for me, but I didn’t want to.  I like quick and dirty scripts such as this one; they allow me to do what I need to do with a relatively small amount of work. This script helped get the database size down significantly (of course I had to shrink afterward *GASP*), and the database was much easier to transport once that weight was dropped.  The consumers of the data were also very happy, as this helped them save space on their servers.

Enjoy!

Public Speaking – The Small Things

[Image: a warning sign, with the caption "Rough Road Ahead!"]

Public Speaking is hard and is sometimes a rough road to go down.

There isn’t any other way to put it.  It takes work and won’t always go smoothly.  Not only do you have to generate a topic that you think will be interesting to attendees, but you also have to put a presentation together, practice it, fine tune it and then finally deliver it.

Keeping in mind that what can go wrong will go wrong helps. As you stand to deliver your presentation, you may realize that you’ve forgotten something. Perhaps you suddenly have the worst case of cotton mouth possible.  Or quite possibly, you will hear your stomach rumbling and maybe, just maybe, that burrito you ate for lunch will remind you that it wasn’t that good of an idea. Things like these are bound to happen.

Take it from someone who speaks: we’ve all been there.  From my experience over the past couple of years, I’ve put together a list of things that I carry with me to help avoid those rough roads.  Yes, things out of your control will happen.  The demo gods will raise their fists and smash your demos to pieces.  The bulb in the projector will reach the end of its life just as you start your presentation.  Sometimes you just can’t help those situations, even with the best of preparations.  So know this, especially if you are just starting out as a speaker:

1.  Things will happen that you cannot control.  

2.  You cannot fix rule #1.

However, it isn’t the end of the world.  Don’t fret over the bumps during your presentation. Learn from the experience, adjust accordingly and keep on speaking.  

Good?  Good!!

Now let’s have the list!

Technology

  1. Presentation Clicker.  Get a good presentation clicker. You can spend a lot of money on one, but I recommend a mid-tier model since it will most likely do what you need without costing a bundle.  Some clickers are fancier and have built-in ways to keep your presentations on time, but those options are a matter of preference and I don’t see a need for them. Having a clicker gives you the freedom to walk around during your presentation instead of being tied to your laptop.

  2. Spare batteries.  I always have a set of spare batteries for my gadgets, usually a 4-pack of AA and a 4-pack of AAA.  I have gadgets that use both, so keeping both on hand is important to me.  Take a look at your arsenal of tech and see what you may or may not need batteries for.
  3. Backups.  Get a USB thumb drive and back up your slide deck, demos and any virtual machines that you might need.  I also back up my slides & demos to Dropbox as a secondary backup. Any type of cloud provider would work for this.  In the event your laptop goes south for the winter (it will happen at some point), a good backup lets you work around it.
  4. Secondary Presentation Device.  If you have an Apple iPad, a Microsoft Surface or a secondary laptop, configure it to be, at the very least, a presentation device.  Personally, I have an iPad and can use it to present the slide deck if need be. I am told SQL Server can be installed on a Surface for demos, but I’ve never done it. When you have laptop issues you can save the day by presenting from the secondary device, so make sure you have one available.
  5. VGA/Display adapters.  Carry your own.  Plain and simple.  Don’t assume that every venue will have the proper adapters for your model of laptop/device to connect to the projector.  This also goes for your backup device.  I’ve found that having a VGA adapter is a safe bet, as VGA is still fairly standard.  You can also get HDMI adapters; however, I’ve yet to run into a projector that won’t take a VGA connection. I carry a VGA adapter for both my laptop and my iPad so that both of my devices will be usable.
  6. Travel mouse.  I have a smaller travel mouse that I use for presentations. Having a mouse with me ensures I don’t have to use the track pad if I don’t want to.  I find the mouse allows for a smoother experience while doing demos, so I prefer it over the track pad.  This is my personal preference, so adjust as needed.
  7. Watch.  If you don’t regularly wear a watch, I suggest you get some type of timekeeping device that you can place on the desk and see from a distance. Usually, I try to visit the room where I’ll be speaking prior to my session so that I can get a feel for the lay of the land.  This is also a good time to check whether there is a clock in the room and make note of its location. During your preparation, make sure that you practice with the watch so that you know the timings of your presentation.
  8. Notes/Feedback.   Determine a good way (for you) to take notes during your presentation when needed.  As people ask questions, you’ll find things that you’ll need to research and will want to jot down so you can get back to them.  Having a quick way to write down a note or question to remind yourself is handy.  Some presenters use the notes section of the slide deck when in Presenter mode, while others use a simple pad of paper & pen.

Personal Items

  1. Water.   Make sure you have bottled water with you.  Large amounts of talking dry my mouth out, and having cotton mouth while presenting is not fun.  I usually carry at least 1 if not 2 bottles of water with me when I present.  Keep in mind, however, that I monitor my liquid consumption leading up to my session so that I don’t have a sudden need to use the facilities in the middle of it.
  2. Hard candy/cough drops.  Some speakers would frown on this, but if your throat starts to dry out, hard candy or cough drops can help.  Sometimes water just isn’t enough. I have found that when traveling to locations where the elevation differs greatly from my normal surroundings, my throat tends to be more sensitive to those conditions.
  3. Hand sanitizer.  Pretty simple, keep the germs away.  Most attendees are polite and some will want to shake your hand and tell you “Thank You”.  This just helps to keep colds at bay.
  4. Pain Reliever.  Aspirin/Motrin/Advil/ibuprofen/etc., whatever your choice is.  Keep a small bottle with you so that you can take some if you feel a headache or something else coming on.
  5. Pepto Bismol/Tums.  When I travel to another location, I love to try the local cuisine, and sometimes that’s a bad decision for me.  Food can sneak up on you, and if you don’t have anything to help counteract that, you could be in trouble.   Personally, I recommend the chewable version of Pepto Bismol as it’s easy to carry through airports.

Storage

Now that you have the list, how do you keep it organized? I’ve tried a couple of different ways to organize things within my backpack; however, I’ve found that using two separate cases works best for me.  Of course, your mileage may vary and you might have to try a couple of variations until you find one that works for you.

[Photo: the two storage cases side by side]

The case on the left is used for most of my tech.  It holds my VGA cables, charging cables, batteries of various sorts, power adapters and my MiFi, along with various odds & ends.

I bought the case from Amazon for less than $20.  So far it has served me well and fits nicely in my back pack.

The case on the right is what I use to keep those personal items organized.  Before traveling, I make a stop at the local Target and visit their travel section.  You can get all of the personal items listed above (except the water) in travel sizes, which allows them to fit nicely within the case.

Once I have my supplies, I will usually place the case somewhere in my backpack that is easily accessible.  There are times when traveling that some of these items come in useful, and I want to be able to get to them easily.

Will this solve everything?

Nope. It might not even come close.  Then again, something on this list just might give you a detour around that rough road, and you’ll give the best presentation ever because you planned ahead.  When you set out to do a public presentation, don’t forget to think about the small things.  Hopefully my list will help you get started!

Do you already have a list and have something that I don’t have on mine?  I’d like to know!