I’m Speaking! Omaha SQL Server/BI User Group

Well, it has finally happened. In the 7+ years that I helped to run the Omaha SQL/BI User Group, I never once gave a presentation of my own.  Of course I spoke, but it was usually about housekeeping items for the group or helping to facilitate a “Bring Us Your Problems” session.  This is about to change.

As luck would have it, I’m headed back to Omaha for a vacation.  By sheer coincidence, my trip coincides with the July 2015 meeting of the Omaha SQL/BI User group.  I could not have planned it better.

I am excited to again speak at the meeting, only this time, I’m giving my own presentation.

SQL Server Databaseology: A Deep Dive Into Database Internals

Have you ever taken apart a toaster or an alarm clock just to see how it worked? Ever wondered how that database actually functions at the record level, behind the scenes? SQL Server Databaseology is the study of SQL Server databases and their structures down to the very core of the records themselves. In this session, we will explore some of the deep inner workings of a SQL Server database at the record and page level.  You will walk away with a better understanding of how SQL Server stores data, knowledge that will allow you to build better and faster databases.

Logistics

The meeting will be held on Wednesday, July 8th, 2015 @ 6PM at Farm Credit Services of America, who will also be our sponsor for the evening.

Please make sure to RSVP so that there is an accurate head count for food.

See you there!

 


T-SQL Tuesday #67 – Extended Events

This month’s T-SQL Tuesday topic is about extended events and comes from Jes Borland (B|T).

Like most environments, we utilize a mixture of Windows logins and SQL logins.  Over the years, the number of SQL logins we use has grown.  Knowing that it is a better practice to utilize Windows accounts, we now want to reduce the number of SQL logins.

So how do we do that? You could just disable the accounts and see what squawks, right?  The other alternative, which is what we chose, is to enable several types of monitoring, which will allow us to see which accounts are being actively used.  Once we have a defined list, we can weed out the inactive SQL logins as well as move the active ones to Windows accounts.

There are a couple of ways to do this that I’m aware of.

  1. Profiler
  2. Traces
  3. Login Triggers
  4. Extended Events
  5. Audits
  6. Others?

Each of these methods has its pros and cons, so I would advise that you consider each one carefully.

In full disclosure, I do not have much experience with extended events, so I wasn’t even certain that they would work for what I wanted.  Thankfully, it turns out they did!

Brief History on Extended Events

In my opinion, extended events used to be a bear to utilize.  In SQL Server 2008 R2 and older, there wasn’t an easy way to configure and use them natively; you had to work around this with things such as add-in tools.

(Screenshot: the Extended Events node under Management in SSMS, SQL Server 2014)

With the release of SQL Server 2012, there is a GUI interface that allows you to create and configure an extended events session.

As shown above, just expand the Management tree within SSMS, then Extended Events, and then Sessions.

By default, you will notice you have a system_health session already running.  This is very much like the black box default trace that started to show up in SQL Server 2005.

I will let you experiment on creating your own extended events session.  There are a number of resources on the internet that will help you accomplish this.

TL;DR

Instead of creating a very long blog post about how to create an extended events session, I have provided the script that I generated.

DISCLAIMER:  Please test accordingly. NEVER run code that you find on the internet against your Production systems without testing first. You have been warned!

IF EXISTS(SELECT 1 FROM sys.server_event_sessions WHERE name = 'MonitorSQLLogins')
    BEGIN
	   DROP EVENT SESSION [MonitorSQLLogins] ON SERVER
    END
GO
CREATE EVENT SESSION [MonitorSQLLogins] ON SERVER 
ADD EVENT sqlserver.connectivity_ring_buffer_recorded(
    ACTION(sqlserver.client_app_name,sqlserver.client_connection_id,sqlserver.client_hostname,sqlserver.context_info,sqlserver.server_principal_name,sqlserver.session_id)
    WHERE ([sqlserver].[session_nt_user]=N'NULL')),
ADD EVENT sqlserver.login(
    ACTION(sqlserver.session_id,sqlserver.session_nt_username,sqlserver.session_server_principal_name)
    WHERE ([sqlserver].[session_nt_user]=N'')) 
ADD TARGET package0.event_file(SET filename=N'MonitorSQLLogins'),
ADD TARGET package0.histogram(SET filtering_event_name=N'sqlserver.login',source=N'sqlserver.session_server_principal_name')
WITH (MAX_MEMORY=4096 KB
,EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,MAX_DISPATCH_LATENCY=30 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=NONE,TRACK_CAUSALITY=ON,STARTUP_STATE=OFF)
GO
ALTER EVENT SESSION MonitorSQLLogins
ON SERVER STATE = START
GO

There are a couple of things to note.

First, I did not want to capture all of the logins, only the SQL logins.  In order to do that, I put a filter (WHERE clause) on each event, limiting it to a session_nt_user of NULL or blank.  This effectively limits the list to SQL logins, because a Windows login would have a value in this field.

Second, I am using a histogram target to keep track of the logins.  The histogram will aggregate the number of times a login occurred.  You could also use the ring_buffer or an event_file target.  Given that I do not care about when the login occurred, just that it did, a histogram seemed more appropriate.
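
If you go the event_file route instead, the .xel files can be read back with sys.fn_xe_file_target_read_file.  This is just a sketch; adjust the file pattern to wherever your session actually writes its files (the instance’s LOG directory by default):

SELECT CAST(event_data AS XML) AS event_data_xml
FROM sys.fn_xe_file_target_read_file('MonitorSQLLogins*.xel', NULL, NULL, NULL)
GO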

Testing

In order to test, I created a SQL login named ‘LoginTest1’.   Once the account was created, I simply started a new query window and authenticated using those credentials.
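
For reference, creating the test login is nothing fancy; something along these lines is all it takes (the password here is only a placeholder, and the login should be dropped once testing is done):

CREATE LOGIN [LoginTest1] WITH PASSWORD = N'SomeStrongPassword1!'
GO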

The Results

Once the extended event session is capturing data, the next hurdle is how to query it.  You can watch the live data stream or you can use this query:

SELECT name, target_name, CAST(st.target_data AS xml)
FROM sys.dm_xe_session_targets AS st
JOIN sys.dm_xe_sessions AS s
ON (s.address = st.event_session_address)
WHERE s.name = 'MonitorSQLLogins'

This query will return the target_data from the session as XML.

By clicking on the hyperlink that is returned, you can see a better representation of what accounts are logging into your server.

In this case, the LoginTest1 user account has logged in twice.
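
If you would rather not click through the XML at all, the histogram can be shredded into rows with a little XQuery.  This is only a sketch (the column aliases are my own), but it should return each SQL login along with the number of times it has connected:

;WITH hist AS
(
    SELECT CAST(st.target_data AS XML) AS target_data
    FROM sys.dm_xe_session_targets AS st
    JOIN sys.dm_xe_sessions AS s
        ON s.address = st.event_session_address
    WHERE s.name = 'MonitorSQLLogins'
        AND st.target_name = 'histogram'
)
SELECT slot.value('(value)[1]', 'NVARCHAR(128)') AS login_name,
       slot.value('@count', 'BIGINT') AS login_count
FROM hist
CROSS APPLY hist.target_data.nodes('//HistogramTarget/Slot') AS x(slot)
GO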

One thing to note is the number of “buckets” reported by the histogram.  This represents the number of groups that the histogram can track.  In my case, since I am grouping by login, once there are more than 256 distinct logins (the default), the histogram has no room for new groups and any login without an existing bucket would not be captured.  Given that I do not have more than 256 SQL logins on my server, this was not an issue for me.

You can adjust this value either by using the GUI when the session is created or by adjusting this line in the above script:

ADD TARGET package0.histogram(SET filtering_event_name=N'sqlserver.login',slots=(512),source=N'sqlserver.session_server_principal_name')
The slots value above is what I would use if I wanted 512 buckets instead of the default 256.

Summary

Now that I’ve got a method to track SQL Logins, I can periodically check the histogram to see what accounts are logging into the instance.

Watch for a future blog post on how I’ll develop a process to extract this data into a more usable format!

If you have any interest in hosting one of these events, you can contact the founder, Adam Machanic.  He is always on the lookout for someone to host.  The only requirement is that you have participated in at least 2 T-SQL Tuesday events and you have an idea for a topic.

 


Why NULL Placement Matters


Benjamin Franklin once said, “A place for everything and everything in its place“.  While I assume that he was not thinking about NULL values in a SQL Server table, the premise still holds.  Some data professionals believe that NULL values do not belong in any table design.  However, the reality is that they exist, and I do not see that changing anytime soon.

Several years ago, I learned that when a table is designed, putting the columns that will hold mostly NULL values at the end of the table can help with performance.   To be honest, at the time I took that lesson at face value; I never really understood why it would make a difference. Recently, I ran across my notes from that lesson and decided to investigate further.

Let’s examine a situation where a table has a variety of NULL placements.  We will create a table, NullPlacementDemo, that has five columns in total: an identity column along with four variable-length character columns.

USE Scratch
GO
IF OBJECT_ID('dbo.NullPlacementDemo') IS NOT NULL
	BEGIN
		DROP TABLE NullPlacementDemo
	END
GO
CREATE TABLE NullPlacementDemo (id int identity(1,1), col1 varchar(10), col2 varchar(10), col3 varchar(10),col4 varchar(10))
GO

Now that the table structure is created, let’s put some data in it.  I will insert three records.

  1. This record will have values in columns 1 & 2, with 3 & 4 NULL
  2. This record will have values in columns 1 & 3, with 2 & 4 NULL
  3. This record will have values in columns 1 & 4, with 2 & 3 NULL
INSERT INTO NullPlacementDemo (col1, col2)
	SELECT REPLICATE('a',10), REPLICATE('b',10)
GO
INSERT INTO NullPlacementDemo (col1, col3)
	SELECT REPLICATE('a',10), REPLICATE('b',10)
GO
INSERT INTO NullPlacementDemo (col1, col4)
	SELECT REPLICATE('a',10), REPLICATE('b',10)
GO

Note that in order to make the calculations easier, I am filling the variable-length columns to their maximum capacity of ten characters.  Also note that the amount of data inserted is the same for all three records.

Anatomy of a Record

In order to understand why NULL placement matters, we need to investigate the anatomy of the record itself.  I highly recommend the post by Paul Randal, “Inside the Storage Engine: Anatomy of a Record”.  This post does an excellent job of explaining all of the crucial elements that comprise each record.

Taking the information from Paul’s blog, we can deduce that the size of each record should be 37 bytes.  Remember that we inserted the exact same amount of data for all three records.

Regarding the calculations, the tag bytes, NULL bitmap location, number of columns in the NULL bitmap, and variable column count are fixed sizes.  The others are determined by how the table is configured (the full tally appears after the list below):

  • Fixed Data Length – the total size in bytes of the fixed-length data.  We have an integer data type for the identity column, so four bytes
  • Bit for every column – with five columns, 5/8 = 0.625 bytes, but you cannot have a partial byte, so we round up to one byte
  • Variable Column Offset – two bytes for each variable-length column that has a value.  We only inserted two values, therefore four bytes
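
Putting all of those pieces together for the first record, my back-of-the-envelope tally (based on Paul’s layout, not on anything SQL Server reports directly) looks like this:

2 bytes (tag bytes) + 2 bytes (NULL bitmap location) + 4 bytes (fixed data length) + 2 bytes (number of columns in the NULL bitmap) + 1 byte (the NULL bitmap itself) + 2 bytes (variable column count) + 4 bytes (variable column offset array) + 20 bytes (two 10-character values) = 37 bytes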

In order to confirm this, we can use two of my favorite undocumented tools, DBCC IND and DBCC PAGE to look at the internal structure of the records.

Remember that we need to turn on trace flag 3604 so that the output of the DBCC PAGE command is sent to the SSMS window.

DBCC IND(Scratch,NullPlacementDemo,1)
GO
DBCC TRACEON(3604)
DBCC PAGE('Scratch',1,300,1)
DBCC TRACEOFF(3604)
GO

The results of DBCC IND tell me that I need to look at page ID 300 to see the contents of the page.  The page ID will potentially be different on your system.

Once we determine which page to look at, we’ll use the DBCC PAGE command to examine the page itself.  For simplicity, we are going to use dump style one, which will display the size of each record.
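
As a side note, if the raw dump feels hard to read, dump style three will break each record out column by column.  A quick sketch using the same page:

DBCC TRACEON(3604)
DBCC PAGE('Scratch',1,300,3)
DBCC TRACEOFF(3604)
GO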


We confirmed that the first record is 37 bytes, which matches the calculation above.

Wait.  Hold the phone!

The other record sizes are different! How can that be?  We inserted the EXACT same data each time!  You will recall that in the first record the two values were adjacent to each other.  The second record had a NULL value between them, and the third record had two NULL values between them.   The only thing different between all three records was the NULL placement.

The Truth Shall Set You Free

The truth lies in how SQL Server processes the values for each column along with the values in the NULL bitmap.  Paul’s article from above teaches us, among other things:

  • The NULL bitmap keeps track of whether or not each column has a NULL value, even if the column is non-nullable
  • The variable column offset array stores the end of each column value

Let’s look closer at the NULL bitmap, specifically for the second record.  Remember that it’s zero based, thus the second row is in slot one. Using the output of DBCC PAGE, we can see the NULL bitmap byte, which is displayed in hexadecimal.

If we convert the hexadecimal value 0x14 (decimal 20) to binary, we get 10100.   A zero indicates that the column is not null.  Reading right to left, this value tells SQL Server that:

  • the first two columns are not null (zero) – 10100
  • the third column is null – 10100
  • the fourth column is not null – 10100
  • the fifth column is null – 10100

Even with this information, SQL Server still has to keep track of that third column, which is NULL, in the variable column offset array.  Recall that the variable column offset array stores the end location of each column value.   This allows SQL Server to know where the start of the next column is, as well as providing an easy way to determine how big the value is.

Even though the value in the third column is NULL, SQL Server has to allocate two bytes in order to track where that column ends.  If it did not allocate these two bytes, it wouldn’t know where the fourth column started, which, according to the NULL bitmap, is not null.

We can further see this by again looking at the DBCC PAGE output.

The variable column count in the record (three, in this case) tells us the number of variable-length columns that are present.  This means that there are three two-byte entries for these columns.  They are:

  • Col1 – 0x001d (not null)
  • Col2 – 0x001d (Null)
  • Col3 – 0x0027 (not null)

If we convert 0x001d to decimal, we get 29.  Counting over 29 bytes from the beginning of the record, we end up at the last ’61’ (the hexadecimal representation of ‘a’) in the record dump.

This represents the end of the first column.   The second column, which is NULL, has a length of zero, so it also ends at the same location, 0x001d.

The third two-byte entry, 0x0027, is 39 in decimal.  This correlates to the last ’62’ (the hexadecimal representation of ‘b’) in the record dump.   You’ll also notice that this is the same as the length of the record.

Since SQL Server has to store those extra two bytes for the NULL value, the second record is two bytes larger: 39 bytes.  Given that the third record has two NULL values between columns one and four, its record size is four bytes larger than the first record, or 41 bytes in total.
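
If you want to verify the record sizes without combing through DBCC PAGE output, sys.dm_db_index_physical_stats in DETAILED mode reports them as well.  A quick sketch; for this demo heap it should show a minimum record size of 37 bytes and a maximum of 41:

USE Scratch
GO
SELECT alloc_unit_type_desc, record_count,
       min_record_size_in_bytes, max_record_size_in_bytes, avg_record_size_in_bytes
FROM sys.dm_db_index_physical_stats(DB_ID('Scratch'), OBJECT_ID('dbo.NullPlacementDemo'), NULL, NULL, 'DETAILED')
GO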

Summary

At the end of the day, SQL Server is all about how many bytes it has to read and write to get the job done.  If you have a table with 100 columns, a mixture of nullable columns, and millions of records, could this impact you?  Possibly.  If the table design calls for nullable columns and you organize them appropriately, you will help reduce the number of bytes that SQL Server has to handle.  A number of factors may affect your mileage, of course, but reducing the amount of I/O is never a bad thing.


SQL Server 2008 R2 SP3 Install Errors

Whenever it is time to install a service pack, every DBA silently hopes that nothing will go wrong.  I would surmise that, in most cases, installing the service pack is a non-event.  No flashing lights, no fireworks, no applause and no major hiccups.  This post is not about one of those times.

The Install

Several weeks ago I attempted to install SQL Server 2008 R2 Service Pack 3 on one of our production instances.   The server was already at SP2, so it was to be a routine install. However, as the installer launched, an error appeared about a missing Windows Installer cache file.

The issue stems from missing files in the installer cache.  When applications, including SQL Server, are installed, certain files are placed within the installer cache.  These files are subsequently used when installing service packs and cumulative updates.  For some unknown reason, b40c768.msi was missing.

My initial search led me to a Microsoft Support page titled “How to restore the missing Windows Installer cache files and resolve problems that occur during a SQL Server update“.  Perfect.   The article has a lot of detail in it and explains the issue fairly well.  It suggests two methods to help resolve the issue, one being a script and the other manual intervention.   I opted for the former, the script.

The Script

The instructions are straightforward.

  1. Download the script to a folder on the affected server, e.g. C:\temp
  2. Open an elevated command prompt
  3. Navigate to the folder where the script is located
  4. Execute this command: Cscript FindSQLInstalls.vbs %computername%_sql_install_details.txt

The script will collect information about what packages may or may not be missing from the installer cache.  Note that the script will write a text file to the same folder as the script itself and will overwrite any files that might already be present.  If you wish to run this multiple times, I would suggest changing the file names accordingly.

In my case, the output of the script was shouting at me, stating that the MSI file doesn’t exist in the path D:\x64\set\sql_engine_core_inst_msi\.  This is because the install media isn’t located within the CD-ROM drive, which was expected.

The output also noted that the installer cache file b40c756.msi was located and no further action was needed.  If a particular cache file is missing, the output notifies you and provides the command to copy the missing MSI file from the install media to the location it specifies.

Now that I knew where the missing file might be located, I set out to find it.  In this particular case, I located the file on the Service Pack 2 install media.  If you weren’t aware, service packs (and cumulative updates) are self-extracting files.  You can manually extract the files to a location of your choice so that you can locate the file you need.
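
If you haven’t extracted one before, it’s done from a command prompt with the /x switch.  The file name below is purely illustrative; substitute whatever your service pack executable is actually named:

C:\Temp>SQLServer2008R2SP2-KBxxxxxxx-x64-ENU.exe /x:C:\Temp\SP2Extract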

Once I located the missing MSI file within the extracted contents, I was able to successfully copy it using the command provided in the output file.

At this point, it was rinse and repeat.  I would follow the same steps for each file that was missing from the installer cache.  All in all, there were 3 or 4 files that were missing.  Once I had all of the files replaced, I re-ran the SP3 installer file.

Then another error popped up.

Looks like we have another issue.  This error was much more cryptic and not very useful.  So of course, I went back to Google and did some more research.  In addition to researching, I also checked the event viewer on the server; nothing of interest was there.   I also thought to check the bootstrap log, and here I found an interesting error.


The error happened right after the log started to look at sql_as_CPU64, or b40c73a.msi. I thought that I had mis-copied this particular file, so I tried to copy it over again.  That did not solve the issue; I still got the same error.

I eventually turned to a forum I frequent, Ask SQL Server Central.   You can read the question that I posted here.  Unfortunately, I did not get any responses, which meant I had to keep trying to fix it myself.  I really did not want to call Microsoft Support, but I would if I had to.

TL;DR

The answer lies within the bootstrap log.  The error wasn’t referring to the line above it, as I had thought; the “sql_as_CPU64.msi” file was processed just fine.  The error was occurring on the file AFTER that one, and the file it was trying to consume was not listed.  Combing back through the bootstrap log for all of the missing installer cache files, I finally found one that wasn’t accounted for within the log file, so that had to be the one.

Turns out, when I copied the MSI file from the source media I mistakenly copied an MSP file, which is distinctly different.  I found the correct file and copied it back over to the server.

A relaunch of the installer led to… success!

Resources

Here are some additional resources that I found.  These are not listed in any particular order.

Lessons Learned

I learned a number of lessons throughout this process.

  1. Pay attention to the details.  There is a distinct difference between an MSI and an MSP file.  The former is an installer file; the latter is an MSI patch file.
  2. Keep notes on what you have tried.  This issue was a combination of several different things, and I should have kept better documentation on each one.  I found myself repeating steps I had already tried because I did not document them well enough.
  3. The internet is your friend.
  4. Do not be shy about using the resources available to you, e.g. forums.  They are there to help you.

Why not jump in and help?

If you know me, you know it is in my nature to help.  Whether it’s helping a friend move, helping with a technical issue or offering a shoulder to lean on, I am there.  I’m usually one of those “go to” kind of guys when you need that helping hand.  I find helping, in any context, to be very gratifying.

Helping Others

As a speaker, I often try to get people to start speaking.   It can be rewarding, both personally and professionally.   On more than one occasion, I have heard the excuse for not speaking: “I have nothing to say that hasn’t already been said”.  I will often disagree and argue that your story, regardless of the topic, is worth being told.  Whether this is through a presentation or a blog post, get out there and let it be heard.

I think this excuse also holds true for helping others in the SQL community.  Sometimes people do not want to help out for the following reasons:

  1. They don’t think they are qualified
  2. They do not want to get involved with someone else’s problem
  3. They might be wrong

Let’s look at all three of those.

Not qualified to answer.  Face it, we are not heart surgeons and we have no place being in the operating room helping perform surgery.  That’s not what we, as IT professionals, do on a daily basis.  You are, however, more than qualified to help out other IT professionals since they are doing the same type of work.  If I had to guess, even heart surgeons seek out the advice of their peers to solve problems and make themselves better surgeons.  Take ownership of your skills and offer to at least listen to others that might be having problems.  If you have a thought about their particular problem, offer it up.  Let them decide whether or not to take your advice.

Getting involved with someone’s problem.  This can be a tricky one to overcome, especially over the internet.  Yes, helping out means that you have to get involved and potentially get your hands dirty.  Knowing when to jump in and help can be tough, and sometimes there are individuals who are difficult to help; you will likely run across a few here and there.  However, there isn’t anything wrong with saying to them, “I can’t help you any further, reach out to so-and-so consulting and they can help”.   This allows them to find other avenues of help.  Know when you have reached your limit, and when you do, communicate that.

Being wrong.  I’ll be honest: I’m wrong from time to time.  I do not shy away from it; rather, I own up to it.  If I’m wrong in a public forum, I will usually say “I stand corrected and thanks to Billy Bob for setting me straight”.  There is absolutely nothing wrong with being wrong.  If you make a mistake, own up to it, learn from it, and move on.  The world will not stop if you are wrong.  The sun will rise in the morning and new problems will appear.

Being the Expert

I rarely claim that I am an expert in anything.  I am always striving to learn new things in all facets of life.  I am a firm believer that there is always more than one way to skin a cat.  I find that this is especially true when it comes to SQL Server.

When you are helping others you must keep this in mind.  They, or others that might be helping, might not know of a preferred way to accomplish something.  I believe that it is our job to educate not only the individual asking for help but any others that might be involved as well.

We should also strive to do this in a professional manner.  We should not call them out, regardless of the medium, for their lack of knowledge or for the advice that they give.  We should rather applaud them for trying to help others.   We certainly can tell them that they are wrong.  However, we must provide the reasons behind their incorrect advice so that they too will learn from the experience.

Summary

Whether you are giving a presentation or helping others on some technical forum, helping can be very rewarding.  I assure you that you have the knowledge, skills, and drive to help others.

If you’re the expert, take a moment to remember that we are all here for the same reason.

Note: No cats were harmed in writing this blog post


Database Growth

Last week I got into a conversation with a colleague about how SQL Server grows the database.  We were both fairly certain that the database would grow in specific sized increments.  We further assumed that the increment size would match the growth setting on the database.  However, we wanted to see it for ourselves.  Thus an experiment is born!

Hypothesis

When a database is forced to grow within SQL Server, it will do so in predefined increments rather than one large growth.  For example, if a transaction requires 100MB of additional disk space and the growth setting is equal to 1MB, the database would grow 100 times at 1MB each time rather than 100MB for a single time.

To test out the hypothesis we will do the following:

  1. Create a dummy database
  2. Evaluate the current grow setting of the database
  3. Evaluate the current size of the database
  4. Generate a transaction that will force the database to grow
  5. Evaluate the number of growth events
  6. Evaluate the final size of the database
  7. Compare the results

For simplicity, we will only be examining the data file of the database.  The log file exhibits the same behavior; however, to keep things simple, we will leave it out of the equation.

Let’s begin!

The Experiment

I will first create a database and then a single table.

 
USE tempdb
GO
IF db_id('DBFileGrowthDemo') IS NOT NULL
	BEGIN
		ALTER DATABASE DBFileGrowthDemo SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
		DROP DATABASE DBFileGrowthDemo
	END
GO
CREATE DATABASE DBFileGrowthDemo
GO
USE DBFileGrowthDemo
GO
CREATE TABLE SizeTest (Name NVARCHAR(4000))
GO

Note that the table has only a single column.  The column was defined as NVARCHAR(4000) simply because I wanted to be able to carefully calculate growth.  If I insert rows where the Name value is always 4,000 characters (8,000 bytes, since NVARCHAR uses two bytes per character), it is guaranteed that only one row will fit on a single page.  Remember that a single page is 8,192 bytes in size.

After the database is created, let’s take a look at the growth settings; this will help us confirm our hypothesis.  On my instance, the data file auto-growth is set to 1MB and the data file currently has a size of 2,240KB.
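
If you prefer a query over the database properties dialog, something along these lines against sys.database_files should return the same numbers.  This is a quick sketch; size and growth are stored in 8KB pages, hence the multiplication:

USE DBFileGrowthDemo
GO
SELECT name,
       size * 8 AS current_size_kb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS VARCHAR(10)) + ' percent'
            ELSE CAST(growth * 8 AS VARCHAR(10)) + ' KB'
       END AS growth_setting
FROM sys.database_files
WHERE type_desc = 'ROWS'
GO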

Assume that we want the data file to grow ten times at 1,024KB (1MB) intervals, or 10MB in total: 10 x 1,024KB = 10,240KB.  This is also equivalent to 10,485,760 bytes, i.e. 10 x 1024 x 1024.

If the original size is 2,240KB (as shown above), the math dictates that after growing by 10MB our final data file size should be 10,240KB + 2,240KB = 12,480KB.

If I know that I want to grow by 10,240KB and a single page is 8,192B, then I would need to insert 10,485,760B/8192B = 1,280 rows total.   Note that in order to make the math easier I used bytes on both sides of the equation.

Let’s try it and see if the math is correct.

USE DBFileGrowthDemo
GO
SELECT GETDATE()
GO
INSERT SizeTest (name)
    SELECT REPLICATE('a',4000)
GO 1280

Notice that the current date and time are returned; this will be used later when determining how many times the database actually grew.  Now that the rows have been inserted, what is the resulting database size?

The math was correct: 1,280 rows were inserted into the table, and the data file grew by 10MB to 12,480KB.

Results

What about the original hypothesis?  Did the database indeed grow 10 times?  We can find this out by using the default trace, assuming that it has not yet been turned off.

Note that using the default trace is just one option; we could also use Extended Events.  I do not believe, however, that database growth is tracked by the default system_health extended events session, so you would need to configure your own extended events session to track it.
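
For what it’s worth, a minimal extended events session for this might look something like the sketch below.  I am assuming a version that exposes the database_file_size_change event; I did not use this for the demo, it is simply another option.

CREATE EVENT SESSION [TrackFileGrowth] ON SERVER
ADD EVENT sqlserver.database_file_size_change(
    ACTION(sqlserver.database_name, sqlserver.sql_text)
    WHERE ([sqlserver].[database_name] = N'DBFileGrowthDemo'))
ADD TARGET package0.event_file(SET filename = N'TrackFileGrowth')
WITH (STARTUP_STATE = OFF)
GO
ALTER EVENT SESSION [TrackFileGrowth] ON SERVER STATE = START
GO

Back to the default trace; the query below pulls the growth events for our database: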

DECLARE @file VARCHAR(1000)

SELECT @file = path
FROM sys.traces
WHERE is_default = 1

SELECT Databaseid, tg.EventClass, DatabaseName, FileName, LoginName,StartTime, EndTime,
    CASE
          WHEN mf.is_percent_growth = 1 THEN (mf.size*(growth*.01))*8192/1024/1024
          ELSE CAST(growth AS BIGINT)*8192/1024/1024
    END as 'Growth Rate(MB)'
FROM sys.fn_trace_gettable(@file,1) tg
       inner join sys.trace_events te on tg.eventclass = te.trace_event_id
       inner join sys.trace_categories tc on te.category_id = tc.category_id
       inner join sys.master_files mf on tg.databaseid = mf.database_id and tg.filename = mf.name
WHERE te.name in ('Data File Auto Grow')
	and FileName = 'DBFileGrowthDemo'
	and StartTime > '2015-05-09 16:50:50.240'  -- this is the time stamp from when the rows were first inserted.  
GO

What did this return?


10 rows!!  This means that the database did grow in ten 1MB increments, just as we expected!  You can find the above script that I used here.

The hypothesis is confirmed!

Summary

While this experiment was fun to put together, it also showcases how important database growth management is.  The results shown here are just a small example of what really happens.  What if you had a large database with many transactions running through it, triggering many growth events? If the growth settings are not managed properly, there could be unwarranted overhead placed on the server.

Take a look at the databases within your servers.  The growth settings just might surprise you.


Free Your Mind. Release Your Inner Developer

I have always liked the drawings done by MC Escher.  Looking at his drawings has always forced me to look at things from a different perspective. SQL Server, like his drawings, always gives room for many different perspectives. This is one of mine.

About a year ago, I was able to present at SQL Saturday #307 in Iowa City, IA.   I gave a session titled “10 Things Every DBA Should Know!”.  The content covered topics ranging from “SELECT *” to “AUTOSHRINK” to “Set Based Programming vs RBAR”, among others.

After presenting, I looked at my feedback, something I look forward to after each talk I give. Feedback is important to me as a speaker because I cannot improve if I do not know what is broken.  One comment in particular caught my attention.  It read something similar to this:

“Do more DBA level stuff, and not Dev related stuff”

This to me was great feedback.  I actually made a note to address it in future presentations on this particular topic and to explain why I include Dev-related material.

While I can completely understand this point of view, I believe that understanding the development aspects of SQL Server is essential to being a good database administrator.  If understanding development-related items did not matter, then we wouldn’t care about the quality of code that is developed, nor the effects that quality has on SQL Server.

For example, if you don’t pay attention to development-related aspects of SQL Server and you came across this query: “SELECT * FROM dbo.Foo”, would you be able to recognize that it might be a bad idea?  Of course, it depends on your environment, but in general, this is a bad pattern to have in your databases. Understanding the “whys” is important.

Granted, there might be times where this query would make sense and possibly be warranted, but if your table is 326 columns wide (note: that’s not a goal, and such tables do exist), it’s probably not a good idea.   The query is going to cause increased overhead in a number of areas, such as network bandwidth, disk I/O, etc.

Another example would be scalar functions.  I see a lot of these in my current environment.  Scalar functions can play havoc with the query optimizer, and when that havoc presents itself, your SQL Server will usually reflect it accordingly and you’ll see degraded performance.  If I did not care about the development aspects, I would not be able to recognize the bad pattern and adjust accordingly.
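
To make that concrete, here is a contrived illustration; the table and function are hypothetical, not from any real system.  The first query calls a scalar function once per row and hides its cost from the optimizer, while the second expresses the same logic inline so it can be treated as a normal set-based operation:

CREATE FUNCTION dbo.GetFullName (@FirstName VARCHAR(50), @LastName VARCHAR(50))
RETURNS VARCHAR(101)
AS
BEGIN
    RETURN @FirstName + ' ' + @LastName
END
GO
-- Invoked row by row for every row returned
SELECT dbo.GetFullName(FirstName, LastName) AS FullName FROM dbo.Customers
GO
-- Same result expressed inline
SELECT FirstName + ' ' + LastName AS FullName FROM dbo.Customers
GO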

There are probably countless examples that I could use to further drive home the point that as a DBA you should know as much about your SQL Servers as possible.  This includes not only the administration concepts and methods but also the developmental ones.

If you understand both sides of the picture, you will be a much better DBA because of it.

 


SQL Saturday 392 – Atlanta, GA

Have you been to Atlanta, Georgia? Nope? Well, it’s time you plan that trip and come see me speak there on May 16th at SQL Saturday #392.  This will be my first time presenting at their SQL Saturday and I’m really looking forward to this great event. According to what I am seeing on Twitter, they have already hit 600+ attendees and we are still a couple weeks out.  This particular event is probably one of the largest events in the country (world?) and shouldn’t be missed.

If you like free training, hanging out with other geeks and if you have nothing better to do on Saturday, May 16th, 2015, come on out and get your learn on.  The schedule for this event is very impressive and there will be a TON of things to learn.  Come, join us and learn something.

Only You Can Prevent Database Fires!

Do you find yourself constantly putting out “fires”? Can’t take any more heat in the data center? We as DBAs can easily become burnt out dealing with these daily blazes. Wouldn’t it be grand to learn how to prevent these fires from scorching us to begin with? In this session, we will jump into things you can start implementing as “fire prevention” to stop the “fire fighting”.  We will talk about several things that you can take back to your shop and easily put into action, ranging from code tuning, backup maintenance, monitoring and performance tuning!

I have one of the last sessions of the day, so I promise to try to stay on time so that we can get you out the door and to any after parties that may or may not be occurring.  I do reserve the right to go over when delivering awesome content.

Free Training

Can you really pass up free training?  If you’re still on the fence about attending, get off the fence, register, come on out and learn something.  Meet some new people.  Say hello to the speakers (we don’t bite).  Grab some swag and a fantastic lunch.   Register now!!

Get your learn on!  Expand your horizons and break out of the ordinary.  You know you want to.

Why are you still here?  Go.  Register.  Do it now! See you in Atlanta!!


Quick Script: Finding Database Owners & Security Admins

http://www.flickr.com/photos/mag3737/1420554128/

Security Is Everybody’s Responsibility

Recently, while diagnosing a permissions issue on a Production database, I realized that I didn’t really have any idea who had elevated permissions to any of the databases on the instance. In particular, I was interested in who the members of two specific database roles were, if any: database owner and security admin.

Database Roles

Let’s start with the Database Owner (db_owner). This role basically allows the user to do pretty much whatever they want to the database itself as well as all of the contents within it. Members of this role could drop tables, read data, add views, delete data or even drop the entire database, to name just a few things. It is a definite security concern that you should be aware of, and membership in it should be granted minimally.

Security Admin (db_securityadmin) is another database role that you should be cautious of.  Members of this role have the ability to elevate themselves or others to the database owner role.  Given that the database owner can do essentially anything to the database, you definitely want to verify who is listed as a Security Admin.

Use the below script to discover which accounts have these roles granted to it.

DISCLAIMER:  While I’ve been able to run it across multiple servers, over several hundred databases without an issue, your mileage will vary.  Please test accordingly.   NEVER run code that you find on the internet against your Production systems without testing first.  You have been warned!

/***************************************************************
-- Author: John Morehouse
-- Date: April 2015
-- T: @SQLRUS
-- E: john@jmorehouse.com
-- B: http://sqlrus.com

--THIS CODE AND INFORMATION ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE.

--IN OTHER WORDS: USE AT YOUR OWN RISK.

--AUTHOR ASSUMES ZERO LIABILITY OR RESPONSIBILITY.

--You may alter this code for your own purposes.
--You may republish altered code as long as you give due credit.
***************************************************************/
DECLARE @dbs AS TABLE ( name sysname, processed BIT)
DECLARE @x INT = 1
DECLARE @sql VARCHAR (2000)
DECLARE @dbName VARCHAR (50)

IF object_id ('tempdb..#results') IS NOT NULL
        BEGIN
               DROP TABLE #results
        END

CREATE TABLE #results ( dbName sysname
                        , principalName VARCHAR (100)
                        , principalType VARCHAR (100)
                        , RoleMembership varchar(100)
                        , defaultSchema VARCHAR (100)
                        , principalSID varbinary(85)
                       )

INSERT INTO @dbs ( name, processed )
        SELECT name, 0 FROM sys.databases
               WHERE database_id > 6
                      AND [state] = 0 --online

WHILE @x <= (SELECT COUNT( 1) FROM @dbs WHERE processed = 0 )
        BEGIN
               SELECT TOP 1 @dbName = name FROM @dbs WHERE processed = 0
               SET @sql =
              'INSERT #results (dbName,principalName, principalType, RoleMembership, defaultSchema, principalSID)
                     SELECT '''+ @dbname +''',dp.name, dp.type_desc, dpr.name,dp.default_schema_name,dp.sid
                     FROM [' + @dbName + '].sys.database_role_members drm
                           INNER JOIN [' + @dbName + '].sys.database_principals dp on drm.member_principal_id = dp.principal_id
                           INNER JOIN [' + @dbName + '].sys.database_principals dpr on drm.role_principal_id = dpr.principal_id
                     WHERE dpr.name in (''db_owner'', ''db_securityadmin'')
                           AND dp.name <> ''dbo'''
        BEGIN TRY
             EXEC (@sql )
        END TRY
        BEGIN CATCH
               SELECT ERROR_LINE () AS 'Error Line'
                      , ERROR_MESSAGE () AS 'Error Message'
                      , ERROR_NUMBER () AS 'Error Number'
                      , @dbName AS 'Database'
        END CATCH

        UPDATE @dbs
        SET processed = 1
        WHERE name = @dbName
END
GO
SELECT * FROM #results
GO

Using this script in conjunction with our Central Management Server, I was able to run it on all Production servers.  It quickly allowed me to see where I might have a security concern in each database on each server.
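
As a quick follow-up, the principalSID column in #results makes it easy to tie a database user back to its server-level login.  A rough sketch, run in the same session as the collection script:

SELECT r.dbName, r.principalName, r.RoleMembership,
       sp.name AS server_login, sp.type_desc AS login_type
FROM #results AS r
LEFT JOIN sys.server_principals AS sp
    ON sp.sid = r.principalSID
ORDER BY r.dbName, r.RoleMembership
GO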

Once I was able to determine which permissions needed to be altered, we could address them accordingly.  Identifying these and making the appropriate changes helped to make our production environment that much more secure.

Enjoy!


2015 Program Committee

Last year, I was very much honored to be selected to deliver a pre-con with my friend and brother Chris Shaw (B|T).   I was both excited and nervous at the same time. Regrettably, there were some rough moments in the days that followed the selection announcements, when people found out that their abstract wasn’t selected.   Comments were said, tweets were tweeted and blog posts were written.   Regardless of the kerfuffle that was on the blogosphere, the experience was amazing and I’m glad that I had the opportunity.

Be Part of The Process

This year I volunteered to be on the Program Committee for the 2015 PASS Summit.  I thought that being a part of the process would help give me better insight on how things really work.  It also gives me a chance to give my feedback and help to improve the process.

I wasn’t sure what to expect; therefore, I told the committee to put me wherever they thought best.   There are a number of options for committee members to volunteer for, such as reviewing abstracts, speakers, or slide decks, or volunteering for the event itself. It really didn’t matter to me which I was on, as long as I was able to help.

They ended up placing me onto an abstract review committee where I have been tasked, along with several others, to review 200+ abstracts for the BI Platform: Architecture, Development & Administration track.  They gave us a deadline of getting all of these reviewed by the end of April.   I like the fact that, since we have multiple people on the team, each abstract is reviewed by several people, which allows for a fair and decent review process.

As the review process was spinning up, there was a lot of shifting of people in and out of review teams.  For fairness, if you submitted an abstract for the Summit and are on the Program Committee, you are not allowed to be on the review team for the track you submitted for.  So many members, like me, were moved around.  The Program Committee is very keen on keeping the process as clean as possible.  To me, there is some room for improvement in how these shifts are communicated.  Since I got involved to help better the process, I sent feedback up to the appropriate people.

Remember, We Are family

Last year, like in years past, I saw a number of comments made in various media outlets about who was or wasn’t selected to speak.  Surprisingly, in reading these comments, I noticed that they came off as potentially very hurtful and damaging to those that were selected.   Given how proudly we tout our SQL Family, I was taken aback.  This year, I ask that our family please take a moment to think about how your message might be conveyed before communicating it.

It has become very clear to me through this Program Committee process that just because you are an MVP, MCM, MD, DDS, MSCE, MCSA, MCP, XYZ or your name is “John Doe”, it does not guarantee a slot to speak.  The PASS Summit is a community event and as such all community speakers should be treated identically.

This process also ensures that no one can “own” a particular subject.  If someone else wishes to do a session on “ABC” and you are known as the subject matter expert on “ABC”, tough.  Someone other than you can present on the same topic and/or concepts.  I fully agree that while they can speak on the same subject, this does not give them the right to plagiarize content from you or from anybody else.

The Program Committee is striving very hard to make the process as fair and balanced as possible.

You Didn’t Make the Cut

If you submitted for the 2015 PASS Summit and your abstract(s) were not accepted, here’s how I would handle this:

  1. Congratulate those that did make it.  They worked hard on their abstracts and it showed.  Take the higher road.
  2. Thank the Program Committee, the review teams and PASS for doing the best that they could.  In my opinion, in regards of abstract reviews, the process is pretty fair and equitable.
  3. Ask for the comments from your abstracts.  Last year, the Program Committee would send you the comments from the abstract review process.  I am hoping that they will be doing this again for 2015.
  4. If you have feedback, offer it in a constructive manner to the Program Committee.  The Program Committee cannot fix things if they don’t know about it.  I would suggest doing this privately at first and then publicly if you think that it’s warranted.  Starting a flaming war on a blog post or Twitter is NOT the way to handle it.
  5. As Gunny Highway says, “You improvise. You adapt. You overcome”.   Re-work your abstracts, get another set of eyes on them, and submit again for 2016.  If it’s a session that you’ve never presented before, get it out to your local user group or a SQL Saturday.  Those are great avenues to test the waters for a new presentation. If you want someone to review it, feel free to email it to me at john at jmorehouse dot com.  I’d be more than happy to review it.
  6. Let it go.  Life is way too short to dwell on this.   There will be other days and other conferences.  Keep at it.

Summary

I, myself, submitted three abstracts. I am going into this knowing I may or may not get selected. If it turns out that I am, it will be grand and I’ll do my best for the community.  I know there are some really fantastic speakers out there that also submitted, and it will not be the end of the world if I am not chosen.  Tomorrow will come and the sun will rise.

If and when a rejection does come to PASS (pun intended), I’ll publicly offer up the comments from the review committee on my blog.  Then I will work on improving because whether or not I’m selected does NOT change my passion for teaching and giving back.

I, like you, trust the process to be fair and honest.  I put my faith into the reviewers to do the best that they can.

You should do the same.
