SQL Server 2008 R2 SP3 Install Errors

Whenever it is time to install a service pack every DBA silently hopes that nothing will go wrong.  I would surmise that, in most cases, installing the service pack is a non-event.  No flashing lights, no fireworks, no applause and no major hiccups.  This post is not about one of those times.

The Install

Several weeks ago I attempted to install SQL Server 2008 R2 Service Pack 3 on one of our production instances.   The server was already at SP2 so it was to be a routine install. However, as the installer launched, this error appeared:

[Screenshot: sp3setuperror_1]

The issue stems from missing files in the Windows Installer cache (%windir%\Installer). When applications, including SQL Server, are installed, certain files are placed in the installer cache. These files are subsequently used when installing service packs and cumulative updates. For some unknown reason, b40c768.msi was missing.

My initial search led me to a Microsoft Support page titled "How to restore the missing Windows Installer cache files and resolve problems that occur during a SQL Server update". Perfect. The article is detailed and explains the issue fairly well. It suggests two methods to resolve the issue: a script, or manual intervention. I opted for the former, the script.

The Script

The instructions are straightforward.

  1. Download the script to a folder on the affected server, e.g., C:\temp
  2. Open an elevated command prompt
  3. Navigate to the folder where the script is located
  4. Execute this command: Cscript FindSQLInstalls.vbs %computername%_sql_install_details.txt

The script collects information about which packages may or may not be missing from the installer cache. Note that the script writes its output to a text file in the same folder as the script itself, and it will overwrite any file of the same name that is already present. If you wish to run it multiple times, I would suggest changing the file names accordingly.

The output of the script looks like this:

[Screenshot: sp3_script_output]


You will see in the above example that it is complaining that the MSI file doesn't exist in the path D:\x64\set\sql_engine_core_inst_msi\. This is because the install media isn't in the CD-ROM drive, which was expected.

You will further notice that it says the installer cache file b40c756.msi was located and no further action is needed. If this particular cache file had been missing, the output would say so, and I would use the command it provides to copy the missing MSI file from the install media to the location it specifies.
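If a file had been missing, the copy command in the output would look something like this. The paths and file names below are purely illustrative; always use the exact command from your own output file:

copy "D:\x64\setup\sql_engine_core_inst_msi\sql_engine_core_inst.msi" "C:\Windows\Installer\b40c756.msi"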

Now that I knew where the missing file might be located, I set out to find it. In this particular case, I located the file on the Service Pack 2 install media. If you weren't aware, service packs (and cumulative updates) are self-extracting files; you can manually extract their contents to a location of your choice and pull out the file you need.
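For example, extracting a service pack from the command line looks roughly like this; the package name below is illustrative, so substitute the file you actually downloaded:

SQLServer2008R2SP2-KB2630458-x64-ENU.exe /x:C:\temp\SP2Extract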

Once I located the missing MSI file in the extracted contents, I was able to copy it successfully using the command provided in the output file.

At this point, it was rinse and repeat: I followed the same steps for each file that was missing from the installer cache. All in all, there were three or four missing files. Once I had them all replaced, I re-ran the SP3 installer.

Then this happened.

[Screenshot: sp3_error]

Looks like we have another issue. This error was much more cryptic and not very useful, so of course I went back to Google and did more research. I also checked the event viewer on the server; nothing of interest was there. Then I thought to check the bootstrap log, where I found an interesting error:

[Screenshot: sp3_error_2]


You will note that the error happened right after it started to look at sql_as_CPU64, or b40c73a.msi (two lines above the censored section). I thought that I had mis-copied this particular file, so I copied it over again. That did not solve the issue; I still got the same error.

I eventually turned to a forum I frequent, Ask SQL Server Central; you can read the question that I posted here. Unfortunately, I did not get any responses, which meant I had to keep trying to fix it myself. I really did not want to call Microsoft Support, but I would have if I had to.

TL;DR

The answer lies within the bootstrap log. The error wasn't referring to the line above it, as I had thought; the sql_as_CPU64.msi file was processed just fine. The error was occurring on the file AFTER that one. However, you will notice that the file it was trying to consume is not listed. Using the bootstrap log, I combed back through it looking for all of the installer cache files it referenced. I finally found one that wasn't accounted for within the log file, so that had to be the one.
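If you want to try the same approach, something as simple as findstr can pull every MSI reference out of the bootstrap logs for review. The path below is the default log location for SQL Server 2008 R2; adjust it for your installation:

findstr /s /i /l ".msi" "C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\*.txt"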

Turns out, when I copied the MSI file from the source media I had mistakenly copied an MSP file, which is distinctly different. I found the correct file and copied it back over to the server.

A relaunch of the installer led to… success!

[Screenshot: Success]


Lessons Learned

I learned a number of lessons throughout this process.

  1. Pay attention to the details. There is a distinct difference between an MSI and an MSP file: the former is an installer file, while the latter is an installer patch file.
  2. Keep notes on what you have tried. This issue turned out to be a combination of several different problems, and I should have documented each attempted fix; I found myself repeating steps because I had not written them down.
  3. The internet is your friend.
  4. Do not be shy about using the resources available to you, e.g., forums. They are there to help you.

Why not jump in and help?

[Image: help!]

If you know me, you know it is in my nature to help. Whether it's helping a friend move, assisting with a technical issue, or offering a shoulder to lean on, I am there. I'm usually one of those "go-to" guys when you need a helping hand. I find helping, in any context, very gratifying.

Helping Others

As a speaker, I often try to get people to start speaking. It can be rewarding, both personally and professionally. On more than one occasion, I have heard the excuse "I have nothing to say that hasn't already been said." I will disagree and argue that your story, regardless of the topic, is worth telling. Whether through a presentation or a blog post, get out there and let it be heard.

I think this excuse also holds true for helping others in the SQL community. People sometimes do not want to help out for the following reasons:

  1. They don’t think they are qualified
  2. They do not want to get involved with someone else's problem
  3. They might be wrong

Let’s look at all three of those.

Not qualified to answer. Face it: we are not heart surgeons, and we have no place in the operating room helping perform surgery. That's not what we, as IT professionals, do on a daily basis. You are, however, more than qualified to help other IT professionals, since they do the same type of work you do. If I had to guess, even heart surgeons seek the advice of their peers to solve problems and become better surgeons. Take ownership of your skills and offer to at least listen to others who might be having problems. If you have a thought about their particular problem, offer it up; let them decide whether or not to take your advice.

Getting involved with someone's problem. This can be a tricky one to overcome, especially over the internet. Yes, helping out means that you have to get involved and potentially get your hands dirty. Knowing when to jump in can be tough, and some individuals are simply difficult to help; you will likely run across a few here and there. However, there is nothing wrong with saying, "I can't help you any further; reach out to so-and-so consulting and they can help you." This allows them to find other avenues of help. Know when you have reached your limit, and when you do, communicate it.

Being wrong. I'll be honest: I'm wrong from time to time. I do not shy away from it; rather, I own up to it. If I'm wrong in a public forum, I will usually say, "I stand corrected, and thanks to Billy Bob for setting me straight." There is absolutely nothing wrong with being wrong. If you make a mistake, own up to it, learn from it, and move on. The world will not stop if you are wrong. The sun will rise in the morning and new problems will appear.

Being the Expert

I rarely claim that I am an expert in anything.  I am always striving to learn new things in all facets of life.  I am a firm believer that there is always more than one way to skin a cat.  I find that this is especially true when it comes to SQL Server.

When you are helping others you must keep this in mind.  They, or others that might be helping, might not know of a preferred way to accomplish something.  I believe that it is our job to educate not only the individual asking for help but any others that might be involved as well.

We should also strive to do this in a professional manner. We should not call people out, regardless of the medium, for their lack of knowledge or for the advice they give; we should rather applaud them for trying to help others. We certainly can tell them that they are wrong, but we must provide the reasoning behind why the advice is incorrect so that they, too, learn from the experience.

Summary

Whether you are giving a presentation or helping others on some technical forum, helping can be very rewarding.  I assure you that you have the knowledge, skills, and drive to help others.

If you’re the expert, take a moment to remember that we are all here for the same reason.

Note: No cats were harmed in writing this blog post


Database Growth

Last week I got into a conversation with a colleague about how SQL Server grows a database. We were both fairly certain that the database would grow in specific increments, and we further assumed that the increment size would match the growth setting on the database. However, we wanted to see it for ourselves. Thus, an experiment was born!

Hypothesis

When a database is forced to grow within SQL Server, it will do so in predefined increments rather than in one large growth. For example, if a transaction requires 100MB of additional disk space and the growth setting is 1MB, the database will grow 100 times at 1MB each, rather than once by 100MB.

To test out the hypothesis we will do the following:

  1. Create a dummy database
  2. Evaluate the current growth setting of the database
  3. Evaluate the current size of the database
  4. Generate a transaction that will force the database to grow
  5. Evaluate the number of growth events
  6. Evaluate the final size of the database
  7. Compare the results

For simplicity, we will only examine the data file of the database. The log file exhibits the same behavior, but to keep things easy we will leave it out of the equation.

Let’s begin!

The Experiment

I will first create a database and then a single table.

 
USE tempdb
GO
IF db_id('DBFileGrowthDemo') IS NOT NULL
	BEGIN
		ALTER DATABASE DBFileGrowthDemo SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
		DROP DATABASE DBFileGrowthDemo
	END
GO
CREATE DATABASE DBFileGrowthDemo
GO
USE DBFileGrowthDemo
GO
CREATE TABLE SizeTest (Name NVARCHAR(4000))
GO

Note that the table has only a single column, defined as NVARCHAR(4000) simply because I wanted to be able to carefully calculate growth. If I insert rows where Name is always 4,000 characters, each row occupies roughly 8,000 bytes (NVARCHAR stores two bytes per character), which guarantees that only one row will fit on a single page. Remember that a single page is 8,192 bytes in size.

After the database is created, let's take a look at the growth settings; this will help us confirm our hypothesis. We can see below that the data file auto-growth is set to 1MB.

[Screenshot: dbfilegrowthdemo_1]

You can also see that the data file currently has a size of 2,240KB as shown below.

[Screenshot: dbfilegrowthdemo_2]
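If you prefer T-SQL over screenshots, the same information is available from sys.database_files. This is a quick sketch; note that size and growth are stored in 8KB pages, so the query converts them:

USE DBFileGrowthDemo
GO
SELECT name,
       size * 8 AS size_kb,    -- size is stored in 8KB pages
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS VARCHAR(10)) + '%'
            ELSE CAST(growth * 8 AS VARCHAR(10)) + ' KB'
       END AS growth_setting
FROM sys.database_files;
GO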

Assume that we want the data file to grow ten times at 1,024KB (1MB) intervals, or 10MB in total. 1MB is equal to 1,024KB, so 10 x 1,024KB = 10,240KB. This is also equivalent to 10,485,760 bytes, i.e. 10 x 1,024 x 1,024.

If the original size is 2,240KB (as shown above) the math would dictate that growing by 10MB our final data file size would be 10,240KB + 2,240KB = 12,480KB.

If I know that I want to grow by 10,240KB and a single page is 8,192 bytes, then I need to insert 10,485,760B / 8,192B = 1,280 rows in total. Note that to make the math easier I used bytes on both sides of the equation.

Let’s try it and see if the math is correct.

USE DBFileGrowthDemo
GO
SELECT GETDATE()
GO
INSERT SizeTest (name)
    SELECT REPLICATE('a',4000)
GO 1280

Notice that the current date and time is returned; this will be used later when determining how many times the database actually grew. Now that the rows have been inserted, what is the resulting database size?

[Screenshot: dbfilegrowthdemo_3]

As shown above, the math is correct: 1,280 rows were inserted into the table, and the database grew by 10MB.

Results

What about the original hypothesis?  Did the database indeed grow 10 times?  We can find this out by using the default trace, assuming that it has not yet been turned off.

Note that the default trace is just one option; we could also use Extended Events. I do not believe database growth is tracked by the default system_health Extended Events session, however, so you would need to configure your own session to track it.
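For completeness, here is a minimal sketch of such a session. It assumes a version of SQL Server where the sqlserver.database_file_size_change event is available (it is not present in 2008 R2, where the default trace remains the simplest option):

CREATE EVENT SESSION TrackFileGrowth ON SERVER
ADD EVENT sqlserver.database_file_size_change
    (ACTION (sqlserver.sql_text)
     WHERE (sqlserver.database_name = N'DBFileGrowthDemo'))
ADD TARGET package0.event_file (SET filename = N'TrackFileGrowth')
WITH (STARTUP_STATE = OFF);
GO
ALTER EVENT SESSION TrackFileGrowth ON SERVER STATE = START;
GO

Back to the default trace. The following query pulls the growth events out of it: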

DECLARE @file VARCHAR(1000)

-- Locate the current default trace file
SELECT @file = path
FROM sys.traces
WHERE is_default = 1

SELECT tg.DatabaseID, tg.EventClass, tg.DatabaseName, tg.FileName, tg.LoginName, tg.StartTime, tg.EndTime,
    CASE
          WHEN mf.is_percent_growth = 1 THEN mf.size * (growth * .01)
          ELSE CAST(growth AS BIGINT) * 8192 / 1024 / 1024  -- growth is stored in 8KB pages; convert to MB
    END AS 'Growth Rate(MB)'
FROM sys.fn_trace_gettable(@file, 1) tg
       INNER JOIN sys.trace_events te ON tg.EventClass = te.trace_event_id
       INNER JOIN sys.trace_categories tc ON te.category_id = tc.category_id
       INNER JOIN sys.master_files mf ON tg.DatabaseID = mf.database_id AND tg.FileName = mf.name
WHERE te.name IN ('Data File Auto Grow')
	AND tg.FileName = 'DBFileGrowthDemo'
	AND tg.StartTime > '2015-05-09 16:50:50.240'  -- the timestamp returned when the rows were first inserted
GO

What did this return?

[Screenshot: dbfilegrowthdemo_4]

Ten rows! This means the database did grow in ten 1MB increments, just as we expected! You can find the script that I used here.

The hypothesis is confirmed!

Summary

While this experiment was fun to put together, it also showcases how important database growth management is. The results shown here are just a small example of what really happens. What if you had a large database with many transactions running through it, triggering many growth events? If the growth settings are not managed properly, there could be unwarranted overhead placed on the server.

Take a look at the databases within your servers.  The growth settings just might surprise you.


Free Your Mind. Release Your Inner Developer

[Image: mcescher_globe]

I have always liked the drawings of M.C. Escher. Looking at his work has always forced me to consider things from a different perspective. SQL Server, like his drawings, leaves room for many different perspectives. This is one of mine.

About a year ago,  I was able to present at SQL Saturday #307 in Iowa City, IA.   I gave a session titled “10 Things Every DBA Should Know!”.  The content included such things ranging from “SELECT *” to “AUTOSHRINK” to “Set Based Programming vs RBAR” among others.

After presenting, I looked at my feedback, something I look forward to after each talk I give. Feedback is important to me as a speaker because I cannot improve if I do not know what is broken. One comment in particular caught my attention. It read something like this:

“Do more DBA level stuff, and not Dev related stuff”

This, to me, was great feedback. I made a note to address it and explain why I include development material in future presentations on this topic.

While I can completely understand this point of view, I believe that understanding the development aspects of SQL Server is essential to being a good database administrator. If understanding development did not matter, then we wouldn't care about the quality of the code that is deployed, nor the effects that quality has on SQL Server.

For example, if you don't pay attention to the development-related aspects of SQL Server and you ran across this query: "SELECT * FROM dbo.Foo", would you be able to recognize that it might be a bad idea? Of course, it depends on your environment, but in general this is a bad pattern to have in your databases. Understanding the "whys" is important.

Granted, there might be times when this query makes sense and is possibly warranted, but if your table is 326 columns wide (note: that's not a goal, and such tables do exist) it's probably not a good idea. The query will cause increased overhead in a number of areas, such as network bandwidth, disk I/O, etc.
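As a trivial illustration (dbo.Foo and its columns are made up for the example), compare pulling the entire row versus only what the application actually consumes:

-- Drags all 326 columns across the wire, needed or not
SELECT * FROM dbo.Foo;

-- Returns only what the caller actually uses
SELECT FooID, FooName FROM dbo.Foo;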

Another example would be scalar functions. I see a lot of these in my current environment. Scalar functions can play havoc with the query optimizer, and when that havoc presents itself your SQL Server will reflect it accordingly: you'll see degraded performance. If I did not care about the development aspects, I would not be able to recognize the bad pattern and adjust accordingly.
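Here is a sketch of the pattern; the table and function names are invented for illustration. The scalar function runs once per row and its cost is largely invisible to the optimizer, while the equivalent join lets the optimizer do its job:

-- A scalar UDF that hides a lookup inside a function call
CREATE FUNCTION dbo.fn_GetRegion (@CustomerID INT)
RETURNS VARCHAR(50)
AS
BEGIN
    RETURN (SELECT Region FROM dbo.Customers WHERE CustomerID = @CustomerID);
END
GO

-- Called once per row returned by the query
SELECT OrderID, dbo.fn_GetRegion(CustomerID) AS Region
FROM dbo.Orders;

-- The same logic as a join, which the optimizer can reason about
SELECT o.OrderID, c.Region
FROM dbo.Orders o
     INNER JOIN dbo.Customers c ON c.CustomerID = o.CustomerID;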

There are probably countless examples that I could use to further drive home the point that as a DBA you should know as much about your SQL Servers as possible.  This includes not only the administration concepts and methods but also the developmental ones.

If you understand both sides of the picture, you will be a much better DBA because of it.

 


SQL Saturday 392 – Atlanta, GA

[Image: SQLSAT392_web]

Have you been to Atlanta, Georgia? No? Well, it's time you planned that trip and came to see me speak on May 16th at SQL Saturday #392. This will be my first time presenting at their SQL Saturday, and I'm really looking forward to this great event. According to what I am seeing on Twitter, they have already hit 600+ attendees, and we are still a couple of weeks out. This is probably one of the largest such events in the country (world?), and it shouldn't be missed.

If you like free training, hanging out with other geeks, and have nothing better to do on Saturday, May 16th, 2015, come on out and get your learn on. The schedule for this event is very impressive and there will be a TON of things to learn. Come join us and learn something.

Only You Can Prevent Database Fires!

[Image: firefighting]

Do you find yourself constantly putting out "fires"? Can't take any more heat in the data center? We as DBAs can easily become burned out dealing with these daily blazes. Wouldn't it be grand to learn how to prevent these fires from scorching us in the first place? In this session, we will jump into things you can start implementing as "fire prevention" to stop the "fire fighting." We will talk about several things you can take back to your shop and put into action easily, ranging from code tuning and backup maintenance to monitoring and performance tuning!

I'm one of the last sessions of the day, so I promise to try to be on time so that we can get you out the door and off to any after-parties that may or may not be occurring. I do reserve the right to run over when delivering awesome content.

Free Training

Can you really pass up free training? If you're still on the fence about attending, get off the fence, register, come on out, and learn something. Meet some new people. Say hello to the speakers (we don't bite). Grab some swag and a fantastic lunch. Register now!

Get your learn on!  Expand your horizons and break out of the ordinary.  You know you want to.

Why are you still here?  Go.  Register.  Do it now! See you in Atlanta!!


Quick Script: Finding Database Owners & Security Admins

[Image credit: http://www.flickr.com/photos/mag3737/1420554128/]

Security Is Everybody’s Responsibility

Recently, while diagnosing a permissions issue on a production database, I realized that I didn't really have any idea who had elevated permissions to any of the databases on the instance. In particular, I was interested in who the members of two specific database roles were, if any: database owner and security admin.

Database Roles

Let's start with the database owner (db_owner) role. This role basically allows a user to do pretty much whatever they want to the database itself, as well as to all of the contents within it. Members of this role can drop tables, read data, add views, delete data, or even drop the entire database, to name just a few things. It is a definite security concern; be aware of it and grant it minimally.

Security admin (db_securityadmin) is another database role you should be cautious with. Members of this role have the ability to elevate themselves or others to the database owner role. Given that the database owner can do essentially anything to the database, you definitely want to verify who is listed as a security admin.
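For a single database, the check itself is just a join between sys.database_role_members and sys.database_principals, along these lines:

SELECT dp.name   AS principal_name,
       dp.type_desc,
       dpr.name  AS role_name
FROM sys.database_role_members drm
     INNER JOIN sys.database_principals dp  ON drm.member_principal_id = dp.principal_id
     INNER JOIN sys.database_principals dpr ON drm.role_principal_id   = dpr.principal_id
WHERE dpr.name IN ('db_owner', 'db_securityadmin')
     AND dp.name <> 'dbo';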

To run that check against every database on an instance at once, use the script below.

DISCLAIMER:  While I’ve been able to run it across multiple servers, over several hundred databases without an issue, your mileage will vary.  Please test accordingly.   NEVER run code that you find on the internet against your Production systems without testing first.  You have been warned!

/***************************************************************
-- Author: John Morehouse
-- Date: April 2015
-- T: @SQLRUS
-- E: john@jmorehouse.com
-- B: http://sqlrus.com

--THIS CODE AND INFORMATION ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE.

--IN OTHER WORDS: USE AT YOUR OWN RISK.

--AUTHOR ASSUMES ZERO LIABILITY OR RESPONSIBILITY.

--You may alter this code for your own purposes.
--You may republish altered code as long as you give due credit.
***************************************************************/
DECLARE @dbs AS TABLE ( name sysname, processed BIT)
DECLARE @x INT = 1
DECLARE @sql VARCHAR (2000)
DECLARE @dbName sysname -- sysname avoids truncating long database names

IF object_id ('tempdb..#results') IS NOT NULL
        BEGIN
               DROP TABLE #results
        END

CREATE TABLE #results ( dbName sysname
                        , principalName VARCHAR (100)
                        , principalType VARCHAR (100)
                        , RoleMembership varchar(100)
                        , defaultSchema VARCHAR (100)
                        , principalSID varbinary(85)
                       )

INSERT INTO @dbs ( name, processed )
        SELECT name, 0 FROM sys.databases
               WHERE database_id > 6
                      AND [state] = 0 --online

WHILE @x <= (SELECT COUNT( 1) FROM @dbs WHERE processed = 0 )
        BEGIN
               SELECT TOP 1 @dbName = name FROM @dbs WHERE processed = 0
               SET @sql =
              'INSERT #results (dbName,principalName, principalType, RoleMembership, defaultSchema, principalSID)
                     SELECT '''+ @dbName +''', dp.name, dp.type_desc, dpr.name, dp.default_schema_name, dp.sid
                     FROM [' + @dbName + '].sys.database_role_members drm
                           INNER JOIN [' + @dbName + '].sys.database_principals dp on drm.member_principal_id = dp.principal_id
                           INNER JOIN [' + @dbName + '].sys.database_principals dpr on drm.role_principal_id = dpr.principal_id
                     WHERE dpr.name in (''db_owner'', ''db_securityadmin'')
                           AND dp.name <> ''dbo'''
        BEGIN TRY
             EXEC (@sql )
        END TRY
        BEGIN CATCH
               SELECT ERROR_LINE () AS 'Error Line'
                      , ERROR_MESSAGE () AS 'Error Message'
                      , ERROR_NUMBER () AS 'Error Number'
                      , @dbName AS 'Database'
        END CATCH

        UPDATE @dbs
        SET processed = 1
        WHERE name = @dbName
END
GO
SELECT * FROM #results
GO

Using this script in conjunction with our Central Management Server, I was able to run it on all production servers. It quickly allowed me to see where I might have a security concern in each database on each server.

Once I determined which permissions needed to be altered, we could address them accordingly. Identifying these and making the appropriate changes helped make our production environment that much more secure.

Enjoy!


2015 Program Committee

[Image: speakerachievement]

Last year, I was very honored to be selected to deliver a pre-con with my friend Chris Shaw (B|T). I was both excited and nervous at the same time. Regrettably, there were some rough moments in the days that followed the selection announcements, when people found out that their abstracts weren't selected. Comments were made, tweets were tweeted, and blog posts were written. Regardless of the kerfuffle in the blogosphere, the experience was amazing and I'm glad I had the opportunity.

Be Part of The Process

This year I volunteered to be on the Program Committee for the 2015 PASS Summit. I thought that being part of the process would give me better insight into how things really work. It also gives me a chance to offer feedback and help improve the process.

I wasn't sure what to expect, so I told the committee to put me wherever they thought best. There are a number of areas committee members can volunteer for, such as reviewing abstracts, speakers, or slide decks, or volunteering at the event itself. It really didn't matter to me which team I was on, as long as I was able to help.

They ended up placing me on an abstract review committee, where I have been tasked, along with several others, with reviewing 200+ abstracts for the BI Platform: Architecture, Development & Administration track. They gave us a deadline of getting all of these reviewed by the end of April. I like the fact that, with multiple people on the team, each abstract is reviewed by several reviewers, which allows for a fair and decent review process.

As the review process was spinning up, there was a lot of shifting of people in and out of review teams. For fairness, if you submitted an abstract for the Summit and are on the Program Committee, you are not allowed to be on the review team for the track you submitted to, so many members, like me, were moved around. The Program Committee is very keen on keeping the process as clean as possible. I do think there is some room for improvement in how these shifts are communicated; since I got involved to help better the process, I sent that feedback up to the appropriate people.

Remember, We Are Family

Last year, as in years past, I saw a number of comments made in various media outlets about who was or wasn't selected to speak. In reading them, I noticed that many came off as potentially hurtful and damaging to those who were selected. Given how proudly we tout our SQL Family, I was taken aback. This year, I ask that our family please take a moment to think about how a message might be received before communicating it.

It has become very clear to me through this Program Committee process that being an MVP, MCM, MD, DDS, MCSE, MCSA, MCP, or XYZ, or having the name "John Doe", does not guarantee a speaking slot. The PASS Summit is a community event, and as such all community speakers should be treated identically.

This process also ensures that no one can "own" a particular subject. If someone else wishes to do a session on "ABC" and you are known as the subject matter expert on "ABC", tough: someone other than you can present on the same topic and/or concepts. While they can speak on the same subject, that does not give them the right to plagiarize content from you or anybody else.

The Program Committee is striving very hard to make the process as fair and balanced as possible.

You Didn’t Make the Cut

If you submitted for the 2015 PASS Summit and your abstract(s) were not accepted, here’s how I would handle this:

  1. Congratulate those that did make it.  They worked hard on their abstracts and it showed.  Take the higher road.
  2. Thank the Program Committee, the review teams, and PASS for doing the best they could. In my opinion, with regard to abstract reviews, the process is pretty fair and equitable.
  3. Ask for the comments from your abstracts.  Last year, the Program Committee would send you the comments from the abstract review process.  I am hoping that they will be doing this again for 2015.
  4. If you have feedback, offer it in a constructive manner to the Program Committee. The Program Committee cannot fix things it doesn't know about. I would suggest doing this privately at first, and then publicly if you think it's warranted. Starting a flame war in a blog post or on Twitter is NOT the way to handle it.
  5. As Gunny Highway says, "You improvise. You adapt. You overcome." Re-work your abstracts, get another set of eyes on them, and submit again for 2016. If it's a session you've never presented before, take it to your local user group or a SQL Saturday; those are great avenues for testing the waters with a new presentation. If you want someone to review it, feel free to email it to me at john at jmorehouse dot com. I'd be more than happy to review it.
  6. Let it go.  Life is way too short to dwell on this.   There will be other days and other conferences.  Keep at it.

Summary

I submitted three abstracts myself. I am going into this knowing I may or may not get selected. If I am, it will be grand and I'll do my best for the community. I know there are some really fantastic speakers out there who also submitted, and it will not be the end of the world if I am not chosen. Tomorrow will come and the sun will rise.

If and when a rejection does come to PASS (pun intended), I'll publicly offer up the comments from the review committee on my blog. Then I will work on improving, because whether or not I'm selected does NOT change my passion for teaching and giving back.

I, like you, trust the process to be fair and honest.  I put my faith into the reviewers to do the best that they can.

You should do the same.


T-SQL Tuesday #65 – Failing Over with PowerShell

[Image: TSQL2sDay150x150]

This month's T-SQL Tuesday is hosted by Mike Donnelly (B|T). The topic of choice is a fantastic one: it's simply to teach something new.

So let’s do this.

Our current Windows patching process is manual. On a given weekend every month, we log into the inactive node of the production cluster, install the Windows patches, and then fail over the cluster, making the newly patched server the active node.

With any type of manual process, I always look for ways to automate it and make life easier. In this case I not only wanted to automate it, but also take it off my plate. So I spoke with the server team, and they agreed to take over the task, but only if we automated deployment of the patches. They also requested that we first ensure the inactive node is the same server for every patching cycle. That's where PowerShell came in: I used cmdlets to perform the needed actions on the cluster.

I envisioned the script working like this:

  1. Prior to patching, the script would execute.
  2. The script would determine the owner of an existing cluster resource.  The owner represents which server the resource is residing on.
  3. If the resource owner isn’t correct, fail over the cluster automatically.  This would put the resource on the correct node of the cluster.
  4. The now correct node is subsequently patched.
  5. The script is called again after patching to move the active node onto the newly patched server.

Sounds straightforward, right? Let's hope so.

A quick Google search turned up a number of PowerShell cmdlets for managing a Windows cluster. You can find them here. After looking at the list, I figured that I needed:

Get-ClusterGroup

Microsoft tells us that this cmdlet is used to "Get information about one or more clustered services or applications (resource groups) in a failover cluster." Specifically, we can use it to find out which node currently owns a particular cluster resource group, such as SQL Server's.

Here is the basic syntax:
Get-ClusterGroup [-InputObject <psobject>] [[-Name] <string>] [-Cluster <string>] [<CommonParameters>]

The output of this command tells me the resource group name, the owner node, and the state of the group, i.e. whether the resource is online or offline.

Move-ClusterGroup

Microsoft tells us that this cmdlet is used to "Move a clustered service or application (a resource group) from one node to another in a failover cluster." This is exactly what I need: I can use this cmdlet to move the entire group to another node.

Here is the basic syntax:
Move-ClusterGroup [-InputObject <psobject>] [[-Name] <string>] [[-Node] <string>] [-Cluster <string>] [-Wait <int>] [<CommonParameters>]

Given that I'll have the cluster information from Get-ClusterGroup, I can call Move-ClusterGroup accordingly and tell it which node to move the group onto.
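For example, a one-off move of the SQL Server group to a specific node might look like this (the cluster and node names are made up):

Move-ClusterGroup -Cluster "Cluster1" -Name "SQL Server (MSSQLSERVER)" -Node "Server2"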

Development

Before getting started, it's worth noting that if your local machine doesn't have the Failover Clustering tools installed, you'll need to install them in order to access the FailoverClusters PowerShell module. You can find documentation on how to add the feature to your machine here. Your mileage may vary depending on which operating system you are using.

Now that my machine is set up correctly, I started to test the cmdlets I had found. First, I had to determine how to acquire the cluster information. For the purposes of this post, I will reference a cluster named "Cluster1"; the example screenshots are censored for security reasons. I am interested in the "SQL Server (MSSQLSERVER)" resource group, which is the group the SQL Server cluster resources belong to. Given those two pieces of information, this is the resulting command:

Get-ClusterGroup -Cluster "Cluster1" -Name "SQL Server (MSSQLSERVER)" | fl *

[Screenshot: get-clustergroup_1]

The "|" pipes the output of Get-ClusterGroup into the Format-List (fl) cmdlet. The asterisk specifies that all properties of the Get-ClusterGroup output should be displayed.

We can then set the output of this cmdlet to a variable like this:

$obj = get-clustergroup -cluster "Cluster1" -name "SQL Server (MSSQLSERVER)"

With the output of the cmdlet in a variable, we can now interrogate it for the name of the owner node. We use an if statement in PowerShell to do the checking: if the owner is not the node we want, we use the Move-ClusterGroup cmdlet to tell the cluster to move the group to another node.

Given that we only have two-node clusters, this fails the group over to the other node. If you have more nodes, you can specify which node to move to by including the -Node [nodename] parameter.

#************************************************************
# Purpose: Failover cluster to ensure inactive node is correct
# Author: MorehouseJ
# Created: 4/10/2015
#************************************************************
import-module failoverclusters

$obj = get-clustergroup -cluster Cluster1 -name "Cluster Group"

# flip: if the owner is not the expected node, fail the group over
If ($obj.ownerNode.Name -ne "server1"){
    get-clustergroup -cluster Cluster1 | move-clustergroup
}

Peer Review

Once I had a basic script constructed, I thought it best to have someone much more versed in PowerShell look things over, so I turned to my friend Mike Fal (B|T) and asked his advice.

He had some excellent pointers for me. Namely, if I were to use parameters, the script could be easily reused. Mike also pointed out that I was making two separate calls to the Get-ClusterGroup cmdlet, which is a waste of resources; I can call it once, set it to a variable (I was already doing this), and then reuse that variable where appropriate. Mike also suggested putting in some logging so that some level of tracking goes on.

Based on Mike’s excellent suggestions, this is what the final script looks like:

#************************************************************
# Purpose: Failover cluster to ensure inactive node is correct
# Author: MorehouseJ
# Created: 4/10/2015
#************************************************************
param([string] $cluster
    ,[string] $node
    ,[string] $resource)

$obj = get-clustergroup -cluster $cluster -name $resource
 
#flip 
If ($obj.ownerNode.Name -ne $node){
    Write-Warning "Incorrect node for '$resource', failing over to $node."
    $obj | move-clustergroup -Node $node
    }
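Saved to a file (the file name here is just what I might call it), the script can then be invoked like so:

.\Move-ClusterActiveNode.ps1 -cluster "Cluster1" -node "Server1" -resource "SQL Server (MSSQLSERVER)"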

Summary

Now we have a complete script that can be easily reused, will move the group between nodes as needed, and helps reduce the amount of manual effort in the patching process. This, to me, demonstrates the power of PowerShell (no pun intended). I can't wait to introduce more of these types of things to our team.


Detaching a Database – File Security Gotcha

I'll admit it: there are times when detaching a database is the best way to get something accomplished, but it is my least favorite way of doing anything. Using this method can leave you open to issues, such as security problems, if things go wrong. I think this is an important aspect that people miss when detaching a database.

When you detach a database, SQL Server essentially loses all knowledge of it. If SQL Server is no longer aware of the database, what SQL Server-based scripts or tools can you use against it? The answer: none. Zip. Nada. Zero. All of those handy DMV or DBCC scripts will not work.

If you find yourself needing to detach a database, one of the things you need to be aware of is how SQL Server adjusts the security of the physical database files.

While this might not seem like a large issue, trust me, it can be. When a database is detached, the file permissions are reset so that only the individual who detached the database retains rights to the files; in theory, only that individual can reattach it. SQL Server does this on purpose to protect the database files.

To show this, we will:

  1. Create a dummy database
  2. Look at the permissions on the physical files
  3. Detach the database
  4. Re-examine the permissions.

Let’s take a look!

CREATE DATABASE DetachDBDemo
ON PRIMARY
( NAME = N'LogFileDemo', FILENAME = N'C:\Data\DetachDBDemo.mdf' , SIZE = 4096KB , FILEGROWTH = 1024KB )
LOG ON
( NAME = N'LogFileDemo_log', FILENAME = N'C:\Data\DetachDBDemo_log.ldf' , SIZE = 1024KB , FILEGROWTH = 10%)
GO

Now that the database has been created, let's take a look at those permissions. Remember that we are talking about file-level permissions, so you'll need to check them through Windows Explorer.

[Screenshot: DetachDB_1]

In looking at the MDF file, we can easily see that there are a couple of accounts that have permissions to this file.

  • Owner Rights
  • Administrators
  • MSSQLSERVER

Now let's detach the database and see what happens. You can do this through the GUI or through code. If you want to use the GUI, simply right-click the database, select Tasks, and then Detach Database. In the resulting dialog window, select Drop Connections and click OK.

[Screenshot: detachdb_2]

[Screenshot: detachdb_3]

I prefer to do most things using T-SQL scripts, so here is the code to accomplish the same thing.

USE [master]
GO
ALTER DATABASE [DetachDBDemo] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
USE [master]
GO
EXEC master.dbo.sp_detach_db @dbname = N'DetachDBDemo'
GO

Now that the database is detached, what happened to the permissions?

[Screenshot: detachdb_4]

Now the only account that has access to the file is my local user account. What happens if we attempt to reattach the database with another account? Let's create another Windows account, called DetachDBUserAD. We will make it a local administrator as well as add it to the sysadmin server role in SQL Server.

[Screenshot: detachdb_5]

Here you can see I've created the new AD account and placed it within the Administrators group, which should give it unrestricted access to the server.

Let's try to attach the database using this new AD account and see what the results are.

I've logged into the server as the DetachDBUserAD Windows account. Remember that this account is an administrator for both Windows and SQL Server.

CREATE DATABASE DetachDBDemo
	ON PRIMARY 
		(FILENAME = 'C:\Data\DetachDBDemo.mdf'),
		(FILENAME = 'C:\Data\DetachDBDemo_log.ldf')
	FOR ATTACH
GO

[Screenshot: DetachDB_7]

Even though the Windows account is both a local administrator and a SQL Server administrator, it still does not have the appropriate file permissions to attach the database.

As you can see, this issue can be problematic and a nuisance. There are three ways to get around it.

  1. Have the user who detached the database reattach the files.
  2. Have an administrator adjust the permissions on the files so that someone else can attach them (see the sketch after this list).
  3. Do not detach in the first place.
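For option 2, the grant could be scripted with icacls, roughly like this; the paths and the account name are examples only:

icacls "C:\Data\DetachDBDemo.mdf" /grant "YOURDOMAIN\DetachDBUserAD:F"
icacls "C:\Data\DetachDBDemo_log.ldf" /grant "YOURDOMAIN\DetachDBUserAD:F"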

It's also worth noting that if you detach the database using a SQL login (not a Windows account), the permissions on the physical files are not altered. However, since using SQL logins is not a security best practice, I would not recommend it.

If you find yourself in a situation where you have to detach a database, remember what happens with the file permissions, and please make sure that your backup/recovery strategy has been tested.

Enjoy!


Procedure Creation – Correct Syntax, Wrong Database

Recently, on a Friday evening, we released an update to one of our core applications. As expected, it went smoothly, and no issues were noted. On the following Monday morning, however, the story took a different turn when end users started reporting issues. One of them was related to a new stored procedure.

In looking at the procedure, we found references to a database that existed in the QA environment but not in production. It was assumed that when the procedure was created, it would throw an error because the referenced database didn't exist.

[Image: query_optimizer]

Unfortunately, it does not work like that.

When a procedure is created, only the syntax of the code is parsed for validity; object references are not checked until the procedure runs (this behavior is known as deferred name resolution). As long as the syntax is correct, the procedure will be created without issue, which means invalid references can slip by.

The real issue manifests when the procedure is actually called. The syntax is checked again by the parser and then sent to the algebrizer (binding). Because the referenced object doesn't physically exist, the algebrizer cannot bind the reference, and an error is thrown.

This is also true with altering a procedure that already exists.

We can prove this fairly easily just by creating and/or modifying a procedure with a reference to an object that does not exist.

-- We all have a Scratch Database right?
USE Scratch
GO
-- Create a procedure with invalid reference
CREATE PROCEDURE dbo.ProcCreateDemo
AS
BEGIN
   SET NOCOUNT ON;
   SELECT * FROM DBThatDoesntExist.sys.objects
END
GO
-- Does it exist?
SELECT name, object_id, type, type_desc, create_date from sys.procedures
SELECT OBJECT_DEFINITION(object_id) from sys.procedures

[Screenshot: ProcedureCreation_1]

We can see that the procedure, even with the invalid reference, was created without issue. If we look at the object definition of the procedure, we can clearly see the invalid reference.

When the procedure is executed, the error manifests.

[Screenshot: ProcedureCreation_2]

What if we alter an existing procedure?

In this example, I’ll adjust the procedure to take out the invalid reference.

[Screenshot: ProcedureCreation_3]

Then I'll alter it again to put the invalid reference back in.

[Screenshot: ProcedureCreation_4]

Note that the procedure was altered successfully both times. Even though the second ALTER contains an invalid reference, the syntax of the command is correct, so the procedure is altered successfully.

We are currently making adjustments to our release process to ensure that issues like this are caught in the lower environments rather than in production.
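One possible safety net, sketched below, is to run a post-deployment query against sys.sql_expression_dependencies in each deployed database and flag any cross-database reference whose target database does not exist on the server:

-- Run in the database that was just deployed
SELECT OBJECT_SCHEMA_NAME(d.referencing_id) AS schema_name,
       OBJECT_NAME(d.referencing_id)        AS object_name,
       d.referenced_database_name
FROM sys.sql_expression_dependencies d
WHERE d.referenced_database_name IS NOT NULL
      AND DB_ID(d.referenced_database_name) IS NULL;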

References

If you want to learn more about the query optimizer, I would suggest reading up on how the optimizer works.

I would also suggest SQL Server guru Paul White, who has a deep-dive blog series on the query optimizer. You can find the first post in his series here.

Enjoy!
