I’m Speaking! Quad Cities PASS User Group

I always have a special place in my heart for the SQL Server user groups of the Midwest.  I ran the Omaha chapter for several years, so I have a special fondness for the groups in and around Nebraska that provide training and networking for their communities.  Don't get me wrong, if a user group anywhere wants me to speak, I'm more than happy to do so if my schedule allows.  But if a group from Nebraska or Iowa asks, I'll make it happen.

With that said, when the Quad Cities PASS User Group asked me to speak, I absolutely said yes.  So tonight, Thursday August 9th, 2017, at 6 PM Central (7 PM Eastern), I'll be doing a remote presentation for the group.  It's one of my favorite sessions to present and I'm really looking forward to it!

Only You Can Prevent Database Fires!

Do you find yourself constantly putting out "fires"? Can't take any more heat in the data center? We as DBAs can easily become burned out dealing with these daily blazes. Wouldn't it be grand to learn how to prevent these fires from scorching us in the first place? In this session, we will jump into things you can start implementing as "fire prevention" to stop the "fire fighting".  We will talk about several things that you can take back to your shop and easily put into action, ranging from code tuning, backup maintenance, and monitoring to performance tuning!

If you are in the area, please make sure to RSVP so that the organizers have an accurate head count for food!!

See you tonight virtually!

Posted in Presenting | Tagged , | Leave a comment

Start Your Engines with Startup Procedures

After reviewing some recent audit results for a couple of our SQL Servers, I noted that the audit checked for startup procedures.  This caught my attention as I was not fully aware of what those were.  Naturally, I decided to investigate further.

Startup procedures automatically execute whenever SQL Server is started.  Where would you use this?  One example is if you had an application that required the use of a global temporary table, you could use a startup procedure to create it.  This would allow the table to be immediately accessible to any application that requires it.

Another possibility is that you could “warm up” the cache so that data is already cached when an application needs it.  This would help prevent that initial I/O hit the first time the query is called, thus potentially improving initial performance.
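For illustration, here is a minimal sketch of what a startup procedure for the global temporary table scenario might look like.  The procedure and table names below are hypothetical; the key points are that it lives in master, takes no parameters, and returns no result sets.

USE master;
GO

CREATE PROCEDURE dbo.CreateGlobalStagingTable  -- hypothetical name
AS
BEGIN
    SET NOCOUNT ON;

    -- Create the global temporary table that the application expects to find
    IF OBJECT_ID('tempdb..##AppStaging') IS NULL
    BEGIN
        CREATE TABLE ##AppStaging
            (
              ID INT IDENTITY(1, 1) PRIMARY KEY ,
              Payload VARCHAR(100) NOT NULL ,
              LoadedDate DATETIME2 NOT NULL DEFAULT SYSDATETIME()
            );
    END;
END;
GO

-- Marking it for automatic execution is covered in the How To section below.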

Gotchas

Just like with anything within SQL Server, there are some things that you need to be aware of:

  • Each startup procedure consumes a worker thread while executing
  • The stored procedure must live in the master database
  • Only system administrators can mark a procedure as a startup procedure (this is a good thing, actually), and the procedure must also be owned by sa
  • It cannot have any input or output parameters
  • The procedure executes with the same permissions as the sysadmin role
  • It should not return any result sets; it is executed by the instance itself, so there is nowhere for the results to go

How To

So how do you configure this wizardry, you ask?  There are two methods to enable this.

  1. sp_configure
  2. sp_procoption

Let’s walk through both of them.

Using sp_configure is simple.  If we look at sys.configurations, we can see whether or not it is already enabled.  It is disabled by default.

From the above, we can see that it is disabled for this particular instance.  This is an advanced configuration setting so we have to ensure that ‘show advanced options‘ is enabled.  Once that is completed, we can then enable the scan for startup procedures.
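Here is roughly what those steps look like in T-SQL (standard sp_configure calls; the comments are mine):

-- Check the current state of the setting (disabled by default)
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'scan for startup procs';

-- It is an advanced option, so expose it first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Enable the scan for startup procedures
EXEC sp_configure 'scan for startup procs', 1;
RECONFIGURE;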

Hold the phone.  Even after setting it to be enabled, it STILL shows disabled.

This is another gotcha.  You have to restart the SQL Server services in order for the configuration change to take effect.  So, plan accordingly if you have a need to use this feature.

As a side note, if you start SQL Server with the -f flag (minimal configuration), startup procedures will not be executed.  You can also use trace flag 4022 to skip them upon startup.

The other option is to use the system procedure sp_procoption.  Its sole purpose is to set or clear a procedure for automatic execution.  Just like sp_configure, it is fairly simple to use.

To enable a procedure:

exec sp_procoption @ProcName = 'dbo.LoadAllOfTheThingsIntoCache', @OptionName = 'startup', @OptionValue = 'on';

To disable a procedure:

exec sp_procoption @ProcName = 'dbo.LoadAllOfTheThingsIntoCache', @OptionName = 'startup', @OptionValue = 'off';

As a side note, if 'scan for startup procs' is not enabled when you use sp_procoption, it will be enabled for you automatically.

Just remember that any procedure that is configured to execute at startup has to live in the master database.

Real World 

Have I ever used startup procedures?  No, I have not.  I believe it's an edge case, but it is always good to know what options are available to you.  I do think that using it to warm up the cache for highly transactional systems might be useful.

With that being said, here’s the obligatory warning:

This was informational only.  Do NOT go wild and create 100+ startup procedures on your SQL Servers.  Doing so will cause problems.  As we have learned from Indiana Jones, choose wisely.  

If anybody is using startup procedures, I would love to hear about your experience in the comments!

Enjoy!


Posted in Administration | Tagged | 7 Comments

I’m Speaking! SQL Saturday #653 – Columbus

I'll admit it, every time I think of Ohio I have flashbacks to The Drew Carey Show.  The show was set in Cleveland, Ohio.  Don't ask me why.

I'm excited that this weekend I'll be traveling to Columbus, Ohio (not Cleveland) for their annual SQL Saturday. This will be my first time attending their event.  It is only three or so hours away, so it's an easy drive for me!

They have a great schedule of sessions put together.  Make sure to check out the schedule and get registered for the event!

Here is the session that I'll be presenting in room Roush 'Fisher 1' at 9:45 AM, which is one of my favorites:

SQL Server Databaseology: A Deep Dive Into Database Internals

Have you ever taken apart a toaster or an alarm clock just to see how it worked? Ever wondered how that database actually functions at the record level, behind the scenes? SQL Server Databaseology is the study of SQL Server databases and their structures down to the very core of the records themselves. In this session, we will explore some of the deep inner workings of a SQL Server database at the record and page level.  You will walk away with a better understanding of how SQL Server stores data and that knowledge will allow you to build better, faster databases.

Honestly, is it ever too early to learn about internals?  I think not! Bring your coffee and let's dive in!

How Can You Beat FREE Training?

Remember that these events are a FREE day of awesome SQL Server training and professional development.  The only cost to you is if you decide to purchase lunch! Check out the schedule and register.  In today’s economy, how can you afford to pass up FREE training?!?!  The answer is that you can’t.  Go register, get out there and learn.  I triple dog dare you.

If you make it out to see my session, make sure to come up and introduce yourself as I’d love to meet you!

See you in Ohio!!

Posted in Presenting, SQL Saturday | Tagged , , | Leave a comment

I’m Speaking! Evansville Technology Group

SQL Server user groups are the backbone of the SQL Server community.  I currently help run the Louisville SQL Server user group, and prior to that I ran the Omaha SQL Server user group for many years.  I know how difficult it can be to get in-person speakers.  Needless to say, when I was asked to speak at a group within driving distance, I jumped at the opportunity.

So on Thursday July 20th, 2017, I'll be making the drive over to Evansville, Indiana, where I'll be presenting for the Evansville Technology Group.  This group is not just SQL Server, but rather a combination of SQL Server, PowerShell, and .NET.  I'm really looking forward to presenting for them!  Lunch will be provided!

Database Continuous Delivery & Your Salvation

Continuous Integration & Delivery isn't a new concept. Application Developers have been doing it for quite a while and now it's time for Database Professionals to catch up. Whether you are a database administrator, database developer, or even an application developer, database continuous delivery can help you find your salvation. I'll show you how to apply these concepts to areas of security, collaboration, testing, support and deployments. We will look at what continuous delivery means, demonstrate some tools that can help you easily get it into place, and dive into why it's important.

If you are in the Evansville, Indiana area over the lunch hour on Thursday July 20th, please stop by!!  You can register for the event here.  If you do decide to go, please make sure to RSVP so that the organizers have an accurate head count for food.

See you there!

Posted in Presenting | Tagged , | Leave a comment

I’m Hosting! Idera July #sqlchat

Have you heard the following terms and/or phrases?

  • Database Continuous Delivery
  • Database Lifecycle Management (DLM)
  • DevOps
  • [Insert other buzzword bingo entry here]

You probably have.  Even if you haven’t, let’s chat about it!!

Come join me on Wednesday July 19th at 11:00 AM Central for Idera's #sqlchat on Twitter! We will be discussing database continuous delivery and the questions that DBAs might have about it! If you aren't familiar with #sqlchat, every month Idera (B|T) hosts an online Twitter chat session.  From Idera:

This #SQLChat will allow members from the SQL community to discuss SQL Server’s biggest topics with IDERA product experts, and have a chance to win a giveaway prize. This month’s giveaway prize is a Google Home!

I’ll be your host for July!  Idera is giving away a Google Home!

Database Continuous Delivery

Database continuous delivery can be a scary topic for database administrators.  It requires the DBA to start trusting tools and processes, and that's like pulling teeth.  However daunting, it is becoming a hot buzz term in conjunction with "DevOps".

See you on Wednesday July 19th at 11:00AM!

Posted in Community | Tagged | Leave a comment

T-SQL Tuesday #91 – Start Talking

T-SQL Tuesday is a monthly blog gathering for the SQL Server/data professional community.  It is the brainchild of Adam Machanic (B|T) and is not limited to just things around the SQL Server database engine. Each month a blogger hosts the event, and anybody who wants to contribute can write a post about that month's topic. You can find a list of all topics at http://tsqltuesday.com/.

This month’s T-SQL Tuesday topic is about DevOps.  It is being hosted by Grant Fritchey (B|T).

Grant asks some specific questions in this month’s posting:

  1. How do we approach DevOps as developers, DBAs, report writers, analysts and database developers?
  2. How do we deal with data persistence, process, source control and all the rest of the tools and mechanisms, and most importantly, culture, that would enable us to get better, higher functioning teams put together?

We’ll discuss each one, but first a recap.

What exactly is DevOps?  Wikipedia tells us: "DevOps (a clipped compound of 'software DEVelopment' and 'information technology OPerationS') is a term used to refer to a set of practices that emphasize the collaboration and communication of both software developers and information technology (IT) professionals while automating the process of software delivery and infrastructure changes."

Great.  Now we know what it is.  It is intended to be a set of practices that stresses collaboration and communication among basically everybody (and I mean everybody) who might be involved in application development and delivery.  Ironically, this set of practices has morphed into an actual job title over the years.  My employer just recently hired a "DevOps Engineer".

So, let’s go back to Grant’s questions.

How do we approach DevOps as developers, DBAs, report writers, analysts and database developers?

Honestly, I think it's time for us data professionals to get out of the data center and get on board with the DevOps movement.  Let us take off the cloak of invisibility and get our hands dirty.  If we look at the overall "DevOps" role, we should want to have DevOps in our world.  While it may be a hard journey, in the long run it will make our jobs easier.  How would you like to have push-button deployments or fewer manual code reviews?  I know that I would.

How do you do this?  Easy.  Talk.  Communicate.  Collaborate.  Break it down now.  Go talk to your respective teams (app dev, infrastructure, DBAs, management, etc.) and get them talking.  Everybody will have their own opinion, and that is okay.  The important part is to get everybody talking.

If you need a starting point, ask this simple question: "What if we could implement a process to deliver application changes to the wild that has little to no impact on our customers?"  Think about that.  Do you work in an environment where deployments cause outages for your customers? I bet for the majority of us, that is true.

If your customers have a better end user experience, meaning little to no down time, isn’t that the name of the game?

DevOps helps to answer that question.

How do we deal with data persistence, process, source control and all the rest of the tools and mechanisms, and most importantly, culture, that would enable us to get better, higher functioning teams put together?

In my opinion, the answer is simple.  Trust.  As data professionals, we often question the process because once data is involved (the data is the business), we get very distrustful of changes.  That is probably rightfully so.  How many times has a deployment gone sideways for you?  Ever have to roll back a deployment because it barfed?  I know that I have been there, and usually it's not much fun.  Hopefully every DBA has a recovery strategy in place to handle such events.

In order to reach a true "DevOps" method of delivering application changes into the wild, we have to get our hands off of it. And I mean OFF.  This means tools must be in place to facilitate it. It also means that we have to TRUST the tools to do their job and do it well.  Tools such as Octopus Deploy, TeamCity, Jenkins, TFS, Red Gate, etc.  All of these third-party vendors pour money, time, and effort into making their products as rock solid as possible.

Trusting the tools is difficult for most database administrators.  We want to see the guts of the deployment, making sure dangerous things do not happen to our precious databases and their contents.

Experience

I was a part of a database continuous delivery project at Farm Credit Services of America in conjunction with Alex Yates (B|T) and Bob Walker (B|T), who have both blogged about the experience.  I have also been pushing a similar project at Farm Credit Mid-America.

I'll admit that when the project started up at FCSA, I was that distrusting, skeptical DBA who thought this would never work.  Yes, it was hard.  It was cumbersome and clunky in the beginning.  Then it got easier and the pieces fell into place.  Iterations to deliver changes to the wild got easier.  Quickly, I began to trust the process and how it was going to work.

I highly recommend Bob’s series of blog posts about the process.  You can find them here. Alex also has several excellent posts around the process here.

Summary

In short, I'm a big supporter of the DevOps movement, especially concerning database lifecycle management (DLM) and continuous delivery.  While difficult, there are organizations out there that can help you get there.  I assure you that the end results will be worth every penny.

Time to shed the cloaks of invisibility.  Start talking.

Posted in General Conversations | Tagged | 3 Comments

I’m Speaking! Capital Area SSUG

I like it when I get referred to speak at various functions. Whether it's a user group, a conference, or just something informal, I really enjoy speaking (yes, I'm crazy) and giving back to the community.

This time I'll be speaking for the Capital Area SQL Server User Group out of Albany, New York.  This will be a remote session, and I'll be speaking on Continuous Delivery & Your Salvation.

Continuous Delivery & Your Salvation

Continuous Integration & Delivery isn't a new concept. Application Developers have been doing it for quite a while and now it's time for Database Professionals to catch up. Whether you are a database administrator, database developer, or even an application developer, database continuous delivery can help you find your salvation. I'll show you how to apply these concepts to areas of security, collaboration, testing, support and deployments. We will look at what continuous delivery means, demonstrate some tools that can help you easily get it into place, and dive into why it's important.

If you are in the Albany, New York area tonight from 5:30PM to 7:30PM(ish) please stop by!!  You can register for the event here.   If you do decide to go, please make sure to RSVP so that the organizers have an accurate head count for food.

Speaking of food, the only sad thing about this is that the group is having BBQ this evening. Why is that sad? Because I love good BBQ and I'm missing the food!!

See you virtually there!

Posted in Presenting | Tagged | Leave a comment

Reverse It

Recently I had a conversation with a colleague who wasn't aware of the REVERSE function in SQL Server.  I had sent them a code snippet, and they inquired as to what the purpose of the function was.  Essentially, this function reverses any string value.

For example, the phrase:

The brown dog jumped over the lazy fox

reversed looks like this

xof yzal eht revo depmuj god nworb ehT

Or in Management Studio

Awesome.  Where can I use this?  I use this function when I need to get file names for data or log files of a database.  These are the steps to do this:

  1. Reverse the full path of the file name
  2. Use the LEFT function to return all of the characters to the left of the first instance of "\" in the reversed string
  3. Reverse that result back to its normal orientation

SELECT  name AS 'Database Name' ,
        REVERSE(physical_name) 'Path Reversed' ,
        REVERSE(LEFT(REVERSE(physical_name),
                     CHARINDEX('\', REVERSE(physical_name)) - 1)) AS 'File Name'
FROM    sys.master_files;

In Management Studio,

Voilà!!  We have the file names.  This is also helpful whenever you need to find the tail end of a string that has some type of delimiter.
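As a quick sketch of that generic use case, here is the same pattern against a single delimited string (the variable and its value are made up purely for illustration):

DECLARE @path VARCHAR(260) = 'C:\SQLData\MyDatabase_Data.mdf';  -- hypothetical value

-- Grab everything after the last delimiter
SELECT  REVERSE(LEFT(REVERSE(@path), CHARINDEX('\', REVERSE(@path)) - 1)) AS 'Tail End';
-- Returns: MyDatabase_Data.mdf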

Enjoy!

Posted in Administration | Tagged | Leave a comment

Ola Hallengren: Output File

I'm a huge fan of Ola Hallengren's free utility scripts. If you aren't familiar with them, they are a great tool to manage backups, index maintenance, and the overall health of your SQL Servers. They are utilized around the globe and they are completely free.

Although his script is very thorough, I still like to tweak things.  One of the things that I tend to change is the way that the log file for each respective job is named.  By default, the file name is configured in the following format:

JobName_JobID_StepID_YYYYMMDD_HHMMSS.txt

which results in file names that look like this:

If you have never looked, these files contain information about any failures or issues the job(s) might have encountered.  The difficult part is that when looking at the list, how do you know which file belongs to which job?  In the example above, one is a FULL backup, one is a DIFF backup, and the third is a TRAN LOG backup, but it is difficult to determine which is which.  However, you can see that each JobID is a unique identifier, which correlates to the job ID of the specific job.

There are two ways to adjust this.

  1.  Update the maintenance script itself to adjust the file names.  In the script, configuration of job logging starts at approximately line 4127; your mileage may vary depending on which version you have.  You can search for the phrase "Log completing information" in the file to find where it starts.  Once you find its location, you can adjust the file name however you want.
  2. If you don't want to adjust the script, you can use the script below to manually adjust the output file name.  This script will produce the appropriate parameters for sp_update_jobstep.  You can then copy/paste the commands into a query window and execute them in a controlled manner.
/***************************************************************
-- Author: John Morehouse
-- Date: May 2017
-- http://landingzone.jmorehouse.com/sqlrus

--THIS CODE AND INFORMATION ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE.

--IN OTHER WORDS: USE AT YOUR OWN RISK.

--AUTHOR ASSUMES ZERO LIABILITY OR RESPONSIBILITY.

--You may alter this code for your own purposes.
--You may republish altered code as long as you give due credit.
***************************************************************/

-- NOTE: You will need to turn OFF SQLCMD mode if it is enabled for this to work

;WITH myCTE AS (
    SELECT 'IndexOptimize - USER_DATABASES' AS 'Name', 'UserDBs' AS 'FilePart' UNION
    SELECT 'DatabaseBackup - USER_DATABASES - LOG', 'UserDBs_LOG' UNION
    SELECT 'DatabaseBackup - USER_DATABASES - FULL', 'UserDBs_FULL' UNION
    SELECT 'DatabaseBackup - USER_DATABASES - DIFF', 'UserDBs_DIFF' UNION
    SELECT 'DatabaseIntegrityCheck - SYSTEM_DATABASES', 'SystemDBs' UNION
    SELECT 'DatabaseBackup - SYSTEM_DATABASES - FULL', 'SystemDBs_FULL' UNION
    SELECT 'DatabaseIntegrityCheck - USER_DATABASES', 'UserDBs' 
)
SELECT output_file_name, REPLACE(output_file_name,'$(ESCAPE_SQUOTE(JOBID))', m.FilePart) AS 'Updated_Output_File_Name'
, 'EXEC sp_update_jobstep @job_id =''' + CAST(sjs.job_id AS VARCHAR(36)) + ''', @step_id= ' + CAST(sjs.step_id AS VARCHAR(10)) + ', @output_file_name = ''' + REPLACE(output_file_name,'$(ESCAPE_SQUOTE(JOBID))', m.FilePart) + ''''
  ,* FROM msdb.dbo.sysjobsteps sjs
	INNER JOIN myCTE m ON sjs.step_name = m.name 

Once adjusted, the log files will be much easier to read, as shown below.  I can now quickly differentiate the log files not only between databases but also between types.

Enjoy!

Posted in Administration | Tagged | 2 Comments

Experienced DBA: Rookie Mistake

M*A*S*H 4077.  Mobile Army Surgical Hospital.  One of my all-time favorite TV shows, and one that I still watch to this day.  One episode that has always stuck with me is the one where Hawkeye has to deal with the death of a friend during surgery. Afterwards, Colonel Blake imparts some advice to Hawkeye: there are certain rules about a war, and rule number one is that young men die; rule number two is that doctors can't change rule number one.

While not life threatening, I was recently reminded that:

  1.  I’m human and I make mistakes at times.
  2.  Even the most experienced DBA cannot change rule #1.

The other day I had to update some records in Production.  I'm a firm believer in using explicit transactions and double-checking things before committing a transaction.  This helps ensure things go as expected.  It also gives me a way to roll back the changes if they don't.  It happens.

However, this means that I have to COMMIT said explicit transaction.  And not go to lunch without doing so.

Can you see my mistake?  I bet you can.

I changed a single record in a table, *thought* that I had committed it, and went about my day.  I even left the office for an appointment over lunch.  While I was at my appointment, I got a call from a teammate asking if I had been doing anything with this particular database around 11:17 that morning.

Why, yes, yes I was as a matter of fact.

As soon as I saw his number on my phone, I realized my mistake.  An open transaction was left sitting there.  I quickly estimated in my head how much blocking that was generating.  Given this particular database, it wasn’t going to be pretty.

I told my teammate to kill my transaction, which would roll back any changes.  I could fix the data again once I got back to the office.  The transaction was killed off and the blocking disappeared.  Problem solved.

I made a mistake.  A rookie mistake at that.

Resolution

I'm a fan of Red Gate tools.  I use SQL Prompt on a daily basis, so I created a new snippet to help me remember to handle any transaction and/or check for open transactions.

Now I can use the snippet to automatically insert the syntax for the explicit transaction as well as a check (or reminder) to see what the transaction count is.
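The exact snippet will vary, but a minimal sketch of that kind of template looks something like this (the commented-out UPDATE is just a placeholder):

BEGIN TRANSACTION;

    -- Placeholder for the actual data change
    -- UPDATE dbo.SomeTable SET SomeColumn = 'NewValue' WHERE ID = 42;

-- Verify the results, then run ONE of the following by hand:
-- COMMIT TRANSACTION;
-- ROLLBACK TRANSACTION;

-- Reminder check: anything greater than 0 means a transaction is still open
SELECT @@TRANCOUNT AS 'Open Transaction Count';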

Reminder: always, always check for open transactions before leaving for lunch!!

Learn from my rookie mistake.

Enjoy!

Posted in Administration | Tagged | 7 Comments