Knowing the Options

Have you ever executed a query in SQL Server Management Studio, looked at the execution plan, and noticed that it was a different plan than what was generated on the server?

A potential reason for this could be different option settings.  The options represent the SET values of the current session.  SET options can affect how a query is executed, thus producing a different execution plan.  You can find these options in two places within SSMS: under Tools -> Options -> Query Execution -> SQL Server -> Advanced, as well as Tools -> Options -> Query Execution -> SQL Server -> ANSI.

@@Options

Using the interface to check what is set can get tiresome.  Instead, you can use the system function @@OPTIONS.  It returns a single integer in which each of the 15 options shown above is represented by a bit indicating whether or not that option is enabled.
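Querying it is straightforward:

```sql
SELECT @@OPTIONS AS CurrentOptions;
```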

In binary, it might look like this: 001010101111000.  Each bit corresponds to an option, in order, from the table below:

Options
Value Configuration
1 DISABLE_DEF_CNST_CHK
2 IMPLICIT_TRANSACTIONS
4 CURSOR_CLOSE_ON_COMMIT
8 ANSI_WARNINGS
16 ANSI_PADDING
32 ANSI_NULLS
64 ARITHABORT
128 ARITHIGNORE
256 QUOTED_IDENTIFIER
512 NOCOUNT
1024 ANSI_NULL_DFLT_ON
2048 ANSI_NULL_DFLT_OFF
4096 CONCAT_NULL_YIELDS_NULL
8192 NUMERIC_ROUNDABORT
16384 XACT_ABORT

For example, the rightmost three bits are all 0. These correspond to:

  • DISABLE_DEF_CNST_CHK
  • IMPLICIT_TRANSACTIONS
  • CURSOR_CLOSE_ON_COMMIT

So what does @@OPTIONS really return?

Running SELECT @@OPTIONS on my machine returns a value of 5496.  Useful, right? Ok, not really.  It would be nice to know exactly which options are set and which aren’t without having to go through the UI.

@@OPTIONS returns an integer that is the sum of the bit values of every enabled option.  To determine whether a specific option is enabled, you perform a bitwise AND between @@OPTIONS and that option’s value.

Let’s assume for a moment that the only two options enabled on my machine are ANSI_WARNINGS and ANSI_PADDING.  The values for these two options are 8 and 16, respectively.  @@OPTIONS would then return the sum of the two: 24.
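Applying the same logic to the 5496 from my machine, a bitwise AND against each option’s value reveals whether that option is enabled:

```sql
-- 5496 = 4096 + 1024 + 256 + 64 + 32 + 16 + 8
SELECT 5496 & 8    AS [ANSI_WARNINGS]             -- 8:    enabled
     , 5496 & 16   AS [ANSI_PADDING]              -- 16:   enabled
     , 5496 & 512  AS [NOCOUNT]                   -- 0:    disabled
     , 5496 & 4096 AS [CONCAT_NULL_YIELDS_NULL];  -- 4096: enabled
```

Any non-zero result means the corresponding bit is set, so the option is enabled for the session.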

You have Options

Thankfully, there is an easier way.  The script below will do the work for you and display which options are enabled.

/***************************************************************
  Author: John Morehouse
  Summary: This script displays which SET options are enabled for the current session.
 
  You may alter this code for your own purposes. You may republish altered code as long as you give due credit. 
 
  THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE.
***************************************************************/
SELECT 'DISABLE_DEF_CNST_CHK' AS 'Option', CASE @@options & 1 WHEN 0 THEN 0 ELSE 1 END AS 'Enabled/Disabled' UNION
SELECT 'IMPLICIT_TRANSACTIONS' AS 'Option', CASE @@options & 2 WHEN 0 THEN 0 ELSE 1 END AS 'Enabled/Disabled' UNION
SELECT 'CURSOR_CLOSE_ON_COMMIT' AS 'Option', CASE @@options & 4 WHEN 0 THEN 0 ELSE 1 END AS 'Enabled/Disabled' UNION
SELECT 'ANSI_WARNINGS' AS 'Option', CASE @@options & 8 WHEN 0 THEN 0 ELSE 1 END AS 'Enabled/Disabled' UNION
SELECT 'ANSI_PADDING' AS 'Option', CASE @@options & 16 WHEN 0 THEN 0 ELSE 1 END AS 'Enabled/Disabled' UNION
SELECT 'ANSI_NULLS' AS 'Option', CASE @@options & 32 WHEN 0 THEN 0 ELSE 1 END AS 'Enabled/Disabled' UNION
SELECT 'ARITHABORT' AS 'Option', CASE @@options & 64 WHEN 0 THEN 0 ELSE 1 END AS 'Enabled/Disabled' UNION
SELECT 'ARITHIGNORE' AS 'Option', CASE @@options & 128 WHEN 0 THEN 0 ELSE 1 END AS 'Enabled/Disabled' UNION
SELECT 'QUOTED_IDENTIFIER' AS 'Option', CASE @@options & 256 WHEN 0 THEN 0 ELSE 1 END AS 'Enabled/Disabled' UNION
SELECT 'NOCOUNT' AS 'Option', CASE @@options & 512 WHEN 0 THEN 0 ELSE 1 END AS 'Enabled/Disabled' UNION
SELECT 'ANSI_NULL_DFLT_ON' AS 'Option', CASE @@options & 1024 WHEN 0 THEN 0 ELSE 1 END AS 'Enabled/Disabled' UNION
SELECT 'ANSI_NULL_DFLT_OFF' AS 'Option', CASE @@options & 2048 WHEN 0 THEN 0 ELSE 1 END AS 'Enabled/Disabled' UNION
SELECT 'CONCAT_NULL_YIELDS_NULL' AS 'Option', CASE @@options & 4096 WHEN 0 THEN 0 ELSE 1 END AS 'Enabled/Disabled' UNION
SELECT 'NUMERIC_ROUNDABORT' AS 'Option', CASE @@options & 8192 WHEN 0 THEN 0 ELSE 1 END AS 'Enabled/Disabled' UNION
SELECT 'XACT_ABORT' AS 'Option', CASE @@options & 16384 WHEN 0 THEN 0 ELSE 1 END AS 'Enabled/Disabled'

The output of the script will show you each option and whether it is enabled (1) or disabled (0).
Next time you are investigating an execution plan, remember to check what your options are.

Enjoy!

Posted in Administration | Tagged | 3 Comments

T-SQL Tuesday #90 – The Elephant’s Name is Database

T-SQL Tuesday is a monthly blog gathering for the SQL Server/Data Professional community.  It is the brainchild of Adam Machanic (B|T) and is not limited to just things around the SQL Server database engine. Each month a blogger hosts the event and anybody who wants to contribute can write a post about that month’s topic. You can find a list of all topics at http://tsqltuesday.com/.

This month’s T-SQL Tuesday topic is about continuous database deployments and Database Lifecycle Management (DLM).  It is being hosted by James Anderson (B|T).

The Elephant

In my experience, the database is the final thing to be thought of when it comes to continuous deployment. Nobody wants to tackle it and it can be hard to get right.

It is the elephant in the room.

I’m sure that we have all been a part of a deployment where the database wasn’t deployed successfully.  If it was a large deployment, meaning lots of changes (both schema and data), fixing things on the fly probably sucked (and it did).

Thankfully, the available tools have come a long way regarding databases and how they are deployed.  These tools, such as Red Gate, Octopus Deploy, and TeamCity (just to name a few), have made amazing strides in helping organizations implement database lifecycle management in their environments.

I’m a Fan

While realizing that continuous database deployments can be difficult, I’m a huge fan of them.  I’ve been pushing my current employer to get them implemented for some time.

In a nutshell, there are several benefits from having continuous database deployments:

  1. Source Control.  Source Control.  Do I need to repeat this again?  The database IS code. It belongs in a source control system of your choice.  Yes, it can be difficult.  Do it anyway.  You’ll thank me later.  Note: Backups are NOT a form of source control.
  2. Make It Suck Less (MISL).  I’m borrowing this phrase from a former co-worker (thanks Matt Meyer!) and it’s one that I use often.  Database continuous delivery makes my life easier as a DBA.  If I have the appropriate tools and processes in place, deployments become a point and click event.  Heck, you can even SCHEDULE it.  See #4.
  3. Process consistency.  I’m a DBA that supports multiple application development teams.  Each team wants to deploy database changes differently.  Some use DACPACs, others use scripts.  This causes a bottleneck simply because there is (currently) one of me and 40 developers.  By having DLM in place, EVERYBODY follows the same path to get things into Production.  As a side note, I’m also a believer that DBAs should follow the same path, even for the administrative databases they control.  Just because you might have sysadmin rights to production doesn’t mean you get to go outside the process in a non-emergency situation.
  4. Trusting the tools.  I need to trust the tools to ensure that things go smoothly.  This is hard for DBAs, I think.  We want to see what is changing.  We want to touch it, examine it, play with it, throw it against the wall and see what sticks.  We become the bottleneck for Agile environments.  If we trust the tools that are in place, we remove the bottleneck from the process and we help our development teams become even more Agile.
  5. Security.  I’m a stickler for permissions and maintaining security in all environments.  Continuous database delivery helps me to lock things down.  If the process is implemented and the tools are trusted, then developers don’t need modification permissions to anything in any environment.  Sure, give them read permissions, but the ability to change things outside of source control should be prevented.  If you don’t lock things down to the process itself, mass chaos can erupt.

All of these help to address the elephant in the room.

The Switch

Ok, I’ll admit that I wasn’t always sold on database continuous delivery.  I was that skeptical database administrator that thought it was all hokum.  That all changed.

Several years ago I was a part of a project at Farm Credit Services of America to ease deployment pain.  At the time, we were deploying things daily, and the database was definitely a bottleneck.  It was all very manual.

Then I got to meet Alex Yates (B|T) and Sean Newham (T), both from Red Gate at the time.  We were able to construct an agreement with Red Gate to get them onsite (all the way from the UK!) to help us architect a continuous delivery process.

After spending 3 days in a room with Alex, Sean and a team of developers, I was completely sold.  Alex and Sean clearly and efficiently explained all of the parts of the process and how things should work.  It was VERY clear to me that database continuous delivery would deliver all the points I’ve listed above.  Simply put, it would make my job EASIER!!!

Easier.  Let that sink in for a moment.  Why wouldn’t you want a process that makes things suck less?

[Plug] Alex now runs DLM Consultants and I can’t recommend him enough.  If you want to implement database lifecycle management in your organization but don’t know where to start, he can definitely help you out.

Summary

I’m a die-hard operational DBA.  I love the internals of SQL Server.  That being said, I am completely sold on the value that database continuous delivery brings to the table.  By implementing it correctly, you can secure your environment, make things suck less, trust the process and remove the database as a potential bottleneck.  If you do not have it in place currently, I would urge you to at least take a look at it.

It’s time we as DBAs kicked the elephant out of the room.

 

 

Posted in TSQL Tuesday | Tagged , , | Leave a comment

I’m Speaking! SQL Saturday #617 – Pensacola

In early June, I’ll be traveling to Pensacola, Florida for their annual SQL Saturday. This will be my second time attending their event.  Last year was fantastic and I am looking forward to returning.

The sun, the beach, and of course SQL Server training! How can you beat a weekend in sunny Florida AND have free training as well? The answer is, you can’t!

Here is the session that I’ll be presenting in room 2171 @ 8:00AM, which is one of my favorites:

SQL Server Databaseology: A Deep Dive Into Database Internals

Have you ever taken apart a toaster or an alarm clock just to see how it worked? Ever wondered how that database actually functions at the record level, behind the scenes? SQL Server Databaseology is the study of SQL Server databases and their structures down to the very core of the records themselves. In this session, we will explore some of the deep inner workings of a SQL Server database at the record and page level.  You will walk away with a better understanding of how SQL Server stores data and that knowledge will allow you to build better, faster databases.

Honestly, is it ever too early to learn about internals?  I think not! Bring your coffee and let’s dive in!

How Can You Beat FREE Training?

Remember that these events are a FREE day of awesome SQL Server training and professional development.  The only cost to you is if you decide to purchase lunch! Check out the schedule and register.  In today’s economy, how can you afford to pass up FREE training?!?!  The answer is that you can’t.  Go register, get out there and learn.  I triple dog dare you.

If you make it out to see my session, make sure to come up and introduce yourself as I’d love to meet you!

See you at the beach!!

Posted in Community, Presenting | Tagged , , , | Leave a comment

SSIS – Adding Connections

Sometimes you get reminded that you don’t know everything about SQL Server and its various products. Every time I run into something like this, I am going to write a quick blog post about it.  It’s something new I’m trying.

While recently examining an existing SSIS package before implementing some needed changes, I noticed the connection manager for multiple flat files.

Normally, when processing multiple files, I use a sequence container and loop through them.  It’s usually a cumbersome process to set up, and you must configure variables.  I was not aware this multiple flat file type of connection manager existed. No wonder, because it’s not within the normal list.  This handy thing could only be found when I looked under “New Connection”.

Right-click in the blank space of the Connection Manager pane and select “New Connection” from the resulting dialog window.

Suddenly a whole new list of connections is now available!

Just a reminder that it pays, at times, to click around and discover new things!

Enjoy!

Posted in SSIS | Tagged | 1 Comment

Quick Script: Finding Specific Columns

I like scripts.  Scripts that help me do things.

Working in the financial industry, there are always things that you have to ensure are protected.  More specifically, social security number and date of birth.

If your environment is like most, there are legacy applications that have a database behind the scenes that when developed 20 years ago didn’t take this into account.  Yes, these types of systems still exist.  I promise you that.

Anyway, I wrote this quick script to interrogate each database on the server, looking for a specific column name pattern.  Keep in mind, you might need to adjust the data types.  In the script below, I was looking for social security numbers, which are usually stored as strings.  This is because a social security number can start with a zero, and SQL Server will happily truncate that off if it is stored as a numeric value.
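To illustrate the leading-zero problem, here is a contrived sketch with a made-up number:

```sql
-- A numeric type silently drops the leading zero; a string preserves it
DECLARE @ssn_numeric INT        = 012345678;   -- stored as 12345678
DECLARE @ssn_string  VARCHAR(9) = '012345678'; -- leading zero preserved

SELECT @ssn_numeric AS NumericSSN  -- 12345678 (8 digits - data loss!)
     , @ssn_string  AS StringSSN;  -- 012345678
```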

NOTE:  I’ve run this on production systems before and had zero issues.  However, your mileage may vary, so run it at your own risk.  Do not run code you find on the internet on production systems without testing it first.

/***************************************************************
  Author: John Morehouse
  Summary: Interrogates each database looking for a specific column name.
 
  You may alter this code for your own purposes. You may republish altered code as long as you give due credit. 
 
  THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE.
***************************************************************/
DECLARE @dbs AS TABLE ( name sysname, processed BIT );
DECLARE @sql VARCHAR(2000);
DECLARE @dbName sysname;

IF OBJECT_ID('tempdb..#results') IS NOT NULL
BEGIN
    DROP TABLE #results;
END

CREATE TABLE #results ( dbName sysname
                      , tableName VARCHAR(100)
                      , columnName VARCHAR(100)
                      , DataType VARCHAR(100)
                      , MaxDataLength INT
                      , theRowCount INT );

INSERT INTO @dbs ( name, processed )
SELECT name, 0
FROM sys.databases
WHERE database_id > 6
  AND [state] = 0; --online

WHILE EXISTS ( SELECT 1 FROM @dbs WHERE processed = 0 )
BEGIN
    SELECT TOP 1 @dbName = name FROM @dbs WHERE processed = 0;

    SET @sql =
    'INSERT #results (dbName, tableName, columnName, DataType, MaxDataLength, theRowCount)
    SELECT ''' + @dbName + ''', t.name, c.name, st.name, c.max_length, p.rows
    FROM [' + @dbName + '].sys.columns c
        INNER JOIN [' + @dbName + '].sys.tables t ON c.object_id = t.object_id
        INNER JOIN [' + @dbName + '].sys.systypes st ON c.user_type_id = st.xusertype
        INNER JOIN [' + @dbName + '].sys.partitions p ON t.object_id = p.object_id
        INNER JOIN [' + @dbName + '].sys.allocation_units au ON au.container_id =
            CASE
                WHEN au.type IN (1,3) THEN p.hobt_id
                WHEN au.type IN (2) THEN p.partition_id
            END
    WHERE (c.name LIKE ''TIN%''
        OR c.name LIKE ''TIN_TX%''
        OR c.name LIKE ''%SSN%'') -- looking for a specific column name pattern
        AND c.object_id > 100 -- exclude system objects
        AND st.name IN (''varchar'', ''nvarchar'') -- due to leading zeros, should be a n/varchar data type
        AND p.index_id IN (0,1) -- heap or clustered index
        AND au.type = 1 --IN_ROW_DATA only'

    --PRINT @sql
    BEGIN TRY
        EXEC (@sql);
    END TRY
    BEGIN CATCH
        SELECT ERROR_LINE() AS 'Error Line'
             , ERROR_MESSAGE() AS 'Error Message'
             , ERROR_NUMBER() AS 'Error Number'
             , @dbName AS 'Database'
    END CATCH

    UPDATE @dbs
    SET processed = 1
    WHERE name = @dbName;
END

SELECT * FROM #results;
GO

Enjoy!

Posted in T-SQL | Tagged | Leave a comment

I’m Speaking! Code PaLOUsa!

This summer I’m going to try something a little different.  I’ll be speaking at a technology conference that is not specifically for SQL Server.  That’s right, I’m branching out.  But don’t you worry, I’ll still be talking about my favorite subject, SQL Server Internals!

So I’m happy to say that I’ll be speaking at Code PaLOUsa (see what they did there? LOU? Louisville?) right here in Louisville, Kentucky on June 8th, 2017 at 3:15PM.  While I’ve never had the opportunity to attend in previous years, I’m told it is an outstanding conference.  I am really looking forward to speaking there!

SQL Server Databaseology: A Deep Dive Into Database Internals

Have you ever taken apart a toaster or an alarm clock just to see how it worked? Ever wondered how that database actually functions at the record level, behind the scenes? SQL Server Databaseology is the study of SQL Server databases and their structures down to the very core of the records themselves. In this session, we will explore some of the deep inner workings of a SQL Server database at the record and page level.  You will walk away with a better understanding of how SQL Server stores data and that knowledge will allow you to build better, faster databases.

But I’m a Developer!

Ok, so what?  Do you build databases within SQL Server to support your awesome applications?  Probably.  Ever think that those databases could perform better?  If so, this is the session for you!  We’ll talk about how SQL Server handles things and why a good database design is critical to your application!

Trust me.  This is a session anyone who builds SQL Server databases should attend.

Training

While this conference isn’t free like SQL Saturdays are, two days of training for $300 is a good deal.   This is a small investment in your career, so if you are in the area and want to learn about SQL Server internals, among other things, make the investment and come see me.

Make sure to introduce yourself.  I love to meet new people!!

See you there!!

Posted in Community, Presenting | Tagged , , , | Leave a comment

T-SQL Tuesday #89 – There is No Cloud

T-SQL Tuesday is a monthly blog gathering for the SQL Server/Data Professional community.  It is the brainchild of Adam Machanic (B|T) and is not limited to just things around the SQL Server database engine. Each month a blogger hosts the event and anybody who wants to contribute can write a post about that month’s topic. You can find a list of all topics at http://tsqltuesday.com/.

This is the 89th MONTH for T-SQL Tuesdays, which is a great representation of the #sqlfamily and how we love to share our knowledge and opinions.

This month’s T-SQL Tuesday topic is about how the global shift to the Cloud and/or automation will affect today’s DBAs and comes from Koen Verbeeck (B|T).

Let me be upfront.  As a DBA, I do not fear the cloud.   After all, there really isn’t a “cloud”; it’s just someone else’s computer.  I don’t shudder in terror when someone mentions moving to the cloud; rather, I embrace it if it makes sense for the project or problem at hand.  I think the cloud can solve issues such as hardware limitations, licensing, or budget constraints.

While I have messed around with the cloud to some extent, it is still a database sitting on a SQL Server.  It just happens that the server isn’t in my local data center.  I don’t view this any differently than if I have to work on my DR server, which is located many miles away.

I work for a fairly slow moving financial institution.  This does not mean we don’t adopt new technology, but the leadership is very careful when deciding to move in a certain direction. Since we service rural America’s farmers, these decisions could have a huge impact on our customers’ ability to operate.    The cloud, at least from a database perspective, is not something that I think is even on the radar.  I believe that we will get there eventually, but not in the next year or two, I would imagine.

Of course, this also means that I don’t get the shiny new cloud toys to play with either.  I had the ability to work with the cloud some years ago on a side project, but that was very limited.  It was also at a time when Azure was fairly young and not as robust as it is today. Learning new skills around the Cloud is on my to-do list, and one of these days I’ll get to it.  I think with the help of MSDN, it’s a lot easier to play around with new technologies.

As far as automation is concerned, I’m all for it.  I tend to have a Make It Suck Less (MISL) mentality and will automate anything under the hood that I can.  This just makes my day-to-day job easier, which in turn allows me to focus on what I enjoy doing: architecting and digging around the internals of SQL Server.  That is my passion; well, that and presenting/teaching.

There really isn’t that much of a change for my current day-to-day job.  I would look forward to the ability to do more with the cloud and expand my knowledge base, though.

Let’s face it, whether it is a database in Azure or SQL Server running on an Azure VM, it’s still a database and it is still SQL Server.  Either way you slice it, there needs to be a DBA somewhere in the picture to make sure things run smoothly.

As far as the robots, well, I’ve got a bat for when SkyNet takes over. 😉

Posted in TSQL Tuesday | Tagged | 1 Comment

Schedulers To Rule Them All

Sometimes things just don’t make sense.

One of our production servers has 4×8 (4 sockets, 8 cores) processors with hyper-threading enabled.  This results in 32 physical cores or 64 logical cores.  Therefore, this particular server should have 4 NUMA nodes available.

However, I was looking at the dashboard for this particular server through SentryOne.   I happened to notice that the server was only showing 3 buffers for SQL Server to utilize.  If you look at the image, the 3 purple blocks on the left represent the number of buffers the server has.  Each block reflects a one-to-one ratio of buffer to NUMA node.  If you have two NUMA nodes, you would have two blocks represented.

As you can see, I’ve got 3 blocks, so 3 buffers are aligned with 3 NUMA nodes, which is not expected.  Where is the fourth buffer?  Did it just not show up for work?

So I started to investigate it.

SQL Server is pretty versatile.  It’s got dynamic management views (DMVs) that will tell you a number of things if you know where to look.  In this case, I needed the ones that describe the underlying core structure.  Namely, these:

  • sys.dm_os_nodes
  • sys.dm_os_schedulers

Investigation

The DMV sys.dm_os_nodes did indeed confirm that the server has 4 NUMA nodes available.  SQL Server sees that there are four of them.  A node ID of 64 represents the Dedicated Administrator Connection (DAC).  This node guarantees that if you need to use the DAC, you’ll have resources available to do the work.

Also notice that node_id of 3 is currently set to “OFFLINE“.  That’s curious.  We will come back to that.
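For reference, the query I ran against the DMV looked something along these lines (column list trimmed):

```sql
SELECT node_id
     , node_state_desc        -- ONLINE, OFFLINE, ONLINE DAC, etc.
     , memory_node_id
     , online_scheduler_count
FROM sys.dm_os_nodes;
```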

You could also use the sys.dm_os_schedulers DMV to find out how many nodes it has:

SELECT DISTINCT parent_node_id
FROM sys.dm_os_schedulers

I have now confirmed in two different ways that SentryOne should be reflecting 4 memory buffers.  Let’s take a closer look at what this DMV is telling us.  A scheduler is essentially a gatekeeper: schedulers coordinate processing time on the processors for worker processes.  There is a one-to-one correlation between the number of cores and the number of schedulers.  If you have a 32-core system, you have 32 schedulers to process end-user requests.

Just as a side note: while there is a direct link between schedulers and processors, this does not mean that each scheduler is limited to a specific processor.  A scheduler assigned to processor #1 could end up working on processor #2, assuming the affinity mask IS NOT set.
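If you want to check whether an affinity mask is set, sp_configure will tell you (a quick sketch; a run_value of 0 means SQL Server manages scheduler placement itself):

```sql
-- 'affinity mask' is an advanced option, so expose it first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'affinity mask';  -- run_value of 0 = no affinity mask set
```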

In this DMV, I happened to notice that each scheduler also has an available status.  These statuses are:

  • HIDDEN ONLINE
  • HIDDEN OFFLINE
  • VISIBLE ONLINE
  • VISIBLE OFFLINE
  • VISIBLE ONLINE (DAC)
  • HOT_ADDED

Hidden schedulers are used to process requests that are internal to the engine itself.  Visible schedulers are used to handle end-user requests.  When you run a casual SELECT * query, it will utilize a visible scheduler to process the query.  With this information, if I have a 64-core server and all is well, I should have 64 visible schedulers online to process requests.
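A simple aggregate over sys.dm_os_schedulers shows how the schedulers break down by status (on a healthy 64-core server, you would expect 64 of them to be VISIBLE ONLINE):

```sql
SELECT [status]
     , COUNT(*) AS scheduler_count
FROM sys.dm_os_schedulers
GROUP BY [status]
ORDER BY scheduler_count DESC;
```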

However, I discovered that some of the schedulers were set to “VISIBLE OFFLINE”.  This essentially means that those particular schedulers are unavailable to SQL Server for some reason.   How many offline schedulers did I have? A quick query showed 24 schedulers currently offline.  24 offline logical cores means that 12 physical cores are offline.

But why would a scheduler be set to “VISIBLE OFFLINE”?

I wasn’t able to find much online as to why SQL Server would only have 40 logical cores online and available.  So I started to wonder if SQL Server would dynamically spin these up if it was under CPU pressure.

After my google-fu failed me and I had some conversations with Chris Yates (B|T), we thought it was a good idea to turn to my good friend David Klee (B|T) at HeraFlux Technologies.  David is a longtime friend from Omaha, Nebraska, and is wicked smart about this stuff.   As a side note, being in the SQL Server community and knowing who to call makes a world of difference.

Later that evening, David called me and we discussed what I was seeing.   Within about 15 minutes, David asked which ISO we had installed on the server.   Given that the server was installed and running prior to my starting at the company, I wasn’t sure which ISO; all I knew was that it was SQL Server 2012 Enterprise Edition.  David went on to tell me that there are two different versions of the SQL Server 2012 Enterprise ISO.  One is core-based, which allows you to utilize all of the cores available to the server.  The other limits you to 20 physical cores, or 40 logical cores with hyper-threading.

Wait.  What?  I’m sure my jaw hit the floor, as I had never heard of that.  David gave me some things to check the following morning.

You Can’t Handle The Truth

The next morning, I checked the core counts again, and there it was: 40 cores online.  Things were starting to make some sense.

One of the things that David said to check was the server error log.  If you are using the ISO that limits the number of cores, there will be an entry in the log upon startup.  Given that we had just recently patched the operating system, I could easily check to see if that message was there.

Shoot.  The incorrect ISO was installed, and we are limited to only 40 logical cores. This means that a portion of our 64 cores can’t even be utilized to process queries.  It also potentially means that we could experience other issues due to the fact that the workload is not balanced across all of the processors.  We haven’t experienced this (yet), but I suspect that as time goes on we will start to see some evidence of it.

Now that I had a better understanding of what had happened, I changed my Google queries and was able to locate some references to this particular “issue”.

https://blogs.msdn.microsoft.com/sql_shep/2012/06/21/sql-server-2012-license-core-limitaion/

From that blog post, it sounds like an easy fix; however, we will have to test it in some fashion, as this is one of our production SQL Servers.  And it’s clustered.

Update 2017-04-01 (not an April Fool’s Joke):

Jonathan Kehayias (B|T) has an excellent article on this very subject over at SQLPerformance.com.  I highly recommend taking a look.  One thing that he mentions that I didn’t consider is that the workload is now unbalanced across the remaining online CPUs.  This in and of itself can cause issues.  Take a look!

Summary

If you have SQL Server 2012 Enterprise edition and you have more than 20 physical cores available, you might want to check to ensure the correct ISO was installed.  If not, you could be licensing cores that you can’t even utilize.  Who knew?

Here’s a script that will help you determine whether or not you need to investigate further. It should work with SQL Server 2008 and up.

SELECT CASE WHEN COUNT(1) < ( SELECT COUNT(1)
                              FROM sys.dm_os_schedulers
                              WHERE [status] IN ( 'VISIBLE OFFLINE',
                                                  'VISIBLE ONLINE' ) )
            THEN 'ERROR - Offline Schedulers Detected'
            ELSE 'SUCCESS! No Offline Schedulers Detected'
       END AS 'Status Check'
FROM sys.dm_os_schedulers
WHERE [status] = 'VISIBLE ONLINE'
GO

Stay tuned to see how the fix goes!

 

Posted in Administration | Tagged | 6 Comments

May MVP – Bob Pusateri

Whoa. I’m way behind in my blogging.

For the month of May, my MVP choice is Bob Pusateri (B|T).

Bob is a SQL Server DBA who hails from the Windy City of Chicago, Illinois. He is very active within the SQL community as an avid speaker and blogger. I’ve had the pleasure of getting to know Bob over the years and have found he is always more than willing to help out wherever he can.   Conversations with Bob are very enlightening because we are both very interested in SQL Server internals. We have even discussed doing a joint session together.

Bob is also involved in his local SQL Server user group.  You can also find him on Twitter helping with questions on the #sqlhelp hashtag from time to time.

If you see Bob around an event, definitely go up and introduce yourself.  I think that Bob is deserving of this nomination due to his passion for sharing his knowledge and his continued support of the SQL Server community.

Additional Highlights:

  • Bob is a Friend of Red Gate (FoRG) for 3 years running
  • Speaking at the 2016 SQL PASS Summit in Seattle, WA
  • Published author – Tribal SQL
  • Certified SQL Server Master – SQL 2008
  • Speaker at many SQL Saturday events
  • Speaker at Virtual Chapters
  • Moderator for 2015 24-Hours of PASS
  • Member of the PASS Summit Program Committee for 5 years

Nominate Someone

As I have done previously, I challenge you to nominate someone in your circle who is worthy.  Even if it isn’t for the MVP award, nominate them for something, anything.  Even if you just simply email them to say thank you, I assure you it’ll make a difference to them.

If you do want to nominate someone for the Microsoft MVP award, you will need to know their email address and can submit the nomination here.

Posted in Community | Tagged | Leave a comment

I’m Speaking! SQL Saturday #548 – Kansas City

This weekend I’ll be traveling to Kansas City, Missouri for their annual SQL Saturday. This particular event is like going home for me, as my first time speaking at a SQL Saturday was in Kansas City.

One of my favorite things about this event is the BBQ Crawl the organizers host on the Friday before.  It’s similar to a “bar crawl,” where you hop from one bar to another, except this is done at BBQ establishments around the Kansas City metro area.  Since I’m a lover of all things BBQ, this is right up my alley and an excellent way to kick off the event.

Here is the session that I’ll be presenting in room ARP 212 @ 10:00AM, which is one of my favorites:

SQL Server Databaseology: A Deep Dive Into Database Internals

Have you ever taken apart a toaster or an alarm clock just to see how it worked? Ever wondered how that database actually functions at the record level, behind the scenes? SQL Server Databaseology is the study of SQL Server databases and their structures down to the very core of the records themselves. In this session, we will explore some of the deep inner workings of a SQL Server database at the record and page level.  You will walk away with a better understanding of how SQL Server stores data and that knowledge will allow you to build better, faster databases.

How Can You Beat FREE Training?

Remember that these events are a FREE day of awesome SQL Server training and professional development.  The only cost to you is if you decide to purchase lunch! Check out the schedule and register.  In today’s economy, how can you afford to pass up FREE training?!?!  The answer is that you can’t.  Go register, get out there and learn.  I triple dog dare you.

If you make it out to see my session, make sure to come up and introduce yourself as I’d love to meet you!

See you in Kansas City!!

Posted in Presenting, SQL Saturday | Tagged , | Leave a comment