SQL Server Read Scale Availability Groups (AKA Clusterless AGs)

What are Read Scale Availability Groups? (Also often referred to as clusterless availability groups)

In this blog post let's see what read scale AGs are, a few scenarios where they can be useful, and how to actually create them. So what exactly is a read scale AG? Well, Microsoft introduced these AGs in SQL Server 2017 to serve a single purpose: scaling out our read workloads. These AGs do not offer the HA capabilities we get with a typical AG deployment running on a failover cluster. Since there is no cluster behind the scenes, there is no concept of health checks and hence no automatic failovers. To keep it very simple, consider them Not Always On AGs 🙂

Well, when can I use Read Scale AGs?

Think of a scenario where all you want is to isolate your read workload for a given database (or databases) that has no HA requirements, and your users and applications are okay connecting directly to your secondary replica for read/reporting workloads. Do you really need clustering here? If you are absolutely sure that your AG is not being used to make your database highly available, why have a cluster and why deal with it?

Note: Read scale AGs do not offer high availability capabilities. This is not for your mission-critical database(s) that require HA with automatic failover. If you have to throw HA into the mix, just stick to traditional AGs.

If not for HA, how about disaster recovery?

We can certainly get DR capabilities with read scale AGs. The reason I say that is we can set up synchronous commit, which pretty much guarantees the same RPO as a traditional AG. Even in a traditional setup, whenever I think DR, it's a manual failover for me….YMMV.

Aaaalright, let's get into action. I have two standalone SQL Server 2019 instances joined to the same domain which are not participating in any kind of clustering…you know, just two simple VMs joined to my domain running Windows/SQL, nothing fancy.

First things first, I enabled the Always On Availability Groups feature in SQL Server Configuration Manager (I did this on both servers). You can see it says "This computer is not participating in a failover cluster". Duuuh!

I have a database (Not_AO_DB) on which I want to scale out my read/reporting workload. Well, I went ahead and restored the database manually with NORECOVERY on my second instance as part of preparing it to join the AG later on.
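If you prefer T-SQL over the GUI for that prep step, it is just a plain restore with NORECOVERY; the backup path below is a placeholder for wherever your full backup lives.

RESTORE DATABASE [Not_AO_DB]
FROM DISK = N'\\BackupShare\Not_AO_DB_FULL.bak'   -- placeholder path
WITH NORECOVERY;                                  -- keeps the DB restoring so it can join the AG later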

Nothing new so far, right? Now….from SSMS, I chose the AG creation wizard and gave my AG a name. Notice the Cluster type selection in the screenshot below; it should be NONE.

Proceed further and select the database(s) you want to join to the AG; in my case that is the "Not_AO_DB" database.

Now notice the highlighted Failover mode: there is only one option (Manual), and it can't be changed (remember, no clustering means no built-in automatic failover). Set your desired Availability mode to either synchronous or asynchronous commit.
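If you would rather script it out than click through the wizard, below is a rough sketch of the T-SQL the wizard ends up generating for a read scale AG. The AG name, the replica names (SQLNODE1/SQLNODE2), the domain and port 5022 are placeholders from my lab, so adjust them for your environment.

-- Run on the primary replica
CREATE AVAILABILITY GROUP [ReadScaleAG]
WITH (CLUSTER_TYPE = NONE)                         -- no WSFC behind the scenes
FOR DATABASE [Not_AO_DB]
REPLICA ON
    N'SQLNODE1' WITH (
        ENDPOINT_URL      = N'TCP://SQLNODE1.mylab.local:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = MANUAL,                -- the only option without a cluster
        SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL)   -- readable secondary, which is the whole point
    ),
    N'SQLNODE2' WITH (
        ENDPOINT_URL      = N'TCP://SQLNODE2.mylab.local:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = MANUAL,
        SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL)
    );

-- Run on the secondary replica
ALTER AVAILABILITY GROUP [ReadScaleAG] JOIN WITH (CLUSTER_TYPE = NONE);
ALTER DATABASE [Not_AO_DB] SET HADR AVAILABILITY GROUP = [ReadScaleAG];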

It is important to remember that all the good old rules for endpoints, such as TCP port communication and account permissions, still need to be in place. The actual SQL Server AG mechanics are the same with read scale AGs.
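If you are setting up the endpoints by hand instead of letting the wizard do it, the usual recipe on each replica looks something like this; the endpoint name, port and service account below are placeholders.

CREATE ENDPOINT [Hadr_endpoint]
    AS TCP (LISTENER_PORT = 5022)                  -- make sure this port is open between the replicas
    FOR DATABASE_MIRRORING (ROLE = ALL);

ALTER ENDPOINT [Hadr_endpoint] STATE = STARTED;

-- The account the other replica connects as needs CONNECT on this endpoint
GRANT CONNECT ON ENDPOINT::[Hadr_endpoint] TO [MYLAB\sqlsvc];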

Time for rules check…Next…Next…Finish!

That's it folks! That's pretty much all it takes to create a read scale availability group. Below is how my dashboard looked after the AG was created successfully.

In the next post, let's see how to fail over a read scale availability group, go over listener gotchas, and learn how it differs from a traditional AG.

Happy holidays!


Configure/Manage Error logs – SQL Server on Linux.

How to configure and manage SQL Server error logs in a Linux environment:

In this blog post let's see how to configure and manage SQL Server error logs on an instance running in a Linux environment. I am sure most of you are already recycling (re-initializing) your error logs (EXEC sp_cycle_errorlog) on a regular basis by scheduling a job and controlling the number of logs as needed on SQL@Windows. If you are not doing so, please go ahead and do yourself a favor…configure your error logs. By default, SQL@Windows keeps 6 archives (128 in SQL@Linux). Imagine your SQL Server getting restarted multiple times (let's say 6 times) in a short duration for whatever reason: oops! Your precious logs are already gone. See the screenshots below on how to do that for SQL@Windows.
Using object explorer in SSMS:


Using T-SQL:
USE [master]
GO
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'NumErrorLogs', REG_DWORD, 20
GO

Now let's move on to SQL@Linux. If you look at the T-SQL code above, SSMS is basically updating Windows registry values behind the scenes, which isn't possible on Linux. (The registry is an exclusively Windows OS thing…Duh…..!) See the screenshot below: you don't even get the Configure option in SSMS under SQL Server Logs (BTW, did you notice the tiny penguin!).

On my lab server with default settings, you can see that I have 82 error logs as of now (by default, the max is 128 in SQL@Linux and 6 in SQL@Windows)!
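If you are wondering how I counted them, the undocumented (but widely used) xp_enumerrorlogs lists every error log file the instance currently has, one row per archive, so counting rows is a quick sanity check.

EXEC master.dbo.xp_enumerrorlogs;   -- returns the archive number, date and size of each log file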

Let's say I want to set it to 20. All I have to do is use the mssql-conf utility and specify the desired number.

sudo /opt/mssql/bin/mssql-conf set errorlog.numerrorlogs 20

The change takes effect after a service restart, and you can validate it by reading the contents of the mssql.conf file as shown below.

Note: Old error log files should be manually removed or moved as needed.

Hope this helps, Happy Monday!

SQL Server 2019 – VLDB stuck in "In Recovery"!

Getting straight to the point: chances are very high that you will run into the issue we are going to look at in this blog post when you try to restore a database larger than 30 TB on SQL Server 2019 CU1.

Environment:

SQL Server 2019 CU1
DB1 – 6 TB.
DB2 – 30+TB.

Behavior:

Restoring the 6 TB database – everything goes normally, as it should.
Restoring the 30 TB database – the database stays in the "In Recovery" state indefinitely. The infamous "non-yielding scheduler" stack dump gets created when the recovery process enters the roll forward (REDO) phase. To give you an idea, look at the log entries below.

Message

Recovery of database ‘VLDB’ (8) is 0% complete (approximately 138 seconds remain). Phase 2 of 3.
Recovery of database ‘VLDB’ (8) is 22% complete (approximately 7 seconds remain). Phase 2 of 3.

***Unable to get thread context for spid 0
* *******************************************************************************
** BEGIN STACK DUMP:
* MM/DD/YY HH:MM:SS spid 4368
** Non-yielding Scheduler
** *******************************************************************************
Stack Signature for the dump is 0x00000000000001BC
External dump process return code 0x20000001.
External dump process returned no errors.
Process 0:0:0 (0x110c) Worker 0x000001B39D42C160 appears to be non-yielding on Scheduler 1. Thread creation time: 13234458222710.
Approx Thread CPU Used: kernel 0 ms, user 70375 ms. Process Utilization 16%. System Idle 79%. Interval: 70579 ms.
Process 0:0:0 (0x110c) Worker 0x000001B39D42C160 appears to be non-yielding on Scheduler 1. Thread creation time: 13234458222710.
Process 0:0:0 (0x110c) Worker 0x000001B39D42C160 appears to be non-yielding on Scheduler 1. Thread creation time: 13234458222710.
Process 0:0:0 (0x110c) Worker 0x000001B39D42C160 appears to be non-yielding on Scheduler 1. Thread creation time: 13234458222710.
Process 0:0:0 (0x110c) Worker 0x000001B39D42C160 appears to be non-yielding on Scheduler 1. Thread creation time: 13234458222710.
Process 0:0:0 (0x110c) Worker 0x000001B39D42C160 appears to be non-yielding on Scheduler 1. Thread creation time: 13234458222710.

Note: I noticed this exact behavior consistently on multiple servers, irrespective of which method we used to restore this DB.
Native restore – fails
Attach/Detach – fails
Restore using EMC DDBoost – fails
Restoring a virtual database by presenting virtual data copies using third-party tools such as Delphix – fails
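Side note: while the database is sitting in "In Recovery", you can at least watch whether the roll forward is moving at all. Below is a rough sketch using sys.dm_exec_requests (recovery shows up there as a DB STARTUP command).

SELECT session_id,
       command,
       percent_complete,
       estimated_completion_time / 60000.0 AS est_minutes_remaining  -- the column is in milliseconds
FROM   sys.dm_exec_requests
WHERE  command LIKE 'DB STARTUP%';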

Fix:
Drop the database that got stuck in "In Recovery" and apply 2019 CU4 or above.

I am not sure at exactly what size SQL 2019 CU1 starts to panic, but it was happy with the 6 TB DB and caused misery for the 30 TB DB. BTW…at the time of writing this blog post, CU5 is available, which I strongly recommend if you are planning to patch.

 

Happy Monday!

 

Uninstalling CUs/Downgrading SQL Server in Linux (RHEL 7)

Back in 2011 I wrote an article on how to uninstall service packs in SQL Server 2008/R2, which was quite a popular post at the time based on my WordPress stats. (Well, SQL 2008 was the very first version that allowed us to do that.) Fast forward to the year 2020, where we can have SQL Server running on Linux in production environments; let's see how to achieve the same thing if you ever want to uninstall a cumulative update. Did you notice I said CU, not service pack, this time? See this post for more info on the Modern Servicing Model (MSM).

First things first, let's see what we have on the box. Pulling that information is quite simple: run "sudo yum info mssql-server", which should return something like this.

Of course you can get this info from SQL Server itself, or in several other ways in Linux. Okay, now we know we have SQL Server 2019 CU5 running on this server to work with.
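For example, a couple of SERVERPROPERTY calls from inside SQL Server give you the build number and CU level; the values in the comments are just examples.

SELECT SERVERPROPERTY('ProductVersion')     AS ProductVersion,      -- e.g. 15.0.4033.1 for 2019 CU4
       SERVERPROPERTY('ProductUpdateLevel') AS ProductUpdateLevel;  -- e.g. CU4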

Let's just assume CU5 broke something in my database and I want to go back to CU4. How do I do that? Run "sudo yum downgrade mssql-server-<your_desired_version_number>.x86_64". Okay, so how do I get those version number details? Microsoft maintains them and updates them regularly in the release notes. Check the links below for 2019 or 2017 depending on your version.
SQL 2019 release notes.
SQL 2017 release notes.

Also, we can get this information directly using yum superpowers, as shown below…
sudo yum list mssql-server --showduplicates

If you want additional details for individual packages as shown below, just replace "list" with "info"; the command would be "sudo yum info mssql-server --showduplicates".

Now that we have all the information we need, let's get to the actual fun part.

Downgrading from SQL 2019 CU5 to CU4:
Command:
sudo yum downgrade mssql-server-15.0.4033.1-2.x86_64

As you can see, it clearly states that CU5 was removed and CU4 was downloaded and installed successfully 🙂

BTW, the basic golden rule still remains the same: you can't downgrade to a lower major version (from SQL 2019 to 2017 –> nope, that's not how downgrade works)! You can move between CUs however you like as long as you are staying on the same major version.

Always On availability group(s) – Rolling upgrades

Aim: To upgrade/migrate (side-by-side) SQL Server 2014 availability group(s) running on Windows Server 2012 R2 to SQL Server 2019 running on Windows Server 2016 with the least amount of downtime.

A couple of years ago, I wrote a blog post explaining how to upgrade the Windows OS from 2012 R2 to 2016 on nodes participating in failover clustering, with minimal downtime, using the rolling upgrade technique. In this blog post I will be sharing something similar, but throwing SQL Server availability groups into the mix. So let me briefly explain what we are trying to achieve here.

Current environment:
I have a two-node failover cluster (Windows Server 2012 R2) hosting a SQL Server 2014 Always On availability group in synchronous commit mode. I have a listener configured for my applications to connect to. These replicas are running on the latest build of SQL 2014 as of the date this post is published.

As you can see, W12SQL2016A/B are my two replicas (nodes), which are running Win 2012 R2 + SQL 2014.

Originally I thought of installing SQL 2016 (hence the host names), but ended up installing SQL 2014 for now based on our specific requirements. I didn't want to change the host names since I had my Windows failover cluster all set up by this time, and I really didn't want to deal with fixing any annoying errors that might pop up from messing with my nodes' host names. Anyway…the bottom line is that I have a SQL 2014 AG running on Win 2012 R2 which needs to be upgraded/migrated to SQL 2019 running on Windows 2016.


Listener Name: RestAGL

Goal:
To upgrade these SQL instances to 2019 running on Windows Server 2016 with very minimal downtime and no configuration changes for the app teams, assuming in-place upgrades are not allowed.

What’s the high level plan:

  • Take full backups.
  • Add the W16SQL2019A and W16SQL2019B nodes to the same Windows cluster, leveraging mixed mode.
  • Install SQL 2019 and add these two nodes as replicas at the SQL Server AOAG layer.
  • Join the databases and let the magic happen.
  • On the final cutover date/time, fail over to SQL 2019 and remove the old replicas from the AG.
  • Evict both Windows 2012 R2 nodes from the cluster and raise the cluster functional level to 2016.

Now, let’s see this in action one step at a time.

Below is a screenshot of all the SQL instances I will be working with. To begin with, I have two brand new SQL Server 2019 standalone instances (W16SQL2019A and W16SQL2019B), on which I just enabled the HADR feature.

Let's go! I added the new Windows Server 2016 nodes to the existing Windows failover cluster, which is running at the 2012 R2 functional level.

Note: You don't want to run the WSFC in mixed mode for long periods; Microsoft might not support you if you stay in mixed mode for more than 4 weeks. Mixed mode exists only to let you perform rolling upgrades so your systems stay truly highly available. Wrap up the entire process in a day or two and be done with it.

This is expected. For more details on this, hop over to the blog post I linked at the beginning of this post.

Now it's time to jump into SQL Server and add these servers as replicas to our AG, as sketched below.
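I used the SSMS wizard for this, but the T-SQL equivalent is roughly the sketch below, run on the current primary (W12SQL2016A). The AG name [RestAG], the FQDNs and port 5022 are assumptions from my lab, so plug in your own values.

ALTER AVAILABILITY GROUP [RestAG]
ADD REPLICA ON N'W16SQL2019A' WITH (
    ENDPOINT_URL      = N'TCP://W16SQL2019A.mylab.local:5022',
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,   -- flipped to synchronous later, just before cutover
    FAILOVER_MODE     = MANUAL
);

ALTER AVAILABILITY GROUP [RestAG]
ADD REPLICA ON N'W16SQL2019B' WITH (
    ENDPOINT_URL      = N'TCP://W16SQL2019B.mylab.local:5022',
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
    FAILOVER_MODE     = MANUAL
);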

Awesome, so far so good 🙂

Let’s move on….Connecting to one of the SQL 2019 instances, below is what I have. Oops!!


I had to join the individual databases manually to the AG from both of the newly added SQL 2019 replicas (a one-liner per database, shown below). Once that was done, this is how my dashboard looked.
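For reference, joining a database from a secondary is a single statement per database once it has been restored there with NORECOVERY; [RestAG] is my assumed AG name and American is one of the databases from this demo.

-- Run on each newly added SQL 2019 replica, once per database
ALTER DATABASE [American] SET HADR AVAILABILITY GROUP = [RestAG];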

Also, I changed the failover mode to manual to make sure the cluster has no control over failing over my AG; I want total control over how and when my AG fails over until the entire upgrade process is complete. Hey BTW, did you take full backups?
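Switching the old replicas to manual failover is a quick MODIFY REPLICA on the current primary; again, [RestAG] is my assumed AG name.

ALTER AVAILABILITY GROUP [RestAG]
MODIFY REPLICA ON N'W12SQL2016A' WITH (FAILOVER_MODE = MANUAL);

ALTER AVAILABILITY GROUP [RestAG]
MODIFY REPLICA ON N'W12SQL2016B' WITH (FAILOVER_MODE = MANUAL);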

Did I mention, I have a table called “McD” in “American” database with one row in it? See below…

Now comes the fun part. Set the availability mode of one of the SQL 2019 instances to synchronous commit and perform a controlled manual failover. In my case, I selected W16SQL2019A, changed it to synchronous mode, and failed over my AG from W12SQL2016A (my current primary) to W16SQL2019A.
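In T-SQL terms, that controlled failover boils down to something like this (AG name [RestAG] assumed): flip the target replica to synchronous commit from the current primary, wait for it to report SYNCHRONIZED, and then issue the failover from the target replica itself.

-- On the current primary (W12SQL2016A)
ALTER AVAILABILITY GROUP [RestAG]
MODIFY REPLICA ON N'W16SQL2019A' WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT);

-- On the target replica (W16SQL2019A), once its databases are SYNCHRONIZED
ALTER AVAILABILITY GROUP [RestAG] FAILOVER;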

Awesome. At this point, W16SQL2019A has taken over the primary role, all the databases participating in the AG have been upgraded to SQL 2019, and the other SQL 2019 instance (W16SQL2019B in my case) will stay in sync from now on. The two SQL 2014 instances, however, will be in an unhealthy state; in fact, those databases become inaccessible at this point, since logs can't be shipped from a higher (2019) to a lower (2014) version. Duh!!!!!….

Perfectoo! Also, I have my table and data intact, double perfectoo!

If you are curious, this is how the error log looked. You can clearly see the internal database version getting upgraded from SQL 2014 all the way to 2019.

Below is a screenshot showing what to expect on the old SQL instances after failing the AG over to the newer version.

Time to do some cleanup now. I removed both SQL 2014 instances from the AG as replicas (see the statements below) and boom……PRESTO!
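The cleanup itself is just a couple of REMOVE REPLICA statements on the new primary; [RestAG] is, as before, my assumed AG name.

-- Run on the new primary (W16SQL2019A)
ALTER AVAILABILITY GROUP [RestAG] REMOVE REPLICA ON N'W12SQL2016A';
ALTER AVAILABILITY GROUP [RestAG] REMOVE REPLICA ON N'W12SQL2016B';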

The only thing left now is to take care of the WSFC by evicting the old Windows Server 2012 R2 nodes and raising the cluster functional level to come out of mixed mode.

That’s it folks. Hope this is helpful, Cheers!