How to change IP addresses in an Always On Availability Group

In this blog post, let’s see how to change all the IP addresses involved in a typical Always On Availability Group configuration. In my setup, I have an AG with two replicas and a listener. Here is my current environment, on which I am going to change all the underlying IP addresses.

Okay, so there you have it: two IPs for my two nodes, one IP for my Windows cluster, and one IP for my Availability Group listener. Nothing fancy really… I am using host names in my endpoints, no dedicated IPs in my case.

Step 1: Change the IP addresses of both replicas.
For the W16SQL2019A node, I am changing the IP address from 192.168.1.182 to 192.168.1.151.
For the W16SQL2019B node, I am changing the IP address to 192.168.1.152.

Go to the NIC on the respective nodes and make the change; very straightforward.

Step 2: Change the Windows cluster core resource IP. (I am changing from 192.168.1.185 to 192.168.1.153.)

Right-click, go to Properties, and edit the IP address.

Well, that’s a nice little warning. Once this is done, check the core cluster resource status and bring it online if it is offline at this point. Moving on…

Step 3: Change the IP address of the Always On Availability Group listener. (I am changing from 192.168.1.186 to 192.168.1.154.)

Go to the AG role in Failover Cluster Manager, locate the server name in the bottom pane, right-click, go to Properties, and change the static IP address as needed.
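
If you prefer T-SQL for the listener piece, you can add the new IP from the primary replica instead. This is a minimal sketch; the AG name, listener name, and subnet mask below are placeholders from my lab, not a prescription:

    -- Run on the primary replica; [MyAG] and 'MyAGL' are hypothetical names
    ALTER AVAILABILITY GROUP [MyAG]
    MODIFY LISTENER 'MyAGL' (ADD IP ('192.168.1.154', '255.255.255.0'));

Note that T-SQL can only ADD a listener IP; removing the old one still happens in Failover Cluster Manager, as described above.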

Verify the health status and validate in SSMS…
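
A quick DMV query works just as well as the dashboard for this check; everything here is standard catalog/DMV names:

    -- Run on the primary: one row per replica with role, connection and sync health
    SELECT ar.replica_server_name,
           ars.role_desc,
           ars.connected_state_desc,
           ars.synchronization_health_desc
    FROM sys.dm_hadr_availability_replica_states AS ars
    JOIN sys.availability_replicas AS ar
         ON ars.replica_id = ar.replica_id;

You want every replica showing CONNECTED and HEALTHY before calling it done.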

There we go, we have successfully changed all the IP addresses in our Always On Availability Group.

Things to remember:

  • Expect nodes to lose connectivity with each other (and go offline in the cluster as a result) while the IP addresses are being changed.
  • The process remains the same even if you are moving to a different subnet. Just make sure all the IPs involved are changed to reflect the new subnet, and that the cluster resources and everything else show online.
  • If you are using dedicated NICs for AOAG traffic, you should change the IP addresses for your endpoints as well on all the replicas; see the sketch after this list.
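
For that last bullet, here is a minimal sketch of the endpoint side; the endpoint name [Hadr_endpoint], the AG name, the server name, and the IPs are placeholders for whatever your dedicated NICs use:

    -- On each replica: bind the mirroring endpoint to the new dedicated IP
    ALTER ENDPOINT [Hadr_endpoint]
    AS TCP (LISTENER_PORT = 5022, LISTENER_IP = (192.168.2.151));

    -- On the primary: update the URL the AG uses to reach that replica
    ALTER AVAILABILITY GROUP [MyAG]
    MODIFY REPLICA ON N'W16SQL2019A'
    WITH (ENDPOINT_URL = N'TCP://192.168.2.151:5022');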


SQL Server Read Scale Availability Groups (AKA Clusterless AGs)

What are Read Scale Availability Groups? (Also often referred to as clusterless availability groups.)

In this blog post, let’s see what read scale AGs are, a few scenarios where they can be useful for us, and how to actually create them. So what exactly is a read scale AG? Well, Microsoft introduced these AGs in SQL 2017 to serve a single purpose: scaling out our read workloads. These AGs do not offer the HA capabilities we get with a typical AG deployment running on a failover cluster. Since there is no cluster behind the scenes, there is no concept of health checks, and hence no automatic failovers. To keep it very simple, consider them Not Always On AGs 🙂

Well, when can I use Read Scale AGs?

Think of a scenario where all you want is to isolate the read workload for a given database(s) with no HA requirements, and your users and applications are okay connecting directly to your secondary replica for read/reporting workloads. Do you really need clustering here? If you are absolutely sure that your AG is not being used to make your database highly available, why have a cluster, and why deal with it?

Note: Read scale AGs do not offer high availability capabilities. This is not for your mission-critical database(s) that require HA with automatic failover. If you have to throw HA into the mix, just stick to traditional AGs.

If not for HA, how about disaster recovery?

We can certainly get DR capabilities with read scale AGs. The reason I say that is that we can have a synchronous-commit setup, which pretty much guarantees the same RPO as a traditional AG would. Even in a traditional setup, whenever I think DR, it’s a manual failover for me… YMMV.

Aaaalright, let’s get into action. I have two standalone SQL Server 2019 instances joined to the same domain which are not participating in any kind of clustering… you know, just two simple VMs joined to my domain running Windows/SQL, nothing fancy.

First things first, I enabled the AG feature from SQL Server Configuration Manager (I did this on both servers); you can see it says “This computer is not participating in a failover cluster”. Duuuh!

I have a database (Not_AO_DB) whose read/reporting workload I want to scale out. Well, I went ahead and restored the database manually with NORECOVERY on my second instance, as part of preparing the DB to join my AG later on, along these lines:
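
This is just a minimal sketch of that prep step; the backup paths are hypothetical, and in real life you would restore the most recent full backup plus any log backups, all WITH NORECOVERY:

    -- On the second instance: leave the database restoring so it can join the AG
    RESTORE DATABASE [Not_AO_DB]
    FROM DISK = N'\\FileShare\Backups\Not_AO_DB.bak'  -- hypothetical path
    WITH NORECOVERY;

    RESTORE LOG [Not_AO_DB]
    FROM DISK = N'\\FileShare\Backups\Not_AO_DB.trn'  -- hypothetical path
    WITH NORECOVERY;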

Nothing new so far, right? Now… from SSMS, I chose the AG creation wizard, gave my AG a name, and notice the Cluster type selection in the screenshot below: it should be NONE.

Proceed further and select the database(s) that you want to join to the AG; I have the “Not_AO_DB” database in my case.

Now notice the highlighted Failover mode: you can see there is only one option (Manual), which can’t be changed (remember, no clustering means no built-in automatic failover). Set your desired Availability mode to either synchronous or asynchronous commit.

It is important to remember that all the good old rules for endpoints, like TCP port communication, account permissions, etc., still need to be in place. The actual SQL Server AG mechanics are exactly the same with read scale AGs; a minimal sketch follows.
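
For completeness, this is what that endpoint plumbing typically looks like; the endpoint name, port, and service account are placeholders, not prescriptions:

    -- Run on both replicas: create and start the mirroring endpoint
    CREATE ENDPOINT [Hadr_endpoint]
        STATE = STARTED
        AS TCP (LISTENER_PORT = 5022)
        FOR DATABASE_MIRRORING (ROLE = ALL);

    -- Let the other replica's service account connect (hypothetical account)
    GRANT CONNECT ON ENDPOINT::[Hadr_endpoint] TO [MYDOMAIN\sqlsvc];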

Time for rules check…Next…Next…Finish!

That’s it folks! That’s pretty much all it takes to create a read scale Availability Group. Below is how my dashboard looked after the successful creation of my AG.
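
If you’d rather script it than click through the wizard, the T-SQL equivalent boils down to CLUSTER_TYPE = NONE. This is a minimal sketch; the AG name, server names, and endpoint URLs are placeholders from my lab:

    -- On the primary instance
    CREATE AVAILABILITY GROUP [ReadScaleAG]
    WITH (CLUSTER_TYPE = NONE)
    FOR DATABASE [Not_AO_DB]
    REPLICA ON
        N'NODE1' WITH (ENDPOINT_URL = N'TCP://NODE1.mydomain.com:5022',
                       AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                       FAILOVER_MODE = MANUAL,
                       SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL)),
        N'NODE2' WITH (ENDPOINT_URL = N'TCP://NODE2.mydomain.com:5022',
                       AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                       FAILOVER_MODE = MANUAL,
                       SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL));

    -- On the secondary instance
    ALTER AVAILABILITY GROUP [ReadScaleAG] JOIN WITH (CLUSTER_TYPE = NONE);
    ALTER DATABASE [Not_AO_DB] SET HADR AVAILABILITY GROUP = [ReadScaleAG];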

In the next post, let’s see how to fail over a read scale availability group, look at listener GOTCHAS, and learn how it differs from a traditional AG.

Happy holidays!

Always On availability group(s) – Rolling upgrades

Aim: To upgrade/migrate (side-by-side) SQL Server 2014 Availability Group(s) running on Windows Server 2012 R2 to SQL Server 2019 running on Windows Server 2016 with the least amount of downtime.

A couple of years ago, I wrote a blog post explaining how to upgrade the Windows OS from 2012 R2 to 2016 on nodes participating in failover clustering with minimal downtime, using the rolling upgrade technique. In this blog post I will be sharing something similar, but throwing SQL Server Availability Groups into the mix. So let me briefly explain what we are trying to achieve here.

Current environment:
I’ve a two-node failover cluster (Windows Server 2012 R2) hosting a SQL Server 2014 Always On Availability Group in synchronous-commit mode. I have a listener configured for my applications to connect to. These replicas are running on the latest build of SQL 2014 as of the date this post is published.

As you can see, W12SQL2016A/B are my two replicas (nodes), which are running Win 2012 R2 + SQL 2014.

Originally I thought of installing SQL 2016 (hence the host names), but ended up installing SQL 2014 for now based on our specific requirements. I didn’t want to change the host names, as I had my Windows failover cluster all set up by this time, and I really didn’t want to deal with fixing any annoying errors that might pop up from messing with my nodes’ host names. Anyways… the bottom line is that I have a SQL 2014 AG running on Win 2012 R2 which needs to be upgraded/migrated to SQL 2019 running on Windows 2016.


Listener Name: RestAGL

Goal:
To upgrade these SQL instances to 2019 running on Windows Server 2016 with very minimal downtime and no configuration changes for the app teams, assuming in-place upgrades are not allowed.

What’s the high level plan:

  • Take full backups.
  • Add the W16SQL2019A and W16SQL2019B nodes to the same Windows cluster, leveraging mixed mode.
  • Install SQL 2019 and add these two nodes as replicas at the SQL Server AOAG layer.
  • Join the databases and let the magic happen.
  • On the final cutover date/time, fail over to SQL 2019 and remove the old replicas from the AG.
  • Evict both Windows 2012 R2 nodes from the cluster and raise the functional level to 2016.

Now, let’s see this in action one step at a time.

Below is a screenshot of all the SQL instances I will be working on. To begin with, I have two brand new SQL Server 2019 standalone instances (W16SQL2019A and W16SQL2019B), on which I just enabled the HADR feature.

Let’s go. I added the new Windows 2016 nodes to the existing Windows failover cluster, which is running at the 2012 R2 functional level.

Note: You don’t want to run a WSFC in mixed mode for long periods; Microsoft might not support you if you stay in mixed mode for more than 4 weeks. Mixed mode exists only to let you perform rolling upgrades and keep your systems truly highly available. Wrap up the entire process in a day or two and be done with it.

This is expected. For more details on this, hop over to the blog post I linked at the beginning of this post.

Now it’s time to jump into SQL Server to add these servers as replicas to our AG, along the lines of the sketch below.
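
This is a minimal sketch of the add-replica step; the AG name and endpoint URL are placeholders from my lab (the wizard generates the same kind of statement):

    -- On the current primary (W12SQL2016A): add a 2019 node as an async replica
    ALTER AVAILABILITY GROUP [MyAG]
    ADD REPLICA ON N'W16SQL2019A'
    WITH (ENDPOINT_URL = N'TCP://W16SQL2019A.mydomain.com:5022',
          AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
          FAILOVER_MODE = MANUAL);

    -- Then on W16SQL2019A itself:
    ALTER AVAILABILITY GROUP [MyAG] JOIN;

Repeat the same pair of statements for W16SQL2019B.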

Awesome, so far so good 🙂

Let’s move on….Connecting to one of the SQL 2019 instances, below is what I have. Oops!!


I had to join the individual databases to the AG manually from both of the newly added SQL 2019 replicas. Once that was done, below is how my dashboard looked.
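
Joining a database manually is a one-liner per database, per replica; the AG name here is a placeholder:

    -- On each new SQL 2019 replica, after the databases are restored WITH NORECOVERY
    ALTER DATABASE [American] SET HADR AVAILABILITY GROUP = [MyAG];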

Also, I changed the failover mode to manual, just to make sure the cluster has no control over failing over my AG; I want total control over how and when to fail over my AG until the entire upgrade process is complete. Hey, BTW, did you take full backups?

Did I mention I have a table called “McD” in the “American” database with one row in it? See below…
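
A trivial sanity check I’ll rerun after the failover (assuming the table lives in the dbo schema):

    USE [American];
    SELECT * FROM dbo.McD;  -- expecting that single row back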

Now comes the fun part. Set one of the SQL 2019 instances’ availability mode to synchronous commit and perform a controlled manual failover. In my case, I selected W16SQL2019A, changed it to synchronous mode, and failed my AG over from W12SQL2016A (which was my current primary) to W16SQL2019A.
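
In T-SQL terms, the same two moves look like this sketch (the AG name is a placeholder again):

    -- On the current primary (W12SQL2016A): flip the target replica to sync commit
    ALTER AVAILABILITY GROUP [MyAG]
    MODIFY REPLICA ON N'W16SQL2019A'
    WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT);

    -- Wait for it to report SYNCHRONIZED, then run this ON W16SQL2019A:
    ALTER AVAILABILITY GROUP [MyAG] FAILOVER;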

Awesome. At this point, W16SQL2019A has taken over the primary role, all the databases participating in the AG have been upgraded to SQL 2019, and the other SQL 2019 instance (W16SQL2019B in my case) will stay in sync from now on. The two SQL 2014 instances, however, will be in an unhealthy state; in fact, those databases become inaccessible at this point, since logs can’t be shipped from a higher (2019) to a lower (2014) version. Duh!!!!!…

Perfectoo! Also, I have my table and data intact, double perfectoo!

If you are curious, this is how the error log looked. You can clearly see the internal database version getting upgraded from SQL 2014 all the way to 2019.
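
You can also confirm the upgraded internal version without digging through the error log:

    -- Returns the internal database version number after the upgrade
    SELECT DATABASEPROPERTYEX(N'American', 'Version') AS internal_db_version;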

Below is a screenshot showing what to expect on the old SQL instances after failing the AG over to the newer version.

Time to do some cleanup now. I removed both SQL 2014 instances from the AG as replicas and boom… PRESTO!
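
The cleanup itself is one statement per old replica, run on the new primary (AG name is a placeholder):

    ALTER AVAILABILITY GROUP [MyAG] REMOVE REPLICA ON N'W12SQL2016A';
    ALTER AVAILABILITY GROUP [MyAG] REMOVE REPLICA ON N'W12SQL2016B';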

The only thing left now is to take care of the WSFC by evicting the old Windows Server 2012 R2 nodes and raising the functional level of the cluster to come out of mixed mode.

That’s it folks. Hope this is helpful. Cheers!

Simulating a Multi Subnet cluster for setting up SQL Server Always On Availability Groups – lab setup

In this article, we are going to see how to create a multi-subnet cluster spanning multiple subnets, for lab purposes. Creating such an environment should help in building Availability Groups that simulate a far replica residing in a different data center (subnet) and acting as a disaster recovery site, so you can learn and experiment with real-world scenarios in your lab.

Let’s get into action. Below is what we are going to set up in our lab to simulate a multi-subnet cluster environment and create an Always On Availability Group.

  • Two nodes (replicas) residing in my production/primary data center (subnet 192.168.1.x). Let’s say these two replicas are used for high availability; assuming low latency, since they reside in the same data center (probably in the same rack), I will set them up in synchronous mode with automatic failover.
  • A third node (replica) sitting in my disaster recovery data center, which is geographically dispersed (subnet 192.168.2.x). Assuming it’s connected over a WAN and used just for DR purposes, I would set this replica in asynchronous mode with manual failover. The listener sketch after this list is where the multi-subnet part really shows up.
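
The piece that makes a multi-subnet AG different is the listener carrying one IP per subnet. Here is a minimal sketch; the AG/listener names and IPs are placeholders for this lab:

    -- Run on the primary: a listener with one static IP in each subnet
    ALTER AVAILABILITY GROUP [MultiSubnetAG]
    ADD LISTENER N'AGListener' (
        WITH IP (('192.168.1.150', '255.255.255.0'),
                 ('192.168.2.150', '255.255.255.0')),
        PORT = 1433);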

For more reading, head over to the link below.

Simulating a Multi Subnet cluster for setting up SQL Server Always On Availability Groups – lab setup

AOAG – SSMS 17.4…Surprise!

Howdy folks! So, I was installing SQL Server 2017 on my new cluster lab setup and found something interesting in SSMS 17.4 when creating an AG. Before going there, let me show you what has changed with the SQL 2017 setup.

Note: This is not a step by step setup walk through.

As you can see, there is no option to install SSMS/SSRS from the original SQL media.

Well, so I downloaded the latest and greatest version of SSMS as of today (which is v17.4).

Note: On my server, the SQL DB engine installation completed in 3 minutes, whereas it took about 17 minutes to install SSMS!!

Well, now I was all set to create an Availability Group (from SSMS) and found something interesting in the wizard: it has a Read-Only Routing tab 🙂. Not sure when this was introduced, but the initial releases of SSMS 2017 didn’t have it, IIRC.

If you are curious, “Prefer secondaries” is the default backup setting.

So, below is a screenshot of how it looks if we navigate to the ROR tab. We can set up load balancing as well, right here from this wizard, which is an awesome addition IMHO.

We are not going to create routing lists through the GUI here; I will show you how to do that in one of my future posts. If you can’t wait and don’t mind T-SQL, a rough sketch follows.
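
This is only a hedged sketch of what the wizard generates under the covers; the AG and server names are placeholders, and the nested parentheses are what give you the load-balanced set:

    -- On the primary: give each secondary a routing URL (repeat per replica)
    ALTER AVAILABILITY GROUP [MyAG]
    MODIFY REPLICA ON N'NODE2'
    WITH (SECONDARY_ROLE (READ_ONLY_ROUTING_URL = N'TCP://NODE2.mydomain.com:1433'));

    -- Then the routing list: NODE2/NODE3 load-balanced, NODE1 as the fallback
    ALTER AVAILABILITY GROUP [MyAG]
    MODIFY REPLICA ON N'NODE1'
    WITH (PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = (('NODE2', 'NODE3'), 'NODE1')));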