Install and open/run Azure Data Studio in CentOS

In this blog post, let's see how to install Azure Data Studio in CentOS and where/how to locate the executable if you are unable to find it in the GUI. For this demo, I am using CentOS 7 as my operating system. Installing ADS in CentOS is a pretty straightforward process, as detailed in Microsoft's documentation here. My personal preference is to download the rpm package and issue "yum install". See below for a screenshot tour of the ADS installation on my machine.

sudo yum install azuredatastudio-linux-1.18.1.rpm

Where is Azure Data Studio in CentOS?
How to open Azure Data Studio in CentOS?

Okay…now what? Where is the executable, and how do I open Azure Data Studio in CentOS? The GUI in CentOS is not as user friendly as in some other Linux distributions (Mint, for example, or a few Ubuntu flavors). In Windows you can locate the program in the Start menu, and even in a few desktop-oriented Linux distributions it's extremely easy to just search the application center, but that was not the case for me in CentOS 7.

Well, I am going to use "which" to locate the azuredatastudio executable/command.
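Something along these lines (the exact symlink target may vary with the ADS version you installed):

which azuredatastudio
ls -l /usr/bin/azuredatastudio    # shows where the symlink actually points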

As you can see, it's already in /usr/bin as a symlink to its original location. Okay, so all I have to do is type "azuredatastudio" in my console from wherever I want to, and voilaaaaaaaaa 🙂

Happy Monday!

How to install MongoDB in RHEL & in a Docker container.

In this blog post, let's see how to set up and configure MongoDB in a Linux environment (I am using CentOS in my lab, but the steps are exactly the same for RHEL) and also how to spin up MongoDB container(s). I will be using Docker for all my containers.

Setup MongoDB on a Linux machine

Download the MongoDB repository file, or create it manually in the yum.repos.d folder with content as shown below.
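For reference, here's a minimal sketch of what that repo file can look like. I am assuming the 4.2 enterprise release train here, so adjust the version to whatever release you actually need:

[mongodb-enterprise-4.2]
name=MongoDB Enterprise Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-enterprise/4.2/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-4.2.asc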

Go to the official MongoDB documentation for details on how to get or pin a specific release.
Now that I have my repository configured, all I have to do is run the below command.

sudo yum install -y mongodb-enterprise


Great. At this point I've installed MongoDB Enterprise on my machine, but the service is in a stopped state as shown below, and I had to bring it online.
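Bringing it online is just a couple of systemctl commands:

sudo systemctl status mongod     # fresh install, so this shows inactive (dead)
sudo systemctl start mongod      # bring the service online
sudo systemctl enable mongod     # optional: start mongod automatically on boot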

That is all it takes to set up MongoDB on a Linux machine with the default values. In most cases we may want to change the default paths etc. per the requirements and standards in place. Well, let's check them out and customize per our needs…

Default directories:
/var/lib/mongo – This is where your database files will be created.
/var/log/mongodb – This is where your log file(s) will be created.
Note: These are error log files; don't confuse them with the log files term we use in our SQL Server world 🙂 In Mongo, write-ahead logging is achieved by something called journaling, and journal files are created in a separate folder under /var/lib/mongo.
The Config File:
/etc/mongod.conf
This is the file that holds all the paths, configuration settings, parameter values and whatnot. There are hundreds of things that can be configured; the official MongoDB documentation is your best friend here.
For example, to set up a data and/or log directory other than the defaults, you can just create the directories and specify them in the /etc/mongod.conf file (see the snippet after the list below).
storage.dbPath to specify a new data directory path (e.g. /mounts/datadir/directory)
systemLog.path to specify a new log file path (e.g. /mounts/logs/mongo/mongod.log)
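As a quick sketch (the paths here are just the examples from above), the relevant portion of /etc/mongod.conf would look something like this:

storage:
  dbPath: /mounts/datadir/directory
systemLog:
  destination: file
  logAppend: true
  path: /mounts/logs/mongo/mongod.log

Restart the service (sudo systemctl restart mongod) after saving the file so the new paths take effect.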
Note: The user running MongoDB ("mongod" is the default user/group) must have access to these directories. You can make it the owner of your MongoDB folders by running the below command.
chown -R mongod:mongod /your/dbpath/

Create a MongoDB Docker container

Now let's see how to spin up a MongoDB container in a Docker environment. The first step is to pull the required image locally and then build the container. I am using a brand new machine with no images downloaded so far. So, to begin with, I searched for the official MongoDB image and issued a docker run command as shown below.
Note: docker run will also take care of pulling the image if you don't have it locally. Or you can issue a docker pull and then a docker run…it's up to you.
Command:
docker run -d -p 27017:27017 -v data:/data/db mongo
-d: detached mode, to get control of my console window back (optional).
-p: maps port 27017 on my host to MongoDB's default port 27017 inside the container.
-v: creates a named volume ("data") mounted at /data/db, which is where the official mongo image keeps its data files (optional, but needed if you want the data to survive container removal).
And you know what, we are done! That’s all it takes….you have everything ready in less than a minute 🙂
To connect to my MongoDB container from this host, all I had to do was issue "mongo" and voila, I am connected, since I am using the default port of 27017.
Note: I have mongodb client tools already installed on this machine.
Let's say I want to create a few more instances of MongoDB on this machine; all I have to do is leverage Docker and spin up containers listening on different host ports (each with its own volume) as shown below.
docker run -d -p 27018:27017 -v data1:/data/db mongo
docker run -d -p 27019:27017 -v data2:/data/db mongo
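To connect to those extra instances from the host, just point the mongo shell at the mapped port:

mongo --port 27018    # second container
mongo --port 27019    # third container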
Cheers!

How to install docker engine in CentOS 8

In this short blog post, let's see how to install the Docker engine in CentOS. I thought it would be a pretty straightforward process of issuing a command or two and being done with it, but I ran into a small hurdle that I would like to share.

So, I was following the official documentation (https://docs.docker.com/engine/install/centos/) from Docker for installing on my CentOS 8 machine. As you can see below, I had my repo all set up and ready to go.
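For reference, the repo setup from that documentation is just:

sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo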

When I tried to install as per the documentation, I ran into dependency issues with the containerd.io package (as of this writing, CentOS 8 doesn't have a compatible containerd.io package available).

Error:
Problem: package docker-ce-3:19.03.9-3.el7.x86_64 requires containerd.io >= 1.2.2-3, but none of the providers can be installed.

We have a couple of workarounds here…
Install the CentOS 7 containerd.io package manually and then install the docker-ce engine, and voila (this is what I am going to show you below),
or
use the --nobest option, as suggested by yum, to install an appropriate version of the containerd.io package.
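If you go the --nobest route instead, the install would look something like this:

sudo yum install docker-ce docker-ce-cli containerd.io --nobest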

Alright, now it's time to manually download and install the containerd.io package. Get the latest stable package details for CentOS 7 from here:
https://download.docker.com/linux/centos/7/x86_64/stable/Packages/

sudo yum install -y https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.13-3.2.el7.x86_64.rpm

Great, now it's time to install the docker-ce package, which also takes care of the dependent docker-ce-cli package.
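With containerd.io already in place, that is simply:

sudo yum install -y docker-ce docker-ce-cli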

Hurray! Now it’s time to start the docker engine.
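Again, just a couple of systemctl commands, plus an optional sanity check:

sudo systemctl start docker      # start the engine now
sudo systemctl enable docker     # optional: start docker automatically on boot
sudo docker run hello-world      # optional: verify the engine can actually run a container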

Perfectoooooooo……my CentOS machine is all set for hosting Linux-based containers.

Happy containerization folks!

Cannot connect to WMI provider - SQL Server Configuration Manager.

On one of our SQL Servers, I was planning to make some changes to the network configuration and was welcomed with the below error message when I tried to open SQL Server Configuration Manager. FYI…this is a SQL Server 2016 instance running on a Windows Server 2016 machine.

cannot connect to wmi provider. you do not have permission or the server is unreachable. note that you can only manage sql server 2005 and later servers with sql server configuration manager.invalid class [0x80041010]

This error message is pretty misleading IMHO; at a high level, I am a local admin on the box and I am trying to manage a SQL 2016 instance 🙂

Before proceeding to the final resolution, I would like you folks to read these statements from the Microsoft documentation.

“The Managed Object Format (MOF) compiler parses a file containing MOF statements and adds the classes and class instances defined in the file to the WMI repository. MOF files are usually automatically compiled during the installation of the systems with which they are provided, but you can also compile MOF files by using this tool.”

“To make the contents of a MOF file effective (by placing them in the CIM Repository), the file must be compiled. MOF files are usually automatically compiled during the installation of the systems with which they are provided, but you can also compile MOF files by using the MOF Compiler (Mofcomp.exe). The MOF Compiler is available in the %Windir%\System32\wbem directory. You must specify the MOF file as the parameter of the MOF Compiler.”

References:
https://docs.microsoft.com/en-us/windows/win32/wmisdk/compiling-mof-files
https://docs.microsoft.com/en-us/windows/win32/wmisdk/mofcomp?redirectedfrom=MSDN

In my case, for whatever reason, the SQL Server installer couldn't compile and register the .mof files correctly. Whether this happened during some patching or when SQL was initially installed on this box, I have no idea.

Resolution:
Navigate to the Shared features folder from cmd (run as admin) and run the below command:
mofcomp sqlmgmproviderxpsp2up.mof

In my case, since it's SQL 2016, the path for the "sqlmgmproviderxpsp2up.mof" file is
C:\Program Files (x86)\Microsoft SQL Server\130\Shared.
For SQL 2019, that would be:
C:\Program Files (x86)\Microsoft SQL Server\150\Shared.
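So, putting it together for my 2016 instance (from an elevated cmd prompt):

rem recompile the SQL WMI provider MOF and add it back to the WMI repository
cd "C:\Program Files (x86)\Microsoft SQL Server\130\Shared"
mofcomp sqlmgmproviderxpsp2up.mof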

Anyways…once the .mof file was parsed successfully, I was able to open and manage my SQL instance without any issues using SQL Server Configuration Manager.

Always On availability group(s) – Rolling upgrades

Aim: To upgrade/migrate (side-by-side) SQL Server 2014 availability group(s) running on Windows Server 2012 R2 to SQL Server 2019 running on Windows Server 2016 with the least amount of downtime.

A couple of years ago, I wrote a blog post explaining how to upgrade the Windows OS from 2012 R2 to 2016 on nodes participating in failover clustering with minimal downtime using the rolling upgrade technique. In this blog post I will be sharing something similar, but throwing SQL Server availability groups into the mix. So let me briefly explain what we are trying to achieve here.

Current environment:
I've got a two-node failover cluster (Windows Server 2012 R2) hosting a SQL Server 2014 Always On availability group in synchronous-commit mode. I have a listener configured for my applications to connect to. These replicas are running on the latest build of SQL 2014 as of the date this post is published.

As you can see, W12SQL2016A/B are my two replicas (nodes), which are running Windows 2012 R2 + SQL 2014.

Originally I thought of installing SQL 2016 (hence the host names), but ended up installing SQL 2014 for now based on our specific requirement. I didn't want to change the host names, since I had my Windows failover cluster all set up by this time, and I really didn't want to deal with fixing any annoying errors that might pop up from messing with the host names of my nodes. Anyways…the bottom line is I have a SQL 2014 AG running on Windows 2012 R2 which needs to be upgraded/migrated to SQL 2019 running on Windows 2016.


Listener Name: RestAGL

Goal:
To upgrade these SQL instances to 2019 running on Windows Server 2016 with very minimal downtime and no configuration changes for the app teams, assuming in-place upgrades are not allowed.

What's the high-level plan?

  • Take full backups.
  • Add the W16SQL2019A and W16SQL2019B nodes to the same Windows cluster, leveraging mixed mode.
  • Install SQL 2019 and add these two nodes as replicas at the SQL Server AOAG layer.
  • Join the databases and let the magic happen.
  • During the final cutover date/time, fail over to SQL 2019 and remove the old replicas from the AG.
  • Evict both Windows 2012 R2 nodes from the cluster and raise the cluster functional level to 2016.

Now, let’s see this in action one step at a time.

Below is a screenshot of all the SQL instances I will be working on. To begin with, I have two brand new SQL Server 2019 standalone instances (W16SQL2019A and W16SQL2019B), on which I just enabled the HADR feature.

Let's go. I added the new Windows 2016 nodes to the existing Windows failover cluster, which is running at the 2012 R2 functional level.

Note: You don't want to run the WSFC in mixed mode for long periods; Microsoft might not support you if you stay in mixed mode for more than 4 weeks. Mixed mode exists only so you can perform rolling upgrades and keep your systems truly highly available. Wrap up the entire process in a day or two and be done with it.

This is expected. For more details on this, hop over to the blog post I linked at the beginning of this post.

Now it’s time to jump into SQL Server to add these servers as replicas into our AG.

Awesome, so far so good 🙂

Let’s move on….Connecting to one of the SQL 2019 instances, below is what I have. Oops!!


I had to join the individual databases to the AG manually from both of the newly added SQL 2019 replicas. Once that was done, below is how my dashboard looked.

Also, I changed the failover mode to manual just to make sure the cluster has no control over failing over my AG. I want total control over how and when my AG fails over until the entire upgrade process is complete. Hey, BTW, did you take full backups?

Did I mention I have a table called "McD" in the "American" database with one row in it? See below…

Now comes the fun part. Set the availability mode of one of the SQL 2019 instances to synchronous commit and perform a controlled manual failover. In my case, I selected W16SQL2019A, changed it to synchronous mode, and failed over my AG from W12SQL2016A (which was my current primary) to W16SQL2019A.

Awesome. At this point, W16SQL2019A has taken over the primary role, all the databases participating in the AG have been upgraded to SQL 2019, and the other SQL 2019 instance (W16SQL2019B in my case) will stay in sync from now on. The two SQL 2014 instances, however, will be in an unhealthy state; in fact those databases become inaccessible at this point, since logs can't be shipped from a higher (2019) version to a lower (2014) version. Duh!!!!!….

Perfectoo! Also, I have my table and data intact, double perfectoo!

If you are curious, this is how the error log looked. You can clearly see the internal database version getting upgraded from SQL 2014 all the way to 2019.

Below is a screenshot showing what to expect on the old SQL instances after failing over the AG to the newer version.

Time to do some cleanup now. I removed both SQL 2014 instances from the AG as replicas and boom……PRESTO!

The only thing left now is to take care of the WSFC by evicting the old Windows Server 2012 R2 nodes and raising the cluster functional level to come out of mixed mode.

That’s it folks. Hope this is helpful, Cheers!