This catalog is the index of blog posts, in structured order:

. Azure.Pricing:

. Azure.Powershell:
Frequently used Azure SQL PowerShell commands:
Point-in-time restore of a deleted DB:

. Azure.SQL
. Azure.SQL.Frequently used SQL commands:
Best practices:

. SQLServer.FrequentlyUsedCommands:

. SQLServer.Patch
SQL Server patches:

. SQLServer.SSMS
Object Explorer details:

Posted in Azure, Powershell, SQL, Uncategorized

Fix: SQL Server – using X logical processors based on SQL Server licensing, not using all processors

I had an experience where a server with 36 cores was using only 20 cores for SQL Server 2012.

The root cause: the instance was running Enterprise Edition under the Server+CAL license model, which limits SQL Server to 20 physical cores.

Research indicates the edition can be upgraded (for example, to Enterprise: Core-based Licensing) with minimal impact, but remember that SQL Server restarts immediately after the upgrade.
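A quick way to confirm the edition in use and how many schedulers SQL Server actually brought online (generic DMV queries, not tied to this specific incident):

```sql
-- Report the edition; Enterprise (Server+CAL) is the one capped at 20 physical cores
SELECT SERVERPROPERTY('Edition') AS Edition;

-- Count schedulers SQL Server is actually using (one per logical processor in use)
SELECT COUNT(*) AS OnlineSchedulers
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE';
```

If OnlineSchedulers is lower than the server's logical processor count, check the errorlog for the "using X logical processors based on SQL Server licensing" message.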


Posted in Uncategorized, SQL, Productivity, SQL Server

Search Windows Server patches using KBXXXX

Search the Microsoft Update Catalog using the KB number to find the patch information and download link.


Posted in Productivity, Uncategorized

What is the Azure SQL Database elastic pool split instance problem?

The Azure SQL Database elastic pool split instance problem is a very rare phenomenon that can distort the resource usage metrics of an elastic pool.

In this post I will cover how resource metrics are impacted, a sample monitor, and an example of the impact.

How resource metrics are impacted:
The sys.elastic_pool_resource_stats view in the master database reports resource usage metrics for all elastic pools on the server; for each elastic pool, each row covers a 15-second interval.
Any given point in time should be represented by a single row, but when a pool is impacted there are 2 rows reporting different metric values for the same point in time.

In the example below, the impacted/unhealthy elastic pool has 2 rows covering the point in time 2019/05/17 06:33:20; one of them indicates 61.76% storage utilization and the other indicates 0.06%.

Sample SQL:

SELECT TOP 100 *
FROM [sys].[elastic_pool_resource_stats]
WHERE end_time > DATEADD(dd, -7, GETDATE())
AND elastic_pool_name = 'SampleElasticPool'
ORDER BY start_time DESC
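A more direct check for the split condition (a sketch; the pool name is a placeholder) is to look for 15-second intervals reported by more than one row:

```sql
-- Intervals with duplicate rows indicate a split elastic pool
SELECT elastic_pool_name, start_time, COUNT(*) AS rows_for_interval
FROM sys.elastic_pool_resource_stats
WHERE end_time > DATEADD(dd, -7, GETDATE())
  AND elastic_pool_name = 'SampleElasticPool'
GROUP BY elastic_pool_name, start_time
HAVING COUNT(*) > 1
ORDER BY start_time DESC;
```

A healthy pool returns no rows from this query.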


Impacted elastic pool metrics sample:

Expected healthy pool metrics Sample:

Sample Monitor:
Validate the number of metrics collected for each elastic pool every 15 minutes. There should be 60 records per 15 minutes (one per 15-second interval); when the pool is impacted we see ~120 or more metrics per 15 minutes.

-- Execute the following SQL against the master database to monitor impact over the last 2 hours

;WITH CTE1 AS
(SELECT elastic_pool_name, datepart(month,end_time) * 24 * 31 * 4 + datepart(dd,end_time) * 24 * 4 + datepart(hh,end_time) * 4 + datepart(mi,end_time) / 15 AS TimeID, max(end_time) AS MaxDateTime, count(*) AS MetricCountin15Minutes
FROM [sys].[elastic_pool_resource_stats]
WHERE end_time > dateadd(hh,-2,getdate())
GROUP BY elastic_pool_name, datepart(month,end_time) * 24 * 31 * 4 + datepart(dd,end_time) * 24 * 4 + datepart(hh,end_time) * 4 + datepart(mi,end_time) / 15)
SELECT * FROM CTE1
WHERE MetricCountin15Minutes > 100
ORDER BY elastic_pool_name, TimeID DESC

Example of impact:
Imagine an automated process that validates available storage and adds more databases to an elastic pool. If the process executes at 2019/05/17 06:33:25 and reads the row showing only 0.06% storage used, it adds more databases while 62% of the pool's storage is actually utilized, resulting in unexpected behavior.
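If automation must read these metrics, a defensive sketch (my own assumption, not an official workaround) is to collapse duplicate rows by taking the worst-case value per interval:

```sql
-- Worst-case storage utilization per 15-second interval, ignoring split duplicates
SELECT elastic_pool_name, start_time,
       MAX(avg_storage_percent) AS worst_storage_percent
FROM sys.elastic_pool_resource_stats
WHERE elastic_pool_name = 'SampleElasticPool'
  AND end_time > DATEADD(hh, -1, GETDATE())
GROUP BY elastic_pool_name, start_time;
```

Taking the MAX across duplicates errs on the side of caution for a process that adds databases based on free storage.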

Additional information:
This problem is related to metric collection only; databases remain available to use during the impact.
This phenomenon is common during reconfiguration of elastic pools but normally disappears very quickly, most of the time with zero impact.
If the issue persists for hours or days, it is beneficial to engage Microsoft.
The probability of this issue occurring is very low, and its impact is low unless you have automation that depends on these metrics.
This issue was first identified in 2017; Microsoft fixed it, but it resurfaced at least once in mid-2019.

Most frequently used Azure SQL database commands:

Posted in Azure, knowledge, Productivity, SQL, SQL Query, SQL Server, Uncategorized

T-SQL to find the high availability configuration of all databases in a SQL Server instance

T-SQL to list all databases on a SQL Server instance along with their Always On and mirroring configuration.

-- TSQL to find the High availability configuration of all databases 
-- Created by Vijay 
-- Created on 4/25/2019 

IF OBJECT_ID('tempdb..#DatabaseInfo') IS NOT NULL
    DROP TABLE #DatabaseInfo

SELECT
	ISNULL(ar.replica_server_name, @@SERVERNAME) AS AG_replica_server_name,
	d.name AS database_name,
	ars.role_desc AS AG_role_desc,
	ag.name AS ag_name,
	-- drs.is_local AS ag_is_local, 
	drs.synchronization_state_desc AS AG_synchronization_state_desc, 
	drs.is_commit_participant AS AG_is_commit_participant, 
	ar.availability_mode_desc AS AG_availability_mode_desc,
	ar.failover_mode_desc AS AG_failover_mode_desc,
	dm.mirroring_role_desc
INTO #DatabaseInfo
FROM sys.dm_hadr_database_replica_states AS drs
INNER JOIN sys.availability_databases_cluster AS adc 
	ON drs.group_id = adc.group_id AND 
	drs.group_database_id = adc.group_database_id
INNER JOIN sys.availability_groups AS ag
	ON ag.group_id = drs.group_id
INNER JOIN sys.availability_replicas AS ar 
	ON drs.group_id = ar.group_id AND 
	drs.replica_id = ar.replica_id
INNER JOIN sys.dm_hadr_availability_replica_states AS ars 
	ON ars.replica_id = drs.replica_id AND ars.group_id = drs.group_id
RIGHT JOIN sys.databases d ON d.name = adc.database_name
LEFT JOIN sys.database_mirroring dm ON dm.database_id = d.database_id
WHERE d.name NOT IN ('master', 'tempdb', 'model', 'msdb')
AND ISNULL(drs.is_local, 1) = 1 

SELECT 'AllDatabaseConfiguration' as Tableinfo, * FROM #DatabaseInfo 

SELECT 'StandaloneDatabases on Server' as Tableinfo_Standalone, * FROM #DatabaseInfo 
WHERE AG_role_desc IS NULL AND mirroring_role_desc IS NULL
ORDER BY database_name

SELECT 'Availability Group Manual Failover configuration' as Tableinfo_AG_ManualFailover, * FROM #DatabaseInfo 
WHERE AG_failover_mode_desc = 'MANUAL'

SELECT 'Availability Group Automatic Failover configuration' as Tableinfo_AG_AutomaticFailover, * FROM #DatabaseInfo 
WHERE AG_failover_mode_desc = 'AUTOMATIC'


SELECT 'Mirroring databases - Manual action required during DR' as Tableinfo_MirroringDatabases, * FROM #DatabaseInfo 
WHERE 	mirroring_role_desc is NOT NULL

select DISTINCT ISNULL(replica_server_name,@@SERVERNAME) AS Servers_InCluster from sys.availability_replicas
SELECT DISTINCT ISNULL(mirroring_partner_instance,@@SERVERNAME) AS Servers_InCluster FROM sys.database_mirroring

IF OBJECT_ID('tempdb..#DatabaseInfo') IS NOT NULL
    DROP TABLE #DatabaseInfo

#mssql #tsql #AlwaysOn #Mirroring #FailoverMode #HighAvailability

Posted in Productivity, SQL, SQL Query, SQL Server, Uncategorized

Add-on storage (Max storage choices) for Azure SQL elastic pools/databases – a great cost saving opportunity

If you have been using Azure SQL elastic pools or databases (Platform as a Service – PaaS), I strongly recommend checking your workload and validating the Max storage choices!

What changed:
Traditionally, each Azure SQL database and elastic pool had fixed storage at each tier and capacity. For example, to host a 300 GB database with very minimal CPU and IO requirements, you had to choose a Premium tier, which costs ~145% more than the Standard tier for the same DTU.
A new feature, Add-On Storage, allows additional storage for databases and elastic pools at the same CPU, IO, and memory capacity. Add-on storage costs ~8% of the cost for standard elastic pools. The feature is now stable and is surfaced as "Max storage choices".

How to save cost:
If a database/elastic pool is not effectively using its DTU (CPU/IO/worker threads), bring it down to a lower capacity and keep its storage using Max storage choices.
Example: a 400 eDTU standard pool with 400 GB storage that uses a maximum of 50% DTU; bring it down to 200 eDTU with 400 GB as max storage, and this saves ~46% of the cost!

How to determine if you can gain cost savings with this feature?
It depends on workload and business requirements; I recommend a few metrics to think about:

  • Average eDTU: the average is not a good baseline for deciding on a downgrade, but it is useful as a comparison factor.
  • Max eDTU: a good measure for mission-critical business scenarios. Decisions based on Max eDTU are more reliable.
  • Percentile: my favorite measure. Use case: I have a 30-minute maintenance window every week in which I run something that uses 100% CPU for 5 minutes. Lowering the capacity increases that step to 10 minutes, still within the window. I prefer to ignore those ~6 minutes a week during analysis; the 99.94th-percentile value ignores the peak 6-minute metric and picks the max value from the remaining windows.
  • Weekly average of each day's Max eDTU: I have no specific business case, but I have used it at times.
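The percentile approach above can be sketched in T-SQL (assuming a placeholder pool name, a 14-day window, and querying sys.elastic_pool_resource_stats in the master database):

```sql
-- Compare the 99.94th percentile with the max of average CPU percent
SELECT DISTINCT
    PERCENTILE_CONT(0.9994) WITHIN GROUP (ORDER BY avg_cpu_percent) OVER () AS cpu_p9994,
    MAX(avg_cpu_percent) OVER () AS cpu_max
FROM sys.elastic_pool_resource_stats
WHERE elastic_pool_name = 'SampleElasticPool'
  AND end_time > DATEADD(dd, -14, GETDATE());
```

If cpu_p9994 sits well below the next-lower capacity while cpu_max spikes, the peaks are short enough to consider a downgrade.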


A few examples (for reference only):

  • 2 standard 400 eDTU elastic pools with a maximum eDTU consumption of 42%: merge both into a single elastic pool and delete the second pool.
  • 3 elastic pools with a 99.98th-percentile value below 52: rebalance the load from the 3rd pool onto the other 2 and remove the third elastic pool.
  • 12 pools with uneven load distribution: move all databases from one of the pools to the remaining pools based on load, and repeat the process until the load is better balanced. I recommend moving a few databases at a time and eliminating one pool per iteration.

There is always an unknown factor, since a few metrics are not exposed; test your application thoroughly before changing production. For example, Data IO throughput is 20 times higher on a premium pool compared to a standard pool, so lowering from Premium to Standard could impact your database.


Useful links:

Posted in Azure, Productivity, SQL, Uncategorized, Useful Link

Visual Studio Code – What I liked!

Visual Studio Code is the new code editor from Microsoft. Following are a few reasons why I started using it and why I like it!

* Cross-platform availability: VS Code is available on Windows, Linux, and macOS, providing the same experience across operating systems.
* IntelliSense is rich and supports multiple languages – at least 45, including SQL, PowerShell, R, Dockerfile, C#, Python, and JavaScript.
* Version control integration is very helpful. I use GitHub; I have stopped using the command line and depend on VS Code now. I have also started using it for file comparison.
* A single place to manage almost all my text files: I use VS Code for SQL, PowerShell, batch files, C#, Python, and everything else.
* Debugging features are very helpful.
* Extensions are abundant; verify @
* Configuration: plenty of settings to customize it the way you like!

Other useful information:
* Keyboard shortcuts: (Windows)

Start using it; reference to get started: @

Posted in Productivity, Reference, Uncategorized, Visual Studio

SQL Recursive CTE example

* Recursive CTE

WITH EmpsCTE AS
(
  SELECT empid, mgrid, firstname, lastname
  FROM HR.Employees
  WHERE empid = 2
  UNION ALL
  SELECT C.empid, C.mgrid, C.firstname, C.lastname
  FROM EmpsCTE AS P
    JOIN HR.Employees AS C
      ON C.mgrid = P.empid
)
SELECT empid, mgrid, firstname, lastname
FROM EmpsCTE;
Posted in Script, SQL, SQL Query