Sunday, May 20, 2012

Revisiting LEFT OUTER JOIN


It’s time to revisit our old friend, the LEFT OUTER JOIN.
You ask “why?”
I say because it can give you unexpected results (rows) if you are not careful about where you put the conditions.
Let’s first create a few tables:

--Stores Product Data (we are going to store different cars)
CREATE TABLE [dbo].[Product](
      ProductID INT IDENTITY(1,1)  PRIMARY KEY
      ,  Name VARCHAR(200)
      , Description VARCHAR(MAX)
)

--Stores Parts of different cars
CREATE TABLE [dbo].[Part](
      PartID INT IDENTITY(1,1) PRIMARY KEY
      , Name VARCHAR(30)   
)

--Stores relationship between Product and its Parts
CREATE TABLE [dbo].[ProductPart](
      ProductID INT FOREIGN KEY REFERENCES Product(ProductID)
      , PartID INT FOREIGN KEY REFERENCES Part(PartID)
)

And now populate these tables with some data

INSERT INTO Product(Name, Description)
VALUES('Car', 'it will take you to work'), ('Sports Car', 'it will make you feel great'), ('Imaginary Car', 'it flies and stays in your dreams')


INSERT INTO Part(Name)
VALUES('Wheel'),('Door'), ('Boot'), ('55 inch tv'), ('Speed'), ('Thrill')


INSERT INTO ProductPart(ProductID, PartID)
VALUES(1,1),(1,2),(1,3),(2,1),(2,2),(2,5),(2,6)


Let us examine what the left outer join really means based on the conditions:

SELECT DISTINCT P.Name AS Product
    , PT.Name AS Part
FROM Product P
LEFT OUTER JOIN ProductPart PPT
ON P.ProductID = PPT.ProductID
LEFT OUTER JOIN Part PT
ON PPT.PartID = PT.PartID
AND PT.Name = 'Thrill'

In the above query you are trying to get the names of all the products, together with the ‘Thrill’ part for any product that has it.
When you run this query against the sample data above, the following result is returned:

Product        Part
-------------  ------
Car            NULL
Imaginary Car  NULL
Sports Car     NULL
Sports Car     Thrill

Now if you modify the above query a little bit and place the condition in the WHERE clause:

SELECT DISTINCT P.Name AS Product
    , PT.Name AS Part
FROM Product P
LEFT OUTER JOIN ProductPart PPT
ON P.ProductID = PPT.ProductID
LEFT OUTER JOIN Part PT
ON PPT.PartID = PT.PartID
WHERE
      PT.Name = 'Thrill'




As you may have guessed, even though this query is written using a LEFT OUTER JOIN, it actually behaves as an INNER JOIN because of the condition in the WHERE clause (rows where PT.Name is NULL are filtered out). The result of the above query is:

Product     Part
----------  ------
Sports Car  Thrill

If you look at the execution plan, the query optimizer has actually run this as an INNER JOIN.


Let’s modify the above query a little bit:

SELECT P.Name AS Product
    , PT.Name AS Part
FROM Product P
LEFT OUTER JOIN ProductPart PPT
ON P.ProductID = PPT.ProductID
LEFT OUTER JOIN Part PT
ON PPT.PartID = PT.PartID
WHERE
     PT.Name IS NULL OR PT.Name = 'Thrill'



When you run this query, because we have specified an IS NULL condition in the WHERE clause, the query optimizer will run it as a LEFT OUTER JOIN. As expected, it returns all the cars which either have no parts specified or have ‘Thrill’ as one of their parts:

Product        Part
-------------  ------
Sports Car     Thrill
Imaginary Car  NULL


So it’s really important that you put the condition in the right place (depending on your needs).
The purpose of this article was just to brush up on the LEFT OUTER JOIN.
If you have any questions or comments, please don’t hesitate to leave them below.



   




Monday, April 23, 2012

Dynamically create a temp table with variable number of columns

While programming with SQL we often come across a situation where we want to pass data to a stored procedure as a single string, a concatenation of different values, and have the stored procedure construct a table out of this string to perform relational operations. Let’s look at one such scenario.
Scenario:
Say you have the following tables:
CREATE TABLE [dbo].[Product](
ProductID INT IDENTITY(1,1)  PRIMARY KEY
,  Name VARCHAR(200)
, Description VARCHAR(MAX)
)

CREATE TABLE [dbo].[Part](
      PartID INT IDENTITY(1,1) PRIMARY KEY
      , Name VARCHAR(30)     
)

CREATE TABLE [dbo].[ProductPart](
      ProductID INT FOREIGN KEY REFERENCES Product(ProductID)
      , PartID INT FOREIGN KEY REFERENCES Part(PartID)
)

Let’s populate these tables with test data
INSERT INTO Product(Name, Description)
VALUES('Car', 'it will take you to work')


INSERT INTO Part(Name)
VALUES('Wheel'),('Door'), ('Boot'), ('Monitor')
These tables will now have the following data

Let us suppose you want to store data in the ProductPart table: you want to record that a car has Wheels, Doors and a Boot.

Traditionally you would write a stored procedure that takes a ProductID and a PartID as parameters and saves them into the ProductPart table one pair at a time.
It would look like this:
CREATE PROCEDURE [dbo].[ProductPartSave]
(
      @ProductID INT
      , @PartID INT
)
AS
INSERT INTO [ProductPart]
SELECT @ProductID, @PartID

It will work fine, except that you will have to make three calls to the database in our case.
There is another way: you can create a stored procedure that takes two parameters, the first being @ProductID and the second @PartIDs (a delimited string of part IDs).



The definition of our new stored procedure is:

CREATE PROCEDURE [dbo].[ProductMultiplePartsSave]
(
      @ProductID INT
      , @PartIDs VARCHAR(MAX)
)
AS

DECLARE @Parts TABLE(PartID INT)

INSERT INTO @Parts(PartID)
EXEC [GetDynamicTable] @DataList = @PartIDs, @ColumnSeperator = ',', @RowSeperator = '|', @NumberOfColumns = '1'

INSERT INTO [ProductPart](ProductID, PartID)
SELECT @ProductID
      , PartID
FROM @Parts

As you can see, we have used a stored procedure called [dbo].[GetDynamicTable] to build a table from the string.
The code for this SP is:

CREATE PROCEDURE [dbo].[GetDynamicTable]
(
      @DataList AS VARCHAR(MAX),
      @ColumnSeperator AS CHAR(1),
      @RowSeperator AS CHAR(1),
      @NumberOfColumns INT
)
AS   
DECLARE @DynamicTableSQL NVARCHAR(MAX)
DECLARE @ColumnCount INT
DECLARE @Row NVARCHAR(MAX)


SET @DynamicTableSQL = 'DECLARE @DynamicTable TABLE('
SET @ColumnCount = 1
WHILE @ColumnCount <= @NumberOfColumns
BEGIN
      SET @DynamicTableSQL = @DynamicTableSQL + 'Column' + CAST(@ColumnCount AS VARCHAR(10))+ ' VARCHAR(MAX) ,'
      SET @ColumnCount = @ColumnCount + 1
END
SET @DynamicTableSQL = SUBSTRING(@DynamicTableSQL, 0, LEN(@DynamicTableSQL)) -- remove the trailing comma
SET @DynamicTableSQL = @DynamicTableSQL + ') '

SET @DataList = REPLACE(@DataList, '''', '''''')
     
SET @DynamicTableSQL = @DynamicTableSQL + ' INSERT INTO @DynamicTable '
     
SELECT @DataList = @DataList + @RowSeperator

DECLARE @PosA INT, @PosB INT, @PosC INT

SELECT @PosA = 1
SELECT @PosC = CHARINDEX(@RowSeperator, @DataList, @PosA)

SET @Row = SUBSTRING(@DataList, @PosA, @PosC - @PosA)      

IF (LEN(@Row) > 0)
BEGIN
      SET @Row = REPLACE( @Row, @ColumnSeperator, ''',''')
      SET @DynamicTableSQL = @DynamicTableSQL + 'SELECT ''' + @Row + ''' '   
END

SELECT @PosA = @PosC + 1
SELECT @PosC = CHARINDEX(@RowSeperator, @DataList, @PosA)


WHILE @PosC > 0
BEGIN
     
      SET @Row = SUBSTRING(@DataList, @PosA, @PosC - @PosA)
      IF (LEN(@Row) > 0)
      BEGIN
            SET @DynamicTableSQL = @DynamicTableSQL + ' UNION ALL '
           
            SET @Row = REPLACE( @Row, @ColumnSeperator, ''',''')
            SET @DynamicTableSQL = @DynamicTableSQL + 'SELECT ''' + @Row + ''' '               
      END
     
      SELECT @PosA = @PosC + 1
      SELECT @PosC = CHARINDEX(@RowSeperator, @DataList, @PosA)
END
     
SET @DynamicTableSQL = @DynamicTableSQL + ' SELECT * FROM @DynamicTable '

EXEC sp_executesql @DynamicTableSQL -- lower case so it also works on case-sensitive servers

The above stored procedure takes four parameters:
  1. @DataList: a string containing the concatenated data
  2. @ColumnSeperator: a character which is the column delimiter
  3. @RowSeperator: a character which is the row delimiter
  4. @NumberOfColumns: the number of columns the dynamic table should have

Let’s first see how we can insert values into ProductPart table using our new stored procedures.

As you can see, we inserted multiple rows into the ProductPart table with just one call to the [ProductMultiplePartsSave] stored procedure.
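Since the original screenshot is missing, here is a sketch of such a call. The part IDs 1, 2 and 3 are assumed to exist in the Part table; the separators match the ones hard-coded inside [ProductMultiplePartsSave] (',' for columns, '|' for rows):

```sql
-- Each '|'-separated segment is one row, and each row holds a single column
EXEC [dbo].[ProductMultiplePartsSave]
      @ProductID = 1
    , @PartIDs = '1|2|3'

-- ProductPart now contains the pairs (1,1), (1,2) and (1,3)
```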


[dbo].[GetDynamicTable] is very powerful in terms of creating temp tables on the fly with just a change of parameters. Let’s look at a few different uses of this SP:
In the following image you can see how it can be used to get a two column table

In the following image you can see it returning a table with 5 columns
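As the images have not survived, here are sketch calls that reproduce those two uses. The data values are made up for illustration; only the delimiters and column counts matter:

```sql
-- Two-column table: each '|'-separated row contains two ','-separated values
EXEC [dbo].[GetDynamicTable]
      @DataList = '1,Wheel|2,Door|3,Boot'
    , @ColumnSeperator = ','
    , @RowSeperator = '|'
    , @NumberOfColumns = 2

-- Five-column table
EXEC [dbo].[GetDynamicTable]
      @DataList = 'a,b,c,d,e|f,g,h,i,j'
    , @ColumnSeperator = ','
    , @RowSeperator = '|'
    , @NumberOfColumns = 5
```

The first call returns a result set with columns Column1 and Column2 holding three rows; the second returns Column1 through Column5 holding two rows.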

I hope this article was informative. Please don’t hesitate to comment or ask questions.

Tuesday, January 3, 2012

Finding out where (in code) the connections to sql server are not getting closed

We had a problem where lots of connections were being created to the SQL Server, and the number just kept going up. The web application is simply too big to manually check every code page to see where connections were not being closed properly.

(If you are using .NET, it’s better to wrap the connection in a ‘using’ block so it is properly disposed.)

I wanted to find out which query was last run on the connections that got opened and never closed. That way I could find where in the code that SQL was being called and make sure the connection was closed there.

For this we are going to use the dynamic management view sys.dm_exec_connections and the dynamic management function sys.dm_exec_sql_text.

  • dm_exec_connections returns information about the connections established to this SQL Server instance. The column we are most interested in is most_recent_sql_handle, as this holds a handle to the SQL that was run last before the connection was abandoned.
  • dm_exec_sql_text is a table-valued function which takes a sql_handle (or plan_handle) parameter and returns, among other things, the database ID and the text (this is NULL if the object is encrypted).

The following query will give you a list of connections whose last read or write happened more than 5 minutes ago, along with the last SQL that was run on each.

SELECT *
FROM sys.dm_exec_connections C
CROSS APPLY sys.dm_exec_sql_text(C.most_recent_sql_handle) T
WHERE
last_read < DATEADD(minute, -5, GETDATE()) OR last_write < DATEADD(minute, -5, GETDATE())

Here is an image of running the above query on my SQL Server:




Once you know the last thing that was run on a connection, you just need to find where in the code it’s being called and fix it.
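A variation that can speed up the hunt (a sketch, not part of the original write-up) joins in sys.dm_exec_sessions and groups the idle connections by client program name, so the worst-offending application stands out:

```sql
-- Count idle connections per client program to spot the worst offender
SELECT S.program_name
    , COUNT(*) AS IdleConnections
FROM sys.dm_exec_connections C
INNER JOIN sys.dm_exec_sessions S
ON S.session_id = C.session_id
WHERE
C.last_read < DATEADD(minute, -5, GETDATE())
AND C.last_write < DATEADD(minute, -5, GETDATE())
GROUP BY S.program_name
ORDER BY IdleConnections DESC
```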

Hope this little article was helpful.

Friday, December 30, 2011

Concatenate values of a column to display in a row (also use of cross apply)

Hi Everyone,
I will be demonstrating how to use FOR XML PATH to concatenate column values into a single value. We will also use CROSS APPLY to select multiple concatenated values (more than one column) in a single SELECT statement.

For demonstrating the above, let’s consider the following scenario:

  1. I have a few boxes
  2. I have a few gifts (some new, some old ... ahem ... age-old regifting)

I need to record information about each box and store data about each gift (along with an indicator marking whether it is a new gift or an unwanted old gift that I want to give away). I also need to store information about which box holds which gifts.

Let’s look at the structure of the tables:

CREATE TABLE MyBoxes
(
BoxID INT IDENTITY(1,1) CONSTRAINT MyBoxes_BoxID PRIMARY KEY
, BoxName VARCHAR(20) NOT NULL
)
GO
CREATE TABLE MyGifts
(
GiftID INT IDENTITY(1,1) CONSTRAINT MyGifts_GiftID PRIMARY KEY
, GiftName VARCHAR(50) NOT NULL
)
GO



Now let’s populate these two tables with some data:

INSERT INTO MyBoxes(BoxName)
SELECT TOP 10000 'B' + CAST(ROW_NUMBER() OVER(ORDER BY C1.NAME) AS VARCHAR(20))
FROM sys.columns C1
CROSS JOIN sys.columns C2

INSERT INTO MyGifts(GiftName)
VALUES('G1'), ('G2'), ('G3'), ('G4'), ('G5')


The above statements will have inserted 10,000 boxes into the MyBoxes table and 5 gift types into the MyGifts table.

Now we need a table to store information on which gift (of a particular type) is stored in which box, and also whether the gift is new or not.

CREATE TABLE BoxedGifts
(
BoxID INT
CONSTRAINT FK_BoxedGifts_BoxID_MyBoxes_BoxID
FOREIGN KEY REFERENCES MyBoxes(BoxID)
, GiftID INT
CONSTRAINT FK_BoxedGifts_GiftID_MyGifts_GiftID
FOREIGN KEY REFERENCES MyGifts(GiftID)
, NewGift BIT
, CONSTRAINT PK_BoxedGifts
PRIMARY KEY
(
BoxID
, GiftID
)
)


Let’s populate the above table with some data:

INSERT INTO BoxedGifts(BoxID, GiftID, NewGift)
SELECT B.BoxID, G.GiftID, G.NewGift
FROM MyBoxes B
CROSS APPLY
(
SELECT TOP (B.BoxID % 5) GiftID, CAST( (GiftID % 3) AS BIT) AS NewGift
FROM MyGifts
) G


I have used CROSS APPLY and TOP to put gift items in such a way that for every five boxes, the pattern of type and number of gifts repeats.

Now I need to get the information back in the following way: I want to display each box along with the new gifts and old gifts I have put in it.

This can be accomplished in the following way:

SELECT BoxID
, ISNULL((
SELECT CAST(GiftID AS VARCHAR(10)) + ','
FROM BoxedGifts BG
WHERE
BG.BoxID = B.BoxID
AND NewGift = 0
FOR XML PATH('')
), '') AS OldGifts
, ISNULL((
SELECT CAST(GiftID AS VARCHAR(10)) + ','
FROM BoxedGifts BG
WHERE
BG.BoxID = B.BoxID
AND NewGift = 1
FOR XML PATH('')
), '') AS NewGifts

FROM MyBoxes B

The above query gives us the required output. Let’s now look at how the same thing can be accomplished using CROSS APPLY.
SELECT BoxID
, G.OldGifts
, G.NewGifts
FROM MyBoxes B
CROSS APPLY (
SELECT ISNULL((
SELECT CAST(GiftID AS VARCHAR(10)) + ','
FROM BoxedGifts BG
WHERE
BG.BoxID = B.BoxID
AND NewGift = 0
FOR XML PATH('')
), '') AS OldGifts
, ISNULL((
SELECT CAST(GiftID AS VARCHAR(10)) + ','
FROM BoxedGifts BG
WHERE
BG.BoxID = B.BoxID
AND NewGift = 1
FOR XML PATH('')
), '') AS NewGifts
) G
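Note that both queries above leave a trailing comma on each concatenated list. A common variation (a sketch, not part of the original queries) puts the comma before each value and uses STUFF to strip the leading one, giving a clean list:

```sql
SELECT B.BoxID
, ISNULL(STUFF((
      SELECT ',' + CAST(GiftID AS VARCHAR(10))
      FROM BoxedGifts BG
      WHERE
      BG.BoxID = B.BoxID
      AND NewGift = 0
      FOR XML PATH('')
  ), 1, 1, ''), '') AS OldGifts   -- STUFF removes the leading comma; ISNULL handles boxes with no old gifts
FROM MyBoxes B
```

The same STUFF wrapper can be applied to the NewGifts subquery in either of the queries above.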



The result of running the above queries can be seen in the following image:



Hope you find this article informative.

Monday, December 19, 2011

Storing Unicode string into VARBINARY (MAX)

I came across an interesting issue while working on a client project a couple of days ago. I thought it might be a good idea to share this information.
We had to populate a “data” field of type varbinary(max), the content of which is picked up by SharePoint for indexing.
We had two types of items

  1. Items which have files (these files were written directly to the ‘data’ field in their binary form)
  2. Items which didn’t have any files (the ‘data’ field needed to be populated with the title and description of the item)
There were no issues with the files, but when the ‘data’ field was populated with title and description, nothing from that field was getting indexed.
Both the title and description fields were of type nvarchar(max).

Let’s look at what happens when you write data of ‘varchar(max)’ and ‘nvarchar(max)’ into the ‘varbinary(max)’ field.

--DECLARING FIELDS TO STORE DATA IN DIFFERENT DATA TYPES
DECLARE @MyUnicodeName NVARCHAR(MAX)
DECLARE @MyName VARCHAR(MAX)
DECLARE @BinaryFieldForUnicode VARBINARY(MAX)
DECLARE @BinaryField VARBINARY(MAX)

--POPULATE NAME VARIABLES
SELECT @MyUnicodeName = N'KHAJA'
SELECT @MyName = 'KHAJA'

--POPULATE BINARY VARIABLES
SELECT @BinaryField = CAST(@MyName AS VARBINARY(MAX))
, @BinaryFieldForUnicode = CAST(@MyUnicodeName AS VARBINARY(MAX))

--DISPLAY BINARY VARIABLES
SELECT @BinaryField "Binary Field"
SELECT @BinaryFieldForUnicode "Binary Unicode Field"


Upon running the above code, the following result is returned:


It is evident that the Unicode data has an extra ‘00’ byte after each character.
If you were to read these fields and write them to a text file (using a program), the data in the txt file would look like this:

Binary Field: KHAJA
Binary Unicode Field: K H A J A




Here you can see that the txt file for the Binary Unicode field has extra spaces.
Problems/Issues
  1. When you write the content to a txt file, a space appears after each character, which makes the data not really useful for anything (in addition to the extra storage required to save it)
  2. Full-text search fails on the data stored in the varbinary(max) column.

Fix
The fix is simple; just do the following when storing the Unicode string into varbinary(max):
SELECT @BinaryFieldForUnicode = 0xFFFE + CAST(@MyUnicodeName AS VARBINARY(MAX))

Adding the UTF-16 little-endian byte order mark 0xFFFE at the start fixes both issues. If you write data from the varbinary(max) field to a txt file it will write the proper text without extra space characters, and full-text search will pick up the proper data to index.

Hope this article was helpful. If you have any questions or comments I will be happy to answer.

Friday, December 31, 2010

Use of Rank and/or Cross Apply to get top n items from an ordered grouping

If you want to get the max or min (or any other aggregate) within a grouping, a simple GROUP BY clause is more than enough.
But consider a scenario where you store orders for different products in a table, and you want to get the top two orders for each product based on quantity. We can’t rely on a simple GROUP BY query for this. To accomplish the task, we can either use a ranking function with a common table expression, or a statement with CROSS APPLY. For the rest of the article we will look at applying these two techniques (performance stats for each are also provided).

As usual, let’s start by creating our test tables and populating them with test data.
We will create two tables
1. Products (which will hold info about different products)
2. Orders (which will hold info about all the orders placed for products)

Tables are created using the following scripts


CREATE TABLE Products
(
ProductID INT IDENTITY(1,1) PRIMARY KEY
, ProductName VARCHAR(50)
)

CREATE TABLE Orders
(
OrderID INT IDENTITY(1,1) PRIMARY KEY,
ProductID INT NOT NULL FOREIGN KEY REFERENCES Products(ProductID),
Quantity INT NOT NULL
)


Having created our tables, it’s time to populate them with test data using the following scripts:

INSERT INTO Products(ProductName)
VALUES('P1'), ('P2'), ('P3'), ('P4'), ('P5'),('P6'), ('P7'), ('P8'), ('P9'), ('P10')

INSERT INTO Orders(ProductID, Quantity)
SELECT ((ROW_NUMBER() OVER( ORDER BY c1.column_ID)) % 10)+ 1 , ROW_NUMBER() OVER( ORDER BY C2.column_ID)
FROM sys.columns c1, sys.columns c2


Now let’s create an index on our Orders table to make our queries run faster:


CREATE INDEX IDX_Orders_ProductID_Quantity ON Orders( ProductID ASC, Quantity ASC)


First Method
Let’s look at using the RANK() function.

RANK(): returns the rank of each row within the partition of a result set.

Syntax:
RANK() OVER ( [ <partition_by_clause> ] <order_by_clause> )

Our ranking query looks like this:



;WITH ProductOrders AS
(
SELECT P.ProductName, O.OrderID, O.Quantity, RANK() OVER(PARTITION BY O.ProductID ORDER BY O.Quantity DESC) AS OrderRank
FROM Products P
INNER JOIN Orders O
ON P.ProductID = O.ProductID
)

SELECT *
FROM ProductOrders
WHERE OrderRank < 3


I also managed to capture some stats (IO and time):

(20 row(s) affected)
Table 'Orders'. Scan count 10, logical reads 486, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Products'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
CPU time = 218 ms, elapsed time = 217 ms.
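One caveat with RANK(): tied quantities get the same rank, so a product could return more than two rows if its top quantities happen to tie. If you want exactly two rows per product regardless of ties, a sketch of the same query using ROW_NUMBER() instead would be:

```sql
;WITH ProductOrders AS
(
SELECT P.ProductName, O.OrderID, O.Quantity
    , ROW_NUMBER() OVER(PARTITION BY O.ProductID ORDER BY O.Quantity DESC) AS OrderRowNum
FROM Products P
INNER JOIN Orders O
ON P.ProductID = O.ProductID
)

SELECT *
FROM ProductOrders
WHERE OrderRowNum <= 2
```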



Second Method
CROSS APPLY: the APPLY operator allows you to invoke a table-valued function (or a correlated subquery) for each row returned by the outer table expression of a query. CROSS APPLY returns only the rows from the outer table that produce a result set from the table-valued expression.

Query using CROSS APPLY:



SELECT P.ProductName, O.OrderID, O.Quantity
FROM Products P
CROSS APPLY ( SELECT TOP 2 *
FROM Orders
WHERE
ProductID = p.ProductID
ORDER BY Quantity DESC
) O




The stats for this query are as follows:


(20 row(s) affected)
Table 'Orders'. Scan count 10, logical reads 30, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Products'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
CPU time = 15 ms, elapsed time = 1 ms.



Both methods described above give us the required output, but the second method, using CROSS APPLY, turns out to be much more efficient on this data set (30 logical reads vs 486, and far less CPU time).

As usual, I hope you find this article interesting. Any comments, questions and suggestions are more than welcome.

Tuesday, December 21, 2010

Navigating through hierarchy using Recursive Common Table Expressions

It’s quite common to need to traverse up or down a hierarchy (a tree).
For example:
1. to find everyone working in an organization (Manager -> Sub-Manager -> Employee)
2. to list the products, the components which make up each product, right down to the individual parts that make up the components.

To accomplish these things, we used to rely on temporary tables, cursors or loops, plus some terminating logic. The introduction of recursive CTEs makes this much more efficient and easy.

Before we begin with the usage of CTEs, let’s have a brief introduction.
Common Table Expression:
A common table expression (CTE) can be thought of as a temporary result set that is defined within the execution scope of a single SELECT, INSERT, UPDATE, DELETE, or CREATE VIEW statement. A CTE is similar to a derived table in that it is not stored as an object and lasts only for the duration of the query. Unlike a derived table, a CTE can be self-referencing and can be referenced multiple times in the same query.

Basic Structure of a Recursive CTE :

WITH cte_name ( column_name [,...n] )
AS
(
CTE_query_definition -- Anchor member is defined.
UNION ALL
CTE_query_definition -- Recursive member is defined referencing cte_name.
)

-- Statement using the CTE
SELECT *
FROM cte_name

The anchor part of the recursive CTE defines the initial data set. This initial data set forms the input for the first recursive iteration, the output of which forms the input for the next recursive iteration. Recursion continues until an empty set is returned.
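As a minimal sketch of this anchor/recursive pattern, the following CTE simply generates the numbers 1 through 5:

```sql
;WITH Numbers AS
(
SELECT 1 AS N        -- anchor member: the initial data set
UNION ALL
SELECT N + 1         -- recursive member: re-runs until the WHERE
FROM Numbers         -- filter produces an empty set
WHERE N < 5
)

SELECT * FROM Numbers    -- returns 1, 2, 3, 4, 5
```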





Now let us create some test tables and populate them with dummy data.



Query to create our test table



CREATE TABLE RecursiveCTETable
(UserID INT PRIMARY KEY,
ManagerID INT)



Now let’s create an index on the ManagerID column to make retrieval faster:



CREATE INDEX IDX_ManagerID ON RecursiveCTETable(ManagerID)



Here is an easy way of populating our test data






;WITH Populator AS(
SELECT ROW_NUMBER() OVER(ORDER BY C1.object_id) AS RowNum FROM sys.columns C1 CROSS JOIN sys.columns C2
)

INSERT INTO RecursiveCTETable(UserID, ManagerID)
SELECT RowNum - 1, RowNum/7
FROM Populator

UPDATE RecursiveCTETable
SET ManagerID = NULL
WHERE
UserID = 0




Let us see how things used to be done using temp tables:




CREATE TABLE #OrganizationalHierarchy(ID INT IDENTITY(1,1) PRIMARY KEY, UserID INT, ManagerID INT)
CREATE INDEX [IDX_TempOrganizationalHierarchy_UserID] ON #OrganizationalHierarchy(UserID)
CREATE INDEX [IDX_TempOrganizationalHierarchy_ManagerID] ON #OrganizationalHierarchy(ManagerID)


DECLARE @COUNT INT, @IdentNum INT
SELECT @COUNT = 0, @IdentNum = 0

INSERT INTO #OrganizationalHierarchy(UserID, ManagerID)
SELECT UserID, ManagerID
FROM RecursiveCTETable
WHERE
ManagerID IS NULL

SET @COUNT = @@ROWCOUNT

WHILE(@COUNT > 0)
BEGIN
INSERT INTO #OrganizationalHierarchy(UserID, ManagerID)
SELECT R.UserID, R.ManagerID
FROM #OrganizationalHierarchy H
INNER JOIN RecursiveCTETable R
ON H.UserID = R.ManagerID
WHERE
ID > @IdentNum

SET @COUNT = @@ROWCOUNT
SET @IdentNum = @@IDENTITY - @COUNT
END


SELECT * FROM #OrganizationalHierarchy

DROP TABLE #OrganizationalHierarchy

The above query runs with the following stats




Now let’s use a recursive CTE to solve the same problem:


;WITH OrganizationalHierarchy AS(
SELECT UserID, ManagerID
FROM RecursiveCTETable
WHERE
ManagerID IS NULL
UNION ALL
SELECT R.UserID, R.ManagerID
FROM RecursiveCTETable R
INNER JOIN OrganizationalHierarchy H
ON H.UserID = R.ManagerID
)

SELECT * FROM OrganizationalHierarchy
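A handy extension of the CTE above (a sketch, not part of the original query) is to carry a level column so you know each user's depth in the hierarchy. Note also that recursion is capped at 100 levels by default; OPTION (MAXRECURSION n) raises that cap:

```sql
;WITH OrganizationalHierarchy AS(
SELECT UserID, ManagerID, 0 AS HierarchyLevel
FROM RecursiveCTETable
WHERE
ManagerID IS NULL
UNION ALL
SELECT R.UserID, R.ManagerID, H.HierarchyLevel + 1
FROM RecursiveCTETable R
INNER JOIN OrganizationalHierarchy H
ON H.UserID = R.ManagerID
)

SELECT * FROM OrganizationalHierarchy
OPTION (MAXRECURSION 1000)  -- default cap is 100 recursion levels
```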


Both queries provide the same results.

It’s also evident from the stats below that recursive CTEs not only give us simpler code but are also efficient.




Be sure to create the right indexes for recursive CTEs to perform well.
Hope this article is helpful; any comments or questions are highly appreciated.