SQL KBs
Sunday, March 23, 2008
SSIS Variables in Execute SQL Task
(credits go to Kirk Haselden):
When using an input variable in an Execute SQL Task, the placeholder in the query and the parameter name depend on the connection type:
OLEDB takes ? in the query and a number as the name (0, 1, ...)
ODBC takes ? in the query and a number as the name (1, 2, ...)
ADO takes ? in the query and @Variable as the name
ADO.NET takes @Variable in both the query and the name
Also make sure to choose the correct datatype for the parameter!
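As a minimal sketch (the table, column, and variable names are hypothetical), the same statement would be parameterized like this for an OLE DB connection versus an ADO.NET connection:

-- OLE DB connection: placeholder is ?, parameter mapping name is 0
UPDATE dbo.Orders SET Status = 'Closed' WHERE OrderID = ?

-- ADO.NET connection: placeholder and parameter mapping name are both @OrderID
UPDATE dbo.Orders SET Status = 'Closed' WHERE OrderID = @OrderID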
Monday, February 4, 2008
SQL error 9002 or 1105 - Insufficient Disk Space
If this error happens during a normal database update, free disk space on any disk drive containing the log file or data file for the related database.
If this error happens during database recovery, the database will be marked as RESOURCE PENDING and will be inaccessible. To solve the problem:
1. Free disk space on any disk drive containing the log file for the related database
2. Use ALTER DATABASE SET ONLINE to restart the database
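A minimal sketch of those steps (the database name is hypothetical); DBCC SQLPERF(LOGSPACE) shows how full each transaction log is before you free space:

-- Check log file usage across all databases
DBCC SQLPERF(LOGSPACE)

-- After freeing disk space on the drive holding the log file,
-- bring the database back online
ALTER DATABASE MyDatabase SET ONLINE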
Wednesday, January 23, 2008
Performance Counters to watch
Hardware Counters
Memory \ Available Bytes Shows the available amount of physical memory on the server. An acceptable output for this may vary widely based on how much physical memory is in the machine. If you have 2GB of RAM installed on the machine, it is common to see SQL Server use 1.7GB of RAM. If no other processes are running on your SQL Server, make sure you have at least 80MB available for Windows at any given time. If you see this counter below that amount, I would recommend buying additional RAM immediately.
Memory \ Pages/sec Shows the number of pages that are read from or written to disk. This causes hard page faults, which cause SQL Server to go to page file versus memory. If this counter averages 20, you may want to add additional RAM to stop the paging.
Network Interface \ Bytes total/sec This counter shows the amount of traffic through your network interface in bytes per second. Once you do your baseline (I'll discuss this in a moment), you'll know you have a problem when this number drops or rises a huge amount.
Paging File \ % Usage Similar to the Memory \ Pages/sec counter, this shows the percentage of the page file that is being utilized. If you see more than 70 percent of your page file being utilized, look into more RAM for your server.
Physical Disk \ % Disk Time This counter shows how active your disk is in percentage form. If this counter sustains an average above 70 percent, you may have contention with your drive or RAM.
SQL Server Counters
SQLServer:Access Methods \ Full Scans/sec This shows the DBA how many full table or index scans are occurring per second. If this number is significantly higher than your baseline, the performance of your application may be slow.
SQLServer:Buffer Manager \ Buffer Cache Hit Ratio This shows the ratio of pages served from memory versus read from disk. I like to see this number as close to 100 percent as possible, but generally 90 percent is very respectable. If this number is low, it may mean that SQL Server is not obtaining enough memory from the operating system.
SQLServer:Databases (application database instance) \ Transactions/sec Shows the number of transactions per second on a given database, or on the entire SQL Server when using the _Total instance. This number is more for your baseline and to help you troubleshoot issues. For example, if you normally show 120 transactions per second as your baseline and you come to work one Monday and see your server at 5,000 transactions per second, you will want to question the activity on your server.
SQLServer:General Statistics \ User Connections Like the transactions per second, this counter is merely used for creating a baseline on a server and in the troubleshooting process. It shows the number of user connections on your SQL Server. If you see this number jump by 500 percent from your baseline, you may be seeing a slowdown in your activity due to a good response from your marketing campaign.
SQLServer:Latches \ Average Latch Wait Time (ms) Shows the average time for a latch to wait before the request is met. If you see this number jump high above your baseline, you may have contention for your server's resources.
SQLServer:Locks \ Lock Waits/sec Shows the number of locks per second that could not be satisfied immediately and had to wait for resources.
SQLServer:Locks \ Lock Timeouts/sec This counter shows the number of locks per second that timed out. If you see anything above 0 for this counter, your users will experience problems as their queries are not completing.
SQLServer:Locks \ Number of Deadlocks/sec This counter shows the number of deadlocks on the SQL Server per second. Again, if you see anything above 0, your users and applications will experience problems. Their queries will abort and the applications may fail.
SQLServer:Memory Manager \ Total Server Memory Shows the amount of memory that SQL Server has allocated to it. If this memory is equal to the amount of total physical memory on the machine, you could be experiencing contention since you're not leaving Windows any RAM to perform its normal operations.
SQLServer:SQL Statistics \ SQL Re-Compilations/sec This counter shows the number of SQL recompilations per second. If this number is high, stored procedure execution plans may not be caching appropriately. Like other counters, this needs to be placed into a baseline and watched to make sure it's not moving radically from that baseline.
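On SQL Server 2005, many of these SQL Server counters can also be sampled from T-SQL through sys.dm_os_performance_counters. A rough sketch for the buffer cache hit ratio (the ratio counter has to be divided by its base counter):

-- Buffer cache hit ratio, in percent
SELECT CAST(a.cntr_value AS float) / NULLIF(b.cntr_value, 0) * 100 AS BufferCacheHitRatio
FROM sys.dm_os_performance_counters a
JOIN sys.dm_os_performance_counters b
  ON b.object_name = a.object_name
 AND b.counter_name = 'Buffer cache hit ratio base'
WHERE a.object_name LIKE '%Buffer Manager%'
  AND a.counter_name = 'Buffer cache hit ratio'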
Trace flag
To enable a trace flag: DBCC TRACEON (flag) or DBCC TRACEON (flag, -1). The latter applies the trace flag to all connections, whereas the first applies only to the current connection.
To disable a trace flag: DBCC TRACEOFF (flag)
To check trace flag status: DBCC TRACESTATUS (-1)
Deadlock Information (1204)
This commonly used trace flag detects deadlocks and outputs the deadlock information. I'll cover much more detail about deadlocks and this trace flag in the next chapter.
Detailed Deadlock Information (1205)
This trace flag sends detailed information about the deadlock to the error log.
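For example, to capture deadlock details in the error log for all connections, a sketch combining these flags with the commands above:

-- Enable deadlock reporting (1204) and route trace output to the error log (3605)
DBCC TRACEON (1204, 3605, -1)

-- Confirm which trace flags are active globally
DBCC TRACESTATUS (-1)

-- Turn them back off when finished
DBCC TRACEOFF (1204, 3605, -1)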
Disable Parallel Checking (2528)
You can use this trace flag to prevent SQL Server from using any processor other than the primary processor when performing consistency checks (DBCCs). If you have a multiprocessor machine running a DBCC command, enabling this flag can worsen performance considerably. (SQL Server uses multiple processors for running a DBCC command starting with SQL Server 2000.)
Network Database Files (1807)
SQL Server will not allow you to create a database or log file on a networked drive by default. If you attempt to do this, you receive error 5105, which states 'Device Activation Error.' The 1807 trace flag provides a workaround for this restriction, and you can create database files on a mapped drive or UNC path.
However, just because you can do something doesn't mean you should. This trace is undocumented for a good reason. It is an incredibly bad idea to place a database file on a network drive. The support for this feature was added to help vendors like Network Appliance (http://www.netapp.com/). Storing data on a network drive opens another potential can of worms, because you must monitor and provide redundancy for that drive. If network connectivity is interrupted, your database goes down.
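If you still need the workaround in a test environment, a sketch might look like this (the share, path, and database name are hypothetical):

-- Allow database files on UNC paths for all connections
DBCC TRACEON (1807, -1)

-- Create a database whose files live on a network share
CREATE DATABASE NetDB
ON (NAME = NetDB_Data, FILENAME = '\\fileserver\sqlshare\NetDB.mdf')
LOG ON (NAME = NetDB_Log, FILENAME = '\\fileserver\sqlshare\NetDB.ldf')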
Send Trace Output to Client (3604)
Trace flag 3604 is the most commonly used trace flag. It sends the output of a trace to the client. For example, before you can run DBCC PAGE, which views data page information, you must run this trace flag.
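For example (the database name, file number, and page number below are placeholders, and DBCC PAGE itself is undocumented, so treat this as a sketch):

-- Send DBCC output to the client instead of the error log
DBCC TRACEON (3604)

-- Dump page 1 of file 1 in a hypothetical database, with full output (print option 3)
DBCC PAGE ('MyDatabase', 1, 1, 3)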
Send Trace Output to Error Log (3605)
This trace is similar to trace flag 3604, but this flag sends the results to the error log.
Skip Automatic Recovery (3607)
Trace flag 3607 skips the recovery of databases on the startup of SQL Server and clears the TempDB. Setting this flag lets you get past certain crashes, but there is a chance that some data will be lost.
Skip Automatic Recovery Except Master (3608)
This trace is similar to 3607, but the TempDB in this case is not cleared and only the master database is recovered.
Log Record for Connections (4013)
This trace flag writes an entry to the SQL Server error log when a new connection is established. If you set this option, your error log can fill up quickly. For each connection that occurs, the trace flag writes two entries that look like this:
Login: sa saSQL Query Analyzer(local)ODBCmaster, server process ID (SPID): 57, kernel process ID (KPID): 57.
Login: sa XANADUsaSQL Query Analyzer(local)ODBCmaster, server process ID (SPID): 57, kernel process ID (KPID): 57.
Skip Startup Stored Procedures (4022)
This is a handy trace flag for troubleshooting. It forces SQL Server to skip startup stored procedures. This is especially useful if a stored procedure has been altered and causes harm to your system. After you set this trace flag, you can then debug the stored procedure and set it back to its original state.
Detailed Linked Server Error Messages (7300)
The error messages that linked servers return are sometimes very generic. Turn on the 7300 trace flag if you want to receive more detailed information.
Ignore All Index Hints (8602)
Trace flag 8602 is a commonly used trace flag to ignore index hints that are specified in a query or stored procedure. This is a fantastic option when you're trying to determine if an index hint is hurting more than helping. Rather than rewriting the query, you can disable the hint using this trace flag and rerun the query to determine if SQL Server is handling the index selection better than the index hint.
Disable Locking Hints (8755)
Trace flag 8755 will disable any locking hints, such as NOLOCK or ROWLOCK. By setting this, you allow SQL Server to dynamically select the best locking strategy for the query. If you feel the query's locking hint may be hurting performance, you can disable it and rerun the query.
Disable All Other Hints (8722)
Lastly, the 8722 trace flag will disable all other types of hints. This includes the OPTION clause.
Tip: By enabling all three trace flags (8602, 8755, and 8722), you can disable all hints in a query. If you feel your performance is being negatively affected by a hint, you can set these rather than rewrite all the queries while you test. Generally speaking, there's no reason to place hints on queries in SQL Server 7.0 or 2000.
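A sketch of that approach, using a hypothetical hinted query:

-- Ignore index, locking, and all other hints while testing
DBCC TRACEON (8602, 8755, 8722)

-- Rerun the hinted query and compare the plan and performance
SELECT OrderID, Status
FROM dbo.Orders WITH (INDEX(IX_Orders_Status))
WHERE Status = 'Open'

DBCC TRACEOFF (8602, 8755, 8722)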
Tuesday, January 22, 2008
Upgrade SQL2000 to SQL2005
To maintain a production system using SI (Snapshot Isolation), you should allocate enough disk space for tempdb so that there is always at least 10 percent free space. If the free space falls below this threshold, system performance might suffer because SQL Server will expend more resources trying to reclaim space in the version store. The following formula gives a rough estimate of the size required by the version store; for long-running transactions, it might be useful to monitor the generation and cleanup rates using Performance Monitor to estimate the maximum size needed:
[size of common version store] = 2 * [version store data generated per minute] * [longest running time (minutes) of the transaction]
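A rough sketch for sampling those rates from T-SQL on SQL Server 2005, using the counters of the SQLServer:Transactions Performance Monitor object (the worked numbers in the comment are hypothetical):

-- Current version store size and generation/cleanup rates
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Transactions%'
  AND counter_name IN ('Version Store Size (KB)',
                       'Version Generation rate (KB/s)',
                       'Version Cleanup rate (KB/s)')

-- Applying the formula: 5 MB of version data generated per minute and a
-- longest-running transaction of 60 minutes gives 2 * 5 * 60 = 600 MB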
Sunday, January 20, 2008
Estimate table size for growth
This code is mentioned in Inside Microsoft SQL Server 2005 - The Storage Engine
It contains 4 sps: sp_EstTableSize, sp_EstTableSizeNCI, sp_EstTableSizeCI, sp_EstTableSizeData
Modified code to output size in Mb. - 17 Jan 2008
The procedure calculates the storage requirements for the table and all indexes by extracting information from the sys.indexes, sys.columns, and sys.types views.
The result is only an estimate when you have variable-length fields. The procedure has no way of knowing whether a variable-length field will be completely filled, half filled, or mostly empty in every row, so it assumes that variable-length columns will be filled to the maximum size.
If you know that this won't be the case with your data, you can create a second table that more closely matches the expected data. For example, if you have a varchar(1000) column that will only rarely use the full 1000 bytes, and 99 percent of your rows will use only about 100 bytes, you can create a second table with a varchar(100) column and run sp_EstTableSize on that table.
if object_id('dbo.sp_EstTableSizeNCI', 'P') IS NOT NULL drop procedure dbo.sp_EstTableSizeNCI
go
create procedure dbo.sp_EstTableSizeNCI
@ObjId int, @Num_Rows int, @DataPages int, @CIndex_Row_Size int, @Has_UniqueClusteredIndex int,
@NCIPages int OUTPUT
as
/*
**
** Do not call the procedure directly. It will be called
** from the main procedure: sp_estTableSize
**
**
*/
declare @Num_Key_Cols as smallint, @Num_Variable_Key_Cols as smallint
declare @Fixed_Key_Size as int, @Max_Var_Key_Size as int
declare @Index_Null_Bitmap as smallint, @Variable_Key_Size as int
declare @Index_Row_Size as int, @Index_Rows_Per_Page as int
declare @Free_Index_Rows_Per_Page as int, @T as int
declare @Est_Num_Pages as int
declare @Variable_Col as tinyint, @Var_Size as int
declare @LevelPages as int, @CLevel as int, @PrevCount as int
set @PrevCount = 0
-- Get the index id's of the non-clustered indexes
declare NCI_cursor insensitive cursor
for
select indid, name
from sysindexes
where id = @ObjId
and indid < 255 -- exclude pointers to text / ntext / image pages
and IndexProperty(@ObjId, name, 'IsClustered') = 0
and IndexProperty(@ObjId, name, 'IsStatistics') = 0
and IndexProperty(@ObjId, name, 'IsHypothetical') = 0
for read only
set @TableName = object_name(@ObjId)
begin -- cursor loop
set @Num_Key_Cols = 0
set @Num_Variable_Key_Cols = 0
set @Max_Var_Key_Size = 0
set @indkey = 1
begin
set @IndexColName = INDEX_COL(@TableName, @Indid, @indkey)
if @IndexColName IS NULL Break
@Variable_Col = st.variable,
@Var_Size = sc.length * st.variable
on (so.id = sc.id)
on (sc.xtype = st.xtype)
and sc.name = @IndexColName
set @Num_Variable_Key_Cols = @Num_Variable_Key_Cols + @Variable_Col
-- as a variable length column.
set @Num_Variable_Key_Cols = @Num_Variable_Key_Cols +
case when @CIndex_Row_Size > 0 and @Has_UniqueClusteredIndex = 1
then 1
else 0
end
set @indkey = @indkey + 1
then 0
else (2 + (( @Num_Key_Cols + 7) / 8 ) )
end
then 0
else 2 + @Num_Variable_Key_Cols + @Num_Variable_Key_Cols +
@Max_Var_Key_Size
end
@Fixed_Key_Size + @Variable_Key_Size + @Index_Null_Bitmap + 9
set @Index_Rows_Per_Page = 8096 / (@Index_Row_Size + 2)
select @Fill_Factor = IndexProperty (@ObjId, indid, 'IndexFillFactor')
from sysindexes
where id = @ObjId
-- According to Books Online (CREATE INDEX (T-SQL)):
-- for non-clustered indexes, space is always left on non-leaf pages
-- to accomomdate one additional row regardless of the setting of
-- FillFactor. I believe this is incorrect, and that, as in
-- SQL Server 6.x, a FillFactor from 0% to 99% results in one free row,
-- while a setting of 100% results in no free rows.
set @Free_Index_Rows_Per_Page = case when @Fill_Factor = 100 then 0 else 1 end
-- level of the index
set @CLevel = @Num_Rows
-- "The number of rows on an intermediate index page is never less than two,
-- regardless of how low the value of FILLFACTOR."
set @T = case when @T < 2 then 2 else @T end
begin
set @NCIPages = @NCIPages + @LevelPages
set @CLevel = @LevelPages
insert into #ncindexes (IndexName, IndexPages) values (@IndexName, @NCIPages - @PrevCount)
end -- cursor loop
deallocate NCI_cursor
go
if object_id('dbo.sp_EstTableSizeCI', 'P') IS NOT NULL drop procedure dbo.sp_EstTableSizeCI
go
create procedure dbo.sp_EstTableSizeCI
@ObjId int, @Num_Rows int, @DataPages int,
@Indid smallint, @Has_UniqueClusteredIndex int,
@CIndex_Row_Size int OUTPUT, @CIPages int OUTPUT
as
/*
**
** Do not call the procedure directly. It will be called
** from the main procedure: sp_estTableSize
**
**
*/
set nocount on
declare @Fixed_CKey_Size as int, @Max_Var_CKey_Size as int
declare @CIndex_Null_Bitmap as smallint, @Variable_CKey_Size as int
declare @CIndex_Rows_Per_Page as int
declare @Free_CIndex_Rows_Per_Page as int, @T as int
declare @Est_Num_Pages as int
declare @Variable_Col as tinyint, @Var_Size as int
set @TableName = object_name(@ObjId)
set @Num_CKey_Cols = 0
then 0
else 4
end
set @Max_Var_CKey_Size = 0
set @indkey = 1
begin
set @IndexColName = INDEX_COL(@TableName, @Indid, @indkey)
if @IndexColName IS NULL Break
@Variable_Col = st.variable,
@Var_Size = sc.length * st.variable
on (so.id = sc.id)
on (sc.xtype = st.xtype)
and sc.name = @IndexColName
set @Num_Variable_CKey_Cols = @Num_Variable_CKey_Cols + @Variable_Col
set @Max_Var_CKey_Size = @Max_Var_CKey_Size + @Var_Size
set @indkey = @indkey + 1
then 0
else 2 + @Num_Variable_CKey_Cols + @Num_Variable_CKey_Cols +
@Max_Var_CKey_Size
end
set @CIndex_Rows_Per_Page = 8096 / (@CIndex_Row_Size + 2)
select @Fill_Factor = IndexProperty (@ObjId, object_name(@Indid), 'IndexFillFactor')
from sysindexes
where id = @ObjId
and indid = @Indid
case
when @Has_UniqueClusteredIndex = 1
then case when @Fill_Factor = 100 then 0 else 2 end
end
-- level of the index
set @CLevel = @DataPages -- number of pages needed to store table data
set @CIPages = 0
-- "The number of rows on an intermediate index page is never less than two,
-- regardless of how low the value of FILLFACTOR."
set @T = case when @T < 2 then 2 else @T end
begin
set @CIPages = @CIPages + @LevelPages
set @CLevel = @LevelPages
go
if object_id('dbo.sp_EstTableSizeData', 'P') IS NOT NULL drop procedure dbo.sp_EstTableSizeData
go
create procedure dbo.sp_EstTableSizeData
@ObjId int, @Num_Rows int,
@Has_ClusteredIndex int, @Has_UniqueClusteredIndex int,
@DataPages int OUTPUT
as
/*
**
** Do not call the procedure directly. It will be called
** from the main procedure: sp_estTableSize
**
**
*/
--<-- Addition #2: A non-unique clustered index on the table adds a "uniqueifer" to
-- "subsequent" instances of duplicate keys. This "uniqueifer" appears
-- not only on the index pages, but also on the leaf (data) pages.
-- See sp_EstTableSizeCI for additional information.
declare @Fixed_Data_Size as int, @Max_Var_Size as int
declare @Null_Bitmap as smallint, @Variable_Data_Size as int
declare @Row_Size as int, @Rows_Per_Page as int
declare @Free_Rows_Per_Page as int
-- column is fixed or variable
select @Num_Cols = count(*),
@Fixed_Data_Size = sum(sc.length * (1 - st.variable)),
@Num_Variable_Cols = sum(cast(st.variable as smallint)),
@Max_Var_Size = sum(sc.length * st.variable)
on so.id = sc.id
on sc.xtype = st.xtype
and ObjectProperty (so.id, 'IsUserTable') = 1
and ColumnProperty (so.id, sc.name, 'IsComputed') = 0 --<-- Addition #1
set @Variable_Data_Size = case
when @Num_Variable_Cols = 0
then 0
else 2 + (@Num_Variable_Cols + @Num_Variable_Cols) +
@Max_Var_Size
end
set @Row_Size = @Fixed_Data_Size + @Variable_Data_Size + @Null_Bitmap + 4 +
case when @Has_ClusteredIndex = 1 and @Has_UniqueClusteredIndex = 0
then 4
else 0
end
set @Rows_Per_Page = 8096 / (@Row_Size + 2)
select @Fill_Factor = case
when IndexProperty (@ObjId, name, 'IndexFillFactor') IS NULL
then 100
when IndexProperty (@ObjId, name, 'IndexFillFactor') = 0
then 100
else IndexProperty (@ObjId, name, 'IndexFillFactor')
end
from sysindexes
where id = @ObjId
and IndexProperty(@ObjId, name, 'IsClustered') = 1
set @Fill_Factor = Coalesce (@Fill_Factor, 100)
if object_id('dbo.sp_EstTableSize', 'P') IS NOT NULL drop procedure dbo.sp_EstTableSize
go
create procedure dbo.sp_EstTableSize
@TableName sysname = NULL,
@Num_Rows int = NULL
as
-- The Microsoft SQL Server 7.0 Books Online, plus
-- 'Inside Microsoft SQL Server 7.0' by Ron Soukup and Kalen Delaney,
-- published by Microsoft Press, plus
-- additional information generously supplied by Kalen Delaney.
if (@TableName IS NULL) or (@Num_Rows IS NULL) goto usage
-- table
declare @ObjId int, @ErrMsg varchar(255)
if (@ObjId IS NULL) OR (ObjectProperty (@ObjId, 'IsUserTable') = 0)
begin
set @ErrMsg = @TableName + ' is not a user table in the current database'
goto ErrExit
end
if (@Num_Rows <= 0)
begin
set @ErrMsg = 'Please enter a positive number for @Num_Rows (estimated number of rows)'
goto ErrExit
end
-- calculating the pages needed to store data (sp_EstTableSizeData)
-- calculating the pages needed to store the clustered index (sp_EstTableSizeCI)
-- calculating the pages needed to store each non-clustered index (sp_EstTableSizeNCI)
-- adding the pages together to arrive at a grand total
-- 'If the clustered index is not a unique index, SQL Server 7.0 adds an internal
-- value to duplicate keys to make them unique. This value is not visible to users.'
-- Thanks to Kalen Delaney for establishing that the length of the internal value is
-- 4-bytes (some Microsoft documentation incorrectly states that the length is 8-bytes).
-- Note that this 4-byte 'Uniqueifer' is added only to 'subsequent' instances of duplicate
-- keys. Since the code you are reading is designed to estimate space requirements, we can
-- not assume that the table given to us is populated with data. (If it were populated
-- with data, then it would make better sense to run the system stored procedure
-- sp_spaceused to determine the actual space requirements of the table.)
-- Since we can not assume that the table is populated, we can not make an accurate,
-- or even an informed guess, as to the distribution of data in the clustered index.
-- Thus, keeping in line with the view that we are creating an upper bound of storage
-- requirements, we assume that for a non-unique clustered index, that every row
-- pointer has this extra 4-byte Uniqueifer.
-- not only in the index pages, but also in the data pages.
-- of the sub-procedures, the determination of its existence is determined up-front.
declare @Has_ClusteredIndex as int, @Has_UniqueClusteredIndex as int
begin
set @CI_name = NULL
set @Has_UniqueClusteredIndex = 0
end
else
begin -- get the name and index id of the clustered index
select @CI_name = name, @Indid = indid
from sysindexes
where id = @ObjId
and IndexProperty (@Objid, name, 'IsClustered') = 1
end
-- Create a temporary table to hold the details on the non-clustered indexes.
create table #ncindexes
(
IndexName sysname,
IndexPages int
)
declare @DataPages as int, @CIndex_Row_Size as int, @CIPages as int, @NCIPages as int
@Has_ClusteredIndex, @Has_UniqueClusteredIndex,
@DataPages OUTPUT
exec sp_EstTableSizeCI @ObjId, @Num_Rows, @DataPages,
@Indid, @Has_UniqueClusteredIndex,
@CIndex_Row_Size OUTPUT, @CIPages OUTPUT
else
begin
set @CIPages = 0
set @CIndex_Row_Size = 0
end
@CIndex_Row_Size, @Has_UniqueClusteredIndex, @NCIPages OUTPUT
select 'Total Pages' = @DataPages + @CIPages + @NCIPages,
'Data Pages' = @DataPages,
'Clustered Index Pages' = @CIPages,
'Non-Clustered Index Pages' = @NCIPages,
'Total Size (Mb)' = (@DataPages + @CIPages + @NCIPages) * 8 / 1024
begin
print ' '
print 'Non-Clustered Index Details'
select 'Index Pages' = IndexPages, 'Index size (Mb)' = IndexPages * 8 / 1024, 'Index Name' = IndexName
from #ncindexes
order by IndexName
end
print ' '
print 'Usage:'
print ' exec sp_EstTableSize [@TableName=]userTableName, [@Num_Rows=]estimatedNumberRowsInTable'
print ' '
print 'Note that the result is an ESTIMATE of the storage to be used by the table.'
print ' '
print 'It should be treated as an upper bound of the storage requirements because it is'
print 'assumed that variable length data (varchar, nvarchar, varbinary) will use the'
print 'maximum storage on every row. That is, it is assumed that a varchar(100) field'
print 'will always contain 100 bytes of data.'
print ' '
print 'If your knowledge of the actual data is such that you may assume that on average'
print 'only 20 bytes will be stored in the varchar(100) field, then for a more accurate'
print 'estimate, create a second table definition with a varchar(20) field and use that'
print 'table as input to sp_EstTableSize.'
RETURN (0)
ErrExit:
print ' '
print @ErrMsg
RETURN (-1)
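An example invocation (the table name and row count are hypothetical):

-- Estimate the space needed for one million rows in a hypothetical table
exec sp_EstTableSize @TableName = 'Orders', @Num_Rows = 1000000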
Thursday, January 17, 2008
Index
CREATE [UNIQUE] [CLUSTERED | NONCLUSTERED] INDEX index_name
ON table_name (column_name [ASC | DESC][,...n])
[WITH
[FILLFACTOR = fillfactor]
[[,] PAD_INDEX = { ON | OFF }]
[[,] DROP_EXISTING = { ON | OFF }]
[[,] IGNORE_DUP_KEY = { ON | OFF }]
[[,] SORT_IN_TEMPDB = { ON | OFF }]
[[,] STATISTICS_NORECOMPUTE = { ON | OFF }]
[[,] ALLOW_ROW_LOCKS = { ON | OFF }]
[[,] ALLOW_PAGE_LOCKS = { ON | OFF }]
[[,] MAXDOP = max_degree_of_parallelism]
[[,] ONLINE = { ON | OFF }]
]
SQL Server 2000 does not use ON and OFF. If you want an option set to ON, specify the option; if you don't want the option, don't specify it. SQL Server 2005 offers a backward-compatible syntax without the ON and OFF keywords, but for new development the new syntax is recommended.
To avoid index page splitting, specify FILLFACTOR and PAD_INDEX.
FILLFACTOR generally applies only to the index's leaf pages (the data pages for a clustered index).
PAD_INDEX applies to the intermediate index pages and uses the same FILLFACTOR value.
If no fillfactor is specified, the server-wide default is used. The server-wide default value (0 by default) is set via the sp_configure procedure with the fill factor option. So if you want a fillfactor to apply to all databases on a server, set the server-wide default value.
FILLFACTOR and PAD_INDEX are only applied when an index is created (or re-created). If a considerable number of modifications happen afterwards (such as a bulk insert operation), you might need to rebuild the index, because the bulk operation might have filled up the reserved free space and caused page splits, which in turn degrades query performance.
When rebuilding an index, if no FILLFACTOR is specified, the original FILLFACTOR value is used.
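For example (the table, column, and index names are hypothetical), using the SQL Server 2005 parenthesized option syntax:

-- Leave 20% free space on leaf pages and pad the intermediate pages the same way
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID ASC)
WITH (FILLFACTOR = 80, PAD_INDEX = ON)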
DROP_EXISTING specifies that a given index should be dropped and rebuilt as a single transaction. It enhances performance when re-creating a clustered index on a table with non-clustered indexes, because the non-clustered indexes are re-created only once, after the clustered index is built, and only if the clustered index key changes (that is, different key columns than the original index). Without this option, the non-clustered indexes are re-created twice when a clustered index is re-created.
A clustered index cannot be converted to a non-clustered index using DROP_EXISTING. However, in SQL 2000, a unique clustered index can be changed to a non-unique index, and vice versa. In SQL 2005, all clustered indexes are internally unique: if you build a clustered index without specifying the UNIQUE keyword, SQL Server guarantees uniqueness internally by adding a uniqueifier to the rows when necessary.
SORT_IN_TEMPDB: By default (without this option, or with it set to OFF), SQL Server uses space from the filegroup on which the index is to be created. When this option is used, sort buffers are allocated from tempdb, so much less space is needed in the source database. Performance can also be greatly improved if the tempdb database is on a separate physical disk from the database you're working with, and even more so if tempdb is on a faster disk than your user database. As an alternative to SORT_IN_TEMPDB, you can create separate filegroups for a table and its indexes (the table on one filegroup, its indexes on another); if the two filegroups are on different disks, you can also minimize disk head movement and in turn improve index creation performance.
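A sketch of rebuilding a clustered index with both options (the table, column, and index names are hypothetical):

-- Rebuild the existing clustered index in a single transaction, sorting in tempdb;
-- the non-clustered indexes are not rebuilt because the key columns do not change
CREATE CLUSTERED INDEX CIX_Orders_OrderDate
ON dbo.Orders (OrderDate)
WITH (DROP_EXISTING = ON, SORT_IN_TEMPDB = ON, FILLFACTOR = 90)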