C# code to obtain maximum SQL Server 2012 write performance


I want to know whether we are already achieving the fastest SQL Server write performance possible for our application.

We have created a sample application that performs a SqlBulkCopy operation against a local SQL Server database. The bulk copy writes 100,000 rows of data from a DataTable in memory. The table being inserted into has no indexes. The reason for this is that we want to measure the maximum raw write speed of SQL Server.

The schema of the table we are inserting into:

    CREATE TABLE [dbo].[HistorySampleValues] (
        [HistoryParameterID] [bigint] NULL,
        [SourceTimestamp] [datetime2](7) NOT NULL,
        [ArchiveTimestamp] [datetime2](7) NOT NULL,
        [ValueStatus] [int] NOT NULL,
        [ArchiveStatus] [int] NOT NULL,
        [IntegerValue] [int] SPARSE NULL,
        [DoubleValue] [float] SPARSE NULL,
        [StringValue] [varchar](100) SPARSE NULL,
        [EnumNamedSetName] [varchar](100) SPARSE NULL,
        [EnumNumericValue] [int] SPARSE NULL,
        [EnumTextualValue] [varchar](256) SPARSE NULL
    ) ON [PRIMARY]

We measure performance from our C# code with this method:

    public double PerformBulkCopy()
    {
        DateTime timeToBulkCopy = DateTime.Now;
        double bulkCopyTimeSpentMs = -1.0;

        DataTable historySampleValuesDataTable = CreateBulkCopyRecords();

        // Start the timer here.
        timeToBulkCopy = DateTime.Now;

        using (SqlConnection sqlConn = ConnectDatabase())
        {
            sqlConn.Open();

            using (SqlTransaction sqlTransaction = sqlConn.BeginTransaction())
            {
                try
                {
                    using (SqlBulkCopy sqlBulkCopy = new SqlBulkCopy(sqlConn, SqlBulkCopyOptions.KeepIdentity, sqlTransaction))
                    {
                        sqlBulkCopy.ColumnMappings.Add(SqlServerDatabaseStrings.SQL_FIELD_HISTORY_PARMETER_ID, SqlServerDatabaseStrings.SQL_FIELD_HISTORY_PARMETER_ID);
                        sqlBulkCopy.ColumnMappings.Add(SqlServerDatabaseStrings.SQL_FIELD_SOURCE_TIMESTAMP, SqlServerDatabaseStrings.SQL_FIELD_SOURCE_TIMESTAMP);
                        sqlBulkCopy.ColumnMappings.Add(SqlServerDatabaseStrings.SQL_FIELD_VALUE_STATUS, SqlServerDatabaseStrings.SQL_FIELD_VALUE_STATUS);
                        sqlBulkCopy.ColumnMappings.Add(SqlServerDatabaseStrings.SQL_FIELD_ARCHIVE_STATUS, SqlServerDatabaseStrings.SQL_FIELD_ARCHIVE_STATUS);
                        sqlBulkCopy.ColumnMappings.Add(SqlServerDatabaseStrings.SQL_FIELD_INTEGER_VALUE, SqlServerDatabaseStrings.SQL_FIELD_INTEGER_VALUE);
                        sqlBulkCopy.ColumnMappings.Add(SqlServerDatabaseStrings.SQL_FIELD_DOUBLE_VALUE, SqlServerDatabaseStrings.SQL_FIELD_DOUBLE_VALUE);
                        sqlBulkCopy.ColumnMappings.Add(SqlServerDatabaseStrings.SQL_FIELD_STRING_VALUE, SqlServerDatabaseStrings.SQL_FIELD_STRING_VALUE);
                        sqlBulkCopy.ColumnMappings.Add(SqlServerDatabaseStrings.SQL_FIELD_ENUM_NAMEDSET_NAME, SqlServerDatabaseStrings.SQL_FIELD_ENUM_NAMEDSET_NAME);
                        sqlBulkCopy.ColumnMappings.Add(SqlServerDatabaseStrings.SQL_FIELD_ENUM_NUMERIC_VALUE, SqlServerDatabaseStrings.SQL_FIELD_ENUM_NUMERIC_VALUE);
                        sqlBulkCopy.ColumnMappings.Add(SqlServerDatabaseStrings.SQL_FIELD_ENUM_TEXTUAL_VALUE, SqlServerDatabaseStrings.SQL_FIELD_ENUM_TEXTUAL_VALUE);

                        sqlBulkCopy.DestinationTableName = SqlServerDatabaseStrings.SQL_TABLE_HISTORYSAMPLEVALUES;
                        sqlBulkCopy.WriteToServer(historySampleValuesDataTable);
                    }
                    sqlTransaction.Commit();

                    // End the timer here.
                    bulkCopyTimeSpentMs = DateTime.Now.Subtract(timeToBulkCopy).TotalMilliseconds;
                }
                catch (Exception ex)
                {
                    sqlTransaction.Rollback();
                }

                CleanupDatabase(sqlConn);
            }
            sqlConn.Close();
        }

        return bulkCopyTimeSpentMs;
    }

I have tried the various overloads of SqlBulkCopy.WriteToServer(): DataTable, IDataReader, and DataRow[].
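For reference, the IDataReader overload lets SqlBulkCopy stream rows as it writes, without first materializing the full row set as a DataTable or DataRow[]. A minimal sketch, assuming a hypothetical source table and placeholder connection strings (not the asker's actual code):

```csharp
using System.Data.SqlClient;

class BulkCopyReaderSketch
{
    // Streams rows from a source query straight into the destination table.
    // Connection strings and the source table name are placeholders.
    static void CopyViaReader(string sourceConnStr, string destConnStr)
    {
        using (var srcConn = new SqlConnection(sourceConnStr))
        using (var destConn = new SqlConnection(destConnStr))
        {
            srcConn.Open();
            destConn.Open();

            using (var cmd = new SqlCommand("SELECT * FROM [dbo].[SourceTable]", srcConn))
            using (SqlDataReader reader = cmd.ExecuteReader())
            using (var bulkCopy = new SqlBulkCopy(destConn))
            {
                bulkCopy.DestinationTableName = "[dbo].[HistorySampleValues]";
                // WriteToServer pulls rows from the reader one at a time,
                // so memory use stays flat regardless of row count.
                bulkCopy.WriteToServer(reader);
            }
        }
    }
}
```

With an in-memory DataTable as the source, as in the question, the streaming overload mostly matters for memory rather than raw insert speed.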

On a machine with these specs: i3-2120 CPU @ 3.30 GHz, 8 GB RAM, Seagate Barracuda 7200.12 ST3500413AS 500 GB 7200 RPM.

Across the different overloads, I consistently get 150K-160K rows inserted per second.

By the way, the creation of the DataTable is not included in what I measure, because I just wanted to get the real baseline performance of SqlBulkCopy.
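As a side note, `System.Diagnostics.Stopwatch` is generally preferred over subtracting `DateTime.Now` values for this kind of measurement, since `DateTime.Now` has coarse resolution (around 15 ms on many Windows systems) while Stopwatch uses the high-resolution performance counter when one is available. A minimal sketch of the timing pattern (the helper name is mine, not from the question):

```csharp
using System;
using System.Diagnostics;

static class TimingSketch
{
    // Times an arbitrary action and returns elapsed wall-clock milliseconds.
    static double MeasureMs(Action work)
    {
        var sw = Stopwatch.StartNew();
        work();
        sw.Stop();
        return sw.Elapsed.TotalMilliseconds;
    }
}
```

At 100,000 rows per run the bulk copy takes well over a second, so the choice of timer will not change the headline numbers much, but Stopwatch removes one source of noise.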

What I am asking now is: given our sample data and sample table, is this the most we can get out of SQL Server? Or is there something we can do to make it even faster?

Let me know if you need more information about our setup.

Both SSIS and the bcp utility will perform better.

If you want to use SqlBulkCopy, make sure you have covered all the basics: disable indexes and triggers, switch to the SIMPLE or BULK_LOGGED recovery model, take an exclusive (bulk-update) lock on the table, and so on. Also tune the BatchSize property (depending on your data, a value between 100 and 5000 rows may work best), and possibly UseInternalTransaction as well.
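A sketch of how those suggestions map onto the SqlBulkCopy API. The connection string and table name are placeholders, and whether TableLock and any particular BatchSize actually help depends on the data and the database's recovery model:

```csharp
using System.Data;
using System.Data.SqlClient;

static class TunedBulkCopySketch
{
    static void BulkInsert(string connStr, DataTable table)
    {
        // TableLock acquires a bulk-update lock on the destination table,
        // one of the prerequisites for minimally logged bulk inserts.
        var options = SqlBulkCopyOptions.TableLock | SqlBulkCopyOptions.KeepIdentity;

        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();
            using (var bulkCopy = new SqlBulkCopy(conn, options, null))
            {
                bulkCopy.DestinationTableName = "[dbo].[HistorySampleValues]";
                bulkCopy.BatchSize = 5000;    // tune per your data; the answer suggests 100-5000
                bulkCopy.BulkCopyTimeout = 0; // disable the timeout for large loads
                bulkCopy.WriteToServer(table);
            }
        }
    }
}
```

Note that SqlBulkCopyOptions.UseInternalTransaction (each batch committed in its own transaction) cannot be combined with an external SqlTransaction like the one in the question's code; it is one or the other.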

Here is a link to MSDN that should help:


