# Google Analytic Tracker


Jul 11, 2011

Profiling Tool Can be Deceiving

Last week in my sprint, I was assigned to improve the loading performance of one of our company's products. The first thing that came to mind was the Visual Studio 2010 profiler, which is much easier to use and understand compared with the previous version.

First, I measured the wall time for how long the application takes to save the data. To ensure consistent test results, I always generate a new database with no data in it. The save process took about 3:18. Based on experience, it is important to know the wall time before doing any optimization, so I ran this test a number of times to make sure I got consistent results.
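For reference, here is a minimal sketch of how the wall time can be measured with a Stopwatch (SaveData() below is only a stand-in name for the real product call):

Code Snippet

using System;
using System.Diagnostics;

class WallTimeCheck
{
    static void Main()
    {
        Stopwatch stopwatch = Stopwatch.StartNew();

        SaveData();   // stand-in for the real save against a freshly generated, empty database

        stopwatch.Stop();
        Console.WriteLine("Save took " + stopwatch.Elapsed);   // e.g. 00:03:18
    }

    static void SaveData()
    {
        // Placeholder so the sketch compiles; the real call is the product's save routine.
        System.Threading.Thread.Sleep(1000);
    }
}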

However, knowing the wall time doesn't help find the bottleneck. This is where the Visual Studio profiler becomes very useful.

The two basic profiling options I usually use are "Sampling" and "Instrumentation". When I first started using a profiler (JetBrains dotTrace), I always used "Instrumentation" because I thought it was the most accurate way to get detailed results. Over time, I started using Sampling. Here are the pros and cons:

Instrumentation
Pro: Most accurate measurement; it goes through the code function by function to see how long each call takes.
Con: Extremely slow.

Sampling
Pro: Fast; it doesn't slow down your application's runtime much.
Con: You don't get detailed results; you only get the number of times each function was hit.

To my understanding, sampling is generally as good as instrumentation. Sampling works by periodically checking where your program is currently executing. If your program runs through a line of code a lot, or waits at a line for too long, the sampler will hit it a lot. That makes those functions likely candidates for optimization, because your program spends a lot of its time in them.
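Just to illustrate the idea, here is a toy sketch of sampling. This is not how the Visual Studio sampler actually works internally (it samples real call stacks); this toy just tallies a region name that the code reports:

Code Snippet

using System;
using System.Collections.Concurrent;
using System.Threading;

static class ToySampler
{
    static volatile string currentRegion = "idle";
    static readonly ConcurrentDictionary<string, int> hits = new ConcurrentDictionary<string, int>();

    static void Main()
    {
        // Sample the current region roughly every 10 ms.
        using (Timer timer = new Timer(_ => hits.AddOrUpdate(currentRegion, 1, (k, v) => v + 1), null, 0, 10))
        {
            currentRegion = "FastWork";
            Thread.Sleep(200);     // stand-in for quick work

            currentRegion = "SlowWork";
            Thread.Sleep(2000);    // stand-in for the bottleneck

            currentRegion = "idle";
        }

        foreach (var entry in hits)
            Console.WriteLine(entry.Key + ": " + entry.Value + " hits");

        // SlowWork collects roughly ten times the hits of FastWork,
        // which is exactly the kind of hint a sampling report gives you.
    }
}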

Anyway, here is what I got:

[Screenshot: profiler sampling results]

Digging through the data, I noticed the following:

[Screenshot: lines with high sample counts]

Based on the above information, these lines were taking too much time. After inspecting the code to see how this function is used, I realized it is unnecessary to create a new DataRow object for a data row that isn't deleted or detached. I could simply return the same DataRow.

The sole purpose of the above code is to allow the program to access a deleted data row and convert it into a data access object. Here is the code that is using it:

Code Snippet

/// <summary>
/// Converts a data row to a data access object.
/// </summary>
/// <param name="dataRow">Data row to convert.</param>
/// <returns>Newly allocated data access object.</returns>
public override sd_TripPointData ToDataAccess(DataModel.sch_TripPointRow dataRow)
{
    dataRow = CreateCorrespondingDataRowVersion(dataRow);

    return new sd_TripPointData(
        dataRow.TripPointKey,
        dataRow.IsTripKeyNull() ? null : dataRow.TripKey,
        dataRow.TripPointID,
        dataRow.ScheduleKey,
        dataRow.IsPatternPointKeyNull() ? null : dataRow.PatternPointKey,
        dataRow.ArriveTime,
        dataRow.DepartTime,
        dataRow.IsHoldTimeNull() ? int.MaxValue : dataRow.HoldTime,
        dataRow.IsMobilityKeyNull() ? null : dataRow.MobilityKey);
}
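As a side note, the reason the "Original" row version is needed at all is that ADO.NET won't let you read a deleted row's current values; you have to ask for the original version explicitly. A small standalone sketch (not product code) shows the behaviour:

Code Snippet

using System;
using System.Data;

class DeletedRowDemo
{
    static void Main()
    {
        DataTable table = new DataTable();
        table.Columns.Add("TripPointID", typeof(int));

        DataRow row = table.Rows.Add(42);
        table.AcceptChanges();   // row is now Unchanged and has an Original version

        row.Delete();            // RowState == Deleted

        try
        {
            Console.WriteLine(row["TripPointID"]);   // throws DeletedRowInaccessibleException
        }
        catch (DeletedRowInaccessibleException)
        {
            // The original version is still reachable.
            Console.WriteLine(row["TripPointID", DataRowVersion.Original]);   // prints 42
        }
    }
}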

 

So I converted the code to just return the same DataRow if its RowState is not marked as Deleted or Detached.

Here is my modified version, which turned out to be slower than the original:

Code Snippet

public static DataRow CreateCorrespondingDataRowVersion(this DataRow dataRow)
{
    DataRow newTDataRowType = dataRow;

    // Use the original version if the row is deleted or detached
    // and an original version is available.
    if ((dataRow.RowState == DataRowState.Deleted || dataRow.RowState == DataRowState.Detached)
        && dataRow.HasVersion(DataRowVersion.Original))
    {
        // Generate a new data row with the corresponding version.
        // Remark: you cannot access dataRow.ItemArray.Length for a deleted row.
        newTDataRowType = dataRow.Table.NewRow();
        for (int i = 0; i < dataRow.Table.Columns.Count; i++)
        {
            newTDataRowType[i] = dataRow[i, DataRowVersion.Original];
        }
    }

    return newTDataRowType;
}

Using the profiler, I no longer saw CreateCorrespondingDataRowVersion() among the functions with a high hit count.

However, to be sure, I checked the wall time again after my code change. It was unbelievable: I ran it three times, and it took 3:28, 10 seconds slower than before. In disbelief, I reverted my change and tested again. The wall time showed 3:18, the original time.

So where did the increased time come from?

My guess is that a newly created data row is detached from any data table, and accessing a detached data row is faster than accessing a row that is still linked to its data table.
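If I wanted to confirm that guess, a rough micro-benchmark along these lines would do it (a standalone sketch with made-up columns, not the product code; the numbers will vary by machine):

Code Snippet

using System;
using System.Data;
using System.Diagnostics;

class RowAccessBenchmark
{
    static void Main()
    {
        DataTable table = new DataTable();
        table.Columns.Add("Key", typeof(int));
        table.Columns.Add("Name", typeof(string));

        DataRow attached = table.Rows.Add(1, "trip point");

        // Detached copy: built from the table's schema but never added to the table.
        DataRow detached = table.NewRow();
        detached[0] = attached[0];
        detached[1] = attached[1];

        const int iterations = 5000000;
        Console.WriteLine("attached: " + Time(attached, iterations) + " ms");
        Console.WriteLine("detached: " + Time(detached, iterations) + " ms");
    }

    static long Time(DataRow row, int iterations)
    {
        Stopwatch stopwatch = Stopwatch.StartNew();
        object value = null;
        for (int i = 0; i < iterations; i++)
        {
            value = row[0];   // each indexer access goes through the row's version lookup
            value = row[1];
        }
        GC.KeepAlive(value);
        stopwatch.Stop();
        return stopwatch.ElapsedMilliseconds;
    }
}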

In conclusion, when doing performance testing, always measure the wall time before and after the optimization.
