We've been running Derbyshire Dot Net for a number of years now and I thought it was about time we started giving an overview of each evening's talks, which is something I'll be looking to do every month from now on.
On 27th October 2016 we hosted Bart Read, who gave a presentation on “How to Speed up .Net and SQL Server Web Apps”. The talk was scenario-based, focussing on how Bart had diagnosed performance problems for a number of his consulting clients. The group was very impressed with Bart's presentation, as he managed to turn what could be quite a dry subject into an entertaining talk.
SQL Query Performing Slowly
The first scenario Bart introduced was that of a Customer Support Centre application. It was a traditional 3-tiered architecture (ASP.NET, NHibernate, and SQL Server) in which the SQL Server database had a linked server that was accessed via a SQL synonym.
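For anyone unfamiliar with that setup, here's a minimal sketch of what it might look like; the server, database, and table names are all hypothetical, and the linked server registration is simplified:

```sql
-- Register a linked server (simplified; real setups usually also
-- configure security and provider options).
EXEC sp_addlinkedserver @server = N'RemoteLinkedServer';

-- A synonym lets the application query the remote table as if it
-- were a local one, hiding the four-part name.
CREATE SYNONYM dbo.Customers
FOR RemoteLinkedServer.CrmDb.dbo.Customers;

-- Application code can now write queries against dbo.Customers.
SELECT TOP (10) * FROM dbo.Customers;
```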
First, he took the application server out of the load balancer so that he could profile it in isolation without affecting production systems. Then, using ANTS Performance Profiler, he was able to establish that a particular SQL query was taking upwards of 40 seconds to complete.
Now that he had identified the badly performing SQL query, he needed the actual parameters that were used. The individual parameter values can influence the performance of the system due to indexing strategies, how SQL Server builds its statistics, and many other variables. He ran Microsoft's SQL Server Profiler tool, which allows you to collect all commands sent to SQL Server, and recommended that you filter by Application Name (as defined in the connection string in web.config), DatabaseName, and potentially where the duration is greater than a defined threshold. The suggestion was made that you capture the following events in SQL Profiler:
- SQL:BatchStarting
- SQL:BatchCompleted
- RPC:Completed
A really useful tip from this section of the talk was that there is the option to save the profile to a ‘Trace Table’. The trace table is just a standard database table that the profiler writes to, which gives you finer-grained control over filtering the result set because you can run SQL queries against it.
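For illustration, a query against such a trace table might look something like this; the table name and filter values are hypothetical, and note that trace tables store Duration in microseconds:

```sql
-- Find the slowest captured statements for our application.
SELECT TextData, ApplicationName, DatabaseName, Duration, StartTime
FROM dbo.SlowQueryTrace                     -- hypothetical saved trace table
WHERE ApplicationName = 'SupportCentreWeb'  -- as set in the connection string
  AND Duration > 5000000                    -- microseconds, i.e. over 5 seconds
ORDER BY Duration DESC;
```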
At this point Bart explained that in some cases your application may run really slowly, but when you run the same SQL statement in SQL Server Management Studio it runs extremely quickly. This is down to how SQL Server creates execution plans and caches them. SQL Server takes a query and, using statistics it has collected over time, determines the most efficient query execution plan. This plan is then stored in a cache which is implemented as a hash table. The hash key is made up of a number of factors, including the text of the query to be executed. If the query is formatted differently (e.g. has a line break) then it will produce a different hash value and will not find the plan in the cache.
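You can see this for yourself with the plan cache DMVs. In this sketch (the table name is hypothetical) two queries that differ only in formatting end up as separate ad hoc cache entries:

```sql
-- Two logically identical queries that differ only in a line break.
SELECT * FROM dbo.Orders WHERE OrderId = 42;
SELECT * FROM dbo.Orders
WHERE OrderId = 42;

-- Inspect the plan cache: expect a separate ad hoc entry per text variant.
SELECT st.text, cp.usecounts
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%FROM dbo.Orders%'
  AND st.text NOT LIKE '%dm_exec_cached_plans%';  -- exclude this query itself
```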
After this he explained that he went on to analyse the actual execution plan and found a query filtering a table on the linked server. However, because the table lived on the linked server rather than in the current database, all 600,000 records were being retrieved across the network for each request before the filter was applied. The accepted solution in this case was to create a table-valued function on the linked server, so that the filtering was done on the remote database and only the matching rows were sent back.
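As a rough sketch of that approach (all names are hypothetical, and note that remote table-valued functions can't be invoked via four-part names, so a parameterised pass-through EXEC ... AT is one way to call them):

```sql
-- On the linked (remote) server: wrap the filter in an inline TVF so
-- the predicate is applied before any rows leave that server.
CREATE FUNCTION dbo.fn_CustomersByRegion (@RegionId INT)
RETURNS TABLE
AS
RETURN
    SELECT CustomerId, Name, RegionId
    FROM dbo.Customers
    WHERE RegionId = @RegionId;
GO

-- On the local server: a pass-through call, so only the matching
-- rows come back over the wire.
EXEC ('SELECT * FROM dbo.fn_CustomersByRegion(?)', 42) AT RemoteLinkedServer;
```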
Diagnosing a .Net Memory Leak
The second scenario was for the same client, but this time they had a .Net memory leak that they had been unable to locate. Here Bart used the ANTS Memory Profiler tool, which allowed him to take snapshots of memory before and after the offending operation. In this particular scenario it was identified that the client was using Castle Windsor dependency injection with a transient lifestyle when they should have been using a per-web-request lifestyle. However, this part of the talk was probably the highlight, due to a detailed but clearly understandable explanation of .Net garbage collection using the mark and sweep algorithm.
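For context, that fix is essentially a one-line change at registration time. A minimal sketch assuming Castle Windsor's fluent registration API (the service types here are hypothetical, and PerWebRequest also needs Windsor's HTTP module registered in web.config):

```csharp
using Castle.MicroKernel.Registration;
using Castle.Windsor;

public interface ITicketService { }
public class TicketService : ITicketService { }

public static class ContainerSetup
{
    public static IWindsorContainer Build()
    {
        var container = new WindsorContainer();

        // Leaky version: a fresh instance per Resolve. Windsor tracks
        // transient components, so instances that are never Release()d
        // accumulate and memory grows.
        // container.Register(Component.For<ITicketService>()
        //                             .ImplementedBy<TicketService>()
        //                             .LifestyleTransient());

        // Fixed version: one instance per HTTP request, released
        // automatically when the request ends.
        container.Register(Component.For<ITicketService>()
                                    .ImplementedBy<TicketService>()
                                    .LifestylePerWebRequest());
        return container;
    }
}
```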
One of the key takeaways from this section was how misused the GC.Collect() method is: forcing a collection promotes surviving Gen 0 objects into Gen 1 (and Gen 1 survivors into Gen 2), where they are collected far less often, so calling it will likely slow the application down rather than speed it up.
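You can observe that promotion directly with GC.GetGeneration(); a tiny sketch:

```csharp
using System;

class GcPromotionDemo
{
    static void Main()
    {
        var survivor = new byte[1024];                  // still referenced, so it survives collections
        Console.WriteLine(GC.GetGeneration(survivor));  // 0: freshly allocated

        GC.Collect();                                   // forced collection promotes the survivor
        Console.WriteLine(GC.GetGeneration(survivor));  // 1

        GC.Collect();
        Console.WriteLine(GC.GetGeneration(survivor));  // 2: now only rare full collections reclaim it
    }
}
```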
…and the rest
The following few scenarios weren't covered in quite as much depth, but touched on deadlocking, network latency, and problems caused by unnecessary caching in the code. The talk finished with a roundup of browser-based performance, focussing on how the download size of JavaScript frameworks (Angular, React et al.) can affect page load times. It was also noted that JavaScript is subject to the same garbage collection rules mentioned earlier, and that Google Chrome's built-in developer tools can take memory profile snapshots.
All in all this was a very informative talk and I'd have no qualms recommending it to other user groups. The slides are available for download at http://www.slideshare.net/bartread/longer-version-2-x-45-mins-with-break-how-to-speed-up-net-and-sql-server-web-apps. If you have any questions please feel free to get in touch with Bart ([email protected]). Bart is available for performance consultancy gigs, should you find yourself in need.