I wanted to make a MiniProfiler dashboard that would show stats of what we have profiled. Of course, to have good stats, the metrics need to be saved in a database. I have studied your code to get an idea of how I could do that, and I have a few questions for you.
Firstly, I wanted to extend your IStorage interface and use the code that was already there, but I saw one major problem (or maybe I missed something). The problem I see is that when you use, for example, SqlServerStorage, the executed code is not async. I already saw this problem with your Sqlite demo: the web pages are really slow to show on my computer because of the storage mechanism. Since I don't want to slow down the user just because I'm profiling, this is a no-go in that direction.
So my thought was to make another storage mechanism that would be async. When the profiler does its job, it would also call a second storage module to permanently store the data somewhere without slowing down the user. But then I said to myself, "why duplicate the storage logic only to get what I need?" Well, this is where I think you could help me!
Like I said, maybe I missed something, but I think your storage module should be revised. I think that natively, the storage used to show the profiled metrics to the user should be really quick: MemoryStorage and other quick storage like that. If we want persistent storage, then another module would do that job in async mode: a database and the like.
So to summarize, we might need 2 storage modules: one quick store that backs the MiniProfiler UI, and one persistent store that is written to asynchronously.
I have no experience with async functions, so maybe there's an easier solution. I would like to hear your thoughts if possible.
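To make the two-module idea concrete: MiniProfiler itself is .NET, but the shape of the proposal is language-agnostic, so here is a minimal sketch in Python. All the names here (`DualStorage`, `persistent_store`, the `save`/`load` methods) are hypothetical stand-ins, not MiniProfiler's actual API: the UI reads from a fast in-memory store, while a background thread drains persistent writes so the request path never waits on the database.

```python
import queue
import threading

class DualStorage:
    """Illustrative sketch (not MiniProfiler's real interface): serve the
    profiler UI from a fast in-memory store, and hand persistent writes
    to a background thread so requests never wait on the database."""

    def __init__(self, persistent_store):
        self._memory = {}                        # quick store backing the UI
        self._persistent = persistent_store      # slow store (database and the like)
        self._pending = queue.Queue()
        worker = threading.Thread(target=self._drain, daemon=True)
        worker.start()

    def save(self, profile_id, result):
        self._memory[profile_id] = result        # synchronous, in-memory: cheap
        self._pending.put((profile_id, result))  # persisted later, off the request path

    def load(self, profile_id):
        return self._memory.get(profile_id)      # the UI only reads the quick store

    def _drain(self):
        while True:
            profile_id, result = self._pending.get()
            self._persistent.save(profile_id, result)  # slow write; user unaffected
            self._pending.task_done()
```

The point of the sketch is just the split: the user-facing path touches only memory, and the persistent write happens on a thread the user never waits for.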
I would also like to see a good dashboard with some powerful and meaningful metrics; this would indeed be very useful for profiling applications in production use.
I have reviewed the demo with SqlServerStorage and I feel there should be no need to implement a completely new storage module; if we can use the existing module, that would be great. I see it uses SQLite but is also able to store results into MSSQL as well.
So the tasks that need to be done may be..
And then a final layer of useful analytics that helps find the slowest queries and offers some useful metrics with important information.
What do you think?
Hi Alex, I would like you to see this:
The reason why I talked about 2 storage providers is that I had a few problems with the demo and Sqlite. Everything worked, but the reads/writes to Sqlite were pretty slow. So in the MiniProfiler GUI I had something like 300 ms to generate my web page, but in fact I had to wait 4-5 seconds before the page showed up.
This is the reason why I thought it would be better to have 2 storage providers: one for quick access and managing the MiniProfiler GUI, and one for long-term storage that would be managed asynchronously. So if it takes 2 seconds to write to your database, the user will not be affected.
The guy who built the mini-profiler dashboard should have the same problem too. Maybe his database is quick enough that the user isn't affected by the time it takes to write to it, but surely he has to wait for the database to do its job before the page is rendered. I will send him an email to find out more about that.
Anton Vishnyak here. I'm the one who created the Mini-Profiler Dashboard. I did not run into as many performance issues as you guys seem to be having, because we're using MS SQL Server at JobSeriously and it is pretty fast at handling these quick writes.
That being said, you are going to face a few challenges with your mission. If you spin off the database write in an async fashion on every request, you will be stealing threads that IIS needs to handle client requests. Effectively, you cut your "capacity" in half, because at any one time half your threads may be writing to the database.
If you batch things up in volatile memory, there is no guarantee that IIS won't recycle your app pool in between. That means you will face some data loss in an unpredictable fashion. However, if that is OK with you, you could use the HttpCache construct built into ASP.NET. Create one HttpCache key and use it as your session storage provider. Set the cache to time out every hour or so and handle the cache timeout event in your Global.asax. At that point you can write everything batched up in the cache to your database.
Kind Regards, Anton Vishnyak
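The batching approach Anton describes above (accumulate in memory, flush everything on a timeout, accept that a process recycle loses the buffer) can be sketched generically. This is not ASP.NET code — HttpCache and the cache-expiry callback are .NET-specific — so the sketch below uses a plain timer as a stand-in for the expiry event, and `BatchedWriter`/`db_write` are hypothetical names:

```python
import threading

class BatchedWriter:
    """Illustrative sketch of the batch-and-flush-on-timeout idea:
    accumulate profiler results in memory and write them to the database
    in one bulk operation when a timer fires (standing in for the
    ASP.NET cache-expiry callback). As noted above, anything still
    buffered is lost if the process is recycled first."""

    def __init__(self, db_write, flush_interval_seconds=3600):
        self._db_write = db_write          # callable that takes a list of results
        self._interval = flush_interval_seconds
        self._buffer = []
        self._lock = threading.Lock()
        self._schedule_flush()

    def add(self, result):
        with self._lock:
            self._buffer.append(result)    # cheap in-memory append per request

    def flush(self):
        with self._lock:
            batch, self._buffer = self._buffer, []
        if batch:
            self._db_write(batch)          # one bulk write instead of many small ones

    def _schedule_flush(self):
        timer = threading.Timer(self._interval, self._on_timeout)
        timer.daemon = True
        timer.start()

    def _on_timeout(self):
        self.flush()
        self._schedule_flush()             # re-arm for the next interval
```

The trade-off is exactly the one Anton states: request-time cost drops to an in-memory append, in exchange for bounded, unpredictable data loss on recycle.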
I understand that storage to MSSQL can be fast (and to other storage too). But in theory, it is possible that you face a slow insert or read from the database at some point in the life of your application, and the user can be impacted by that.
It would be great to have Sam's opinion on how he would solve that.
Yeah, I am not a fan of adding yet another interface. A simple way to achieve this would be writing a redis-based storage: it is lightning fast at writing. In fact, even SQL CE would possibly be fast enough at writing. Sqlite is notoriously problematic with concurrent writes.
Any solution that defines an "ephemeral, machine-tied" store is very risky when it comes to server farms. They just don't work with load balancing.
At SO we used redis; it scaled fine there.
Do you have an idea how we could implement something that will not hurt client performance? Redis might be a good choice, but not everyone will have it in their architecture. It's good when we have the choice to store the data where we want.
A service that queues the reads/writes to the database? Using async? Someone in this thread said this is not a good idea, but I didn't completely understand why.
What would be your approach? Because, like I said, the database can be quick enough to be invisible to the user's wait time, but there's a risk of a slowdown here and there.
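For what it's worth, the "service that queues the writes" idea and the thread-capacity objection raised earlier are not necessarily in conflict: the objection was about spinning off an async write per request, which borrows a pool thread each time. A single dedicated writer thread behind a bounded queue avoids that, and bounding the queue means a slow database sheds profiling data instead of slowing users or growing memory. A hypothetical sketch (`WriteBehindQueue` and `db_write` are made-up names, not anything in MiniProfiler):

```python
import queue
import threading

class WriteBehindQueue:
    """Hypothetical sketch of a queued write-behind service. One
    dedicated writer thread drains the queue, so request threads are
    never borrowed for database work (the capacity concern raised in
    this thread), and the queue is bounded so a slow database cannot
    pile up unlimited memory: when full, new profiles are dropped
    rather than making the user wait."""

    def __init__(self, db_write, max_pending=1000):
        self._queue = queue.Queue(maxsize=max_pending)
        self._db_write = db_write
        threading.Thread(target=self._writer_loop, daemon=True).start()

    def enqueue(self, result):
        try:
            self._queue.put_nowait(result)   # never blocks the request thread
            return True
        except queue.Full:
            return False                     # shed load: lose a profile, not a user

    def _writer_loop(self):
        while True:
            result = self._queue.get()       # the only thread touching the db
            self._db_write(result)
            self._queue.task_done()
```

The design choice is the drop policy: profiling data is usually expendable, so under pressure it is better to discard a sample than to block a request or exhaust memory.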