
Windows Azure WebSites and Cloud Services Slow on First Request

The default configuration for Windows Azure Websites and Cloud Services is to unload your application if it has not been accessed for a certain amount of time. It makes a lot of sense for Microsoft to do this, as they save resources by stopping infrequently accessed sites.

As the owner of one of these sites, whether it’s hosted via a cloud service or as a simple Azure Website, it’s a pretty annoying feature: the first user accessing your site after it has been unloaded will experience a load time of 30 seconds or more, which by today’s standards is totally unacceptable.

Luckily, there is a solution.

If your Azure Website is on the basic plan or better, the solution is a matter of switching on the always-on feature in the configuration of your site.
[Screenshot: the Always On setting in the site’s configuration]

Update: The following method, having a continuously running job ping the site every 5 minutes to keep it warm, no longer works on a free site. Even the jobs get shut down if the site is idle for more than 20 minutes, and apparently the idle-detection mechanism is smart enough to filter out requests made in the proposed way. I have posted an update to this article that lets you avoid the timeout after 20 minutes.

Unfortunately, the always-on feature is only available to sites on the Basic plan, so if you are running a free or shared site, you have to look elsewhere for a solution. What the always-on feature does is simply ping your site every now and then to keep the application pool up and running. This functionality is easy to mimic, so you can have an always-on site on the free plan.

The way I chose to do it for my site http://statsofpoe.azurewebsites.net is to use the new Web Jobs feature, which is in preview right now. The Web Jobs feature lets you run a script or executable as a continuously running job inside a website. What I did was build a small executable that, inside a never-ending while loop, accesses the front page of my website every five minutes. This way my application pool is never unloaded and my site is always ready to serve users.
My code looks like this:
[csharp]
using System;
using System.Configuration;
using System.Diagnostics;
using System.Net.Http;
using System.Threading.Tasks;

namespace SJKP.AzureKeepWarm
{
    class Program
    {
        static void Main(string[] args)
        {
            var runner = new Runner();
            var siteUrl = ConfigurationManager.AppSettings["SiteUrl"];
            var waitTime = int.Parse(ConfigurationManager.AppSettings["WaitTime"]);

            // Blocks forever; HitSite loops until the web job is stopped.
            Task.WaitAll(runner.HitSite(siteUrl, waitTime));
        }

        private class Runner
        {
            private HttpClient client = new HttpClient();

            public async Task HitSite(string siteUrl, int waitTime)
            {
                while (true)
                {
                    try
                    {
                        // Request the front page to keep the app pool warm.
                        var response = await client.GetAsync(new Uri(siteUrl));
                        Trace.TraceInformation("{0}: {1}", DateTime.Now, response.StatusCode);
                    }
                    catch (Exception ex)
                    {
                        Trace.TraceError(ex.ToString());
                    }
                    await Task.Delay(waitTime * 1000); // waitTime is in seconds
                }
            }
        }
    }
}
[/csharp]
And here’s my configuration file with a wait time of 300 seconds (5 minutes):
[xml]
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
  </startup>
  <appSettings>
    <add key="SiteUrl" value="http://statsofpoe.azurewebsites.net/"/>
    <add key="WaitTime" value="300"/>
  </appSettings>
  <system.diagnostics>
    <trace autoflush="true">
      <listeners>
        <add name="configConsoleListener"
             type="System.Diagnostics.ConsoleTraceListener" />
      </listeners>
    </trace>
  </system.diagnostics>
</configuration>
[/xml]
One thing I noticed: when you upload the zip file containing the files for your Web Job, be sure that the files are directly in the root of the zip and not in a subfolder, as the upload will otherwise fail.
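If you script the packaging, a small sketch like the following avoids that pitfall; it zips the *contents* of a folder rather than the folder itself by passing `includeBaseDirectory: false` to `ZipFile.CreateFromDirectory`. The paths and the config file are made up here so the sketch is self-contained; point it at your own build output instead.

```csharp
using System;
using System.IO;
using System.IO.Compression; // reference System.IO.Compression.FileSystem on .NET 4.5

class PackWebJob
{
    static void Main()
    {
        // The folder holding the web job's files (exe + config). These
        // example paths are fabricated so the sketch runs on its own.
        string jobDir = Path.Combine(Path.GetTempPath(), "webjobfiles");
        Directory.CreateDirectory(jobDir);
        File.WriteAllText(Path.Combine(jobDir, "SJKP.AzureKeepWarm.exe.config"), "<configuration/>");

        string zipPath = Path.Combine(Path.GetTempPath(), "webjob.zip");
        File.Delete(zipPath); // CreateFromDirectory fails if the target already exists

        // The final 'false' (includeBaseDirectory) zips the folder's contents,
        // so the files land directly in the zip root, which the upload expects.
        ZipFile.CreateFromDirectory(jobDir, zipPath, CompressionLevel.Optimal, false);

        using (var zip = ZipFile.OpenRead(zipPath))
            foreach (var entry in zip.Entries)
                Console.WriteLine(entry.FullName); // no folder prefix on the names
    }
}
```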

Avoid automatic recycle of Azure Cloud Services Web Role

If you have a web role, it will suffer from the same problem as Azure Websites. This is not due to Microsoft trying to save money, though, but simply due to the default configuration of an IIS application pool, which has an idle timeout of 20 minutes. So if you don’t change this and your site is not accessed for 20 minutes, IIS will automatically shut down the worker process, resulting in long load times for the first user to access the site afterwards. Another default you might as well disable is the periodic application pool recycle, which happens every 1740 minutes (29 hours).

The simplest way I have found to change this is to include a script with your package, configured to run as a startup task every time the role is restarted.
For this to work, include the following script in a Startup.cmd file placed in a folder called Startup in your web role project.
[code]
REM *** Prevent the IIS app pools from shutting down due to being idle.
%windir%\system32\inetsrv\appcmd set config -section:applicationPools -applicationPoolDefaults.processModel.idleTimeout:00:00:00

REM *** Prevent IIS app pool recycles from recycling on the default schedule of 1740 minutes (29 hours).
%windir%\system32\inetsrv\appcmd set config -section:applicationPools -applicationPoolDefaults.recycling.periodicRestart.time:00:00:00
[/code]
Set the Copy to Output Directory property to Copy always so the script becomes part of the package.
In the ServiceDefinition.csdef for your Azure Cloud Service project you have to add the lines
[xml]
<Startup>
  <Task commandLine="Startup\Startup.cmd" executionContext="elevated" />
</Startup>
[/xml]
to ensure that the Startup.cmd script is called. The WebRole part of my ServiceDefinition.csdef file looks like the following:
[xml]
<WebRole name="StatsOfPoE.WebRole" vmsize="ExtraSmall">
  <Sites>
    <Site name="Web">
      <Bindings>
        <Binding name="Endpoint1" endpointName="Endpoint1" />
      </Bindings>
    </Site>
  </Sites>
  <Endpoints>
    <InputEndpoint name="Endpoint1" protocol="http" port="80" />
  </Endpoints>
  <Imports>
    <Import moduleName="Diagnostics" />
  </Imports>
  <ConfigurationSettings>
    <Setting name="Microsoft.ServiceBus.ConnectionString" />
    <Setting name="Microsoft.ServiceBus.QueueName" />
    <Setting name="Microsoft.ServiceBus.HighPriorityQueueName" />
    <Setting name="DataConnectionString" />
    <Setting name="RefreshTime"/>
  </ConfigurationSettings>
  <Startup>
    <Task commandLine="Startup\Startup.cmd" executionContext="elevated" />
  </Startup>
</WebRole>
[/xml]
When this is done and you redeploy your solution, it will no longer automatically recycle or shut down. Good stuff.

Categories: Software

Simon J.K. Pedersen

Replies

  1. Pinging Azure web apps periodically could also solve the issue effectively. CloudUp provides a simple and free service for this purpose.

  2. Hi Howard,
    I don’t think I did it for any particular reason; I was just trying to get more familiar with the async patterns. I haven’t investigated exactly how the Azure jobs are handled by Azure, but it could potentially be beneficial to use async if you have many jobs running that do a lot of waiting. But if you only have this one job, it doesn’t matter.

  3. Hi Simon, me again. I am learning async and await and I had another question about your code. One question relates to the execution of this line:

    var request = await client.GetAsync(new Uri(siteUrl));

    I assumed that because of the “await” that this line would execute async and proceed to the next line immediately, but when I fed in siteUrls to larger and larger pages, the delay got longer as the page size was bigger, and the next line wasn’t executed until after that page was downloaded. What am I missing?

    My second question which is related is about the line:

    await Task.Delay(waitTime * 1000);

    What is the difference between awaiting a delay of x period of time vs. just implementing a synchronous delay? Because when that line executes, just as with the previous line, all program execution appears to stop until that wait is over.

    Thanks in advance…

  4. Hi howard,

    The difference between using the async/await pattern instead of doing it synchronously is how well resources are utilized.

    In web programming, threads are usually a limited resource, so with traditional synchronous programming you hold on to the thread you have been assigned until you are done. With the async pattern you give the thread back to the thread pool while you are awaiting the response (in the HttpClient example). This means that while you are waiting for the remote server to return data, that thread can potentially serve other requests, so you get better overall performance from your web application. Again, in this particular code, the performance benefit is probably nonexistent.

    If you want a good example of when it matters then take a look at this blog post: http://johnring.me/?p=222
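    To make the overlap part of this concrete, here is a minimal, self-contained sketch (not part of the original reply; the 200 ms timings are purely illustrative). It starts two delays before awaiting either, so they run overlapped, and while they are pending no thread is blocked:

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

class AwaitDemo
{
    // Starts two 200 ms "requests" (simulated with Task.Delay) before
    // awaiting either, so they run overlapped: total time is roughly
    // 200 ms, not 400 ms. While the delays are pending, no thread is
    // blocked; the continuation runs on a pool thread afterwards.
    public static async Task<long> TwoDelaysAsync()
    {
        var sw = Stopwatch.StartNew();
        Task first = Task.Delay(200);
        Task second = Task.Delay(200); // started before 'first' completes
        await Task.WhenAll(first, second);
        return sw.ElapsedMilliseconds;
    }

    static void Main()
    {
        long ms = TwoDelaysAsync().GetAwaiter().GetResult();
        Console.WriteLine(ms);
    }
}
```

    A synchronous version calling Thread.Sleep(200) twice would take roughly 400 ms and would hold on to its thread for the whole duration.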

  5. The Azure website says…
    “As of March 2014, websites in Free mode can time out after 20 minutes if there are no requests to the scm (deployment) site and the website’s portal is not open in Azure. Requests to the actual site will not reset this.”

    Does this affect your solution to make sure the application pool is never unloaded?

  6. Hi Simon,
    Great post. Had a quick question: if we have a web role with 2 instances, then we have two instances of IIS running, right?

    I am trying to figure out if our traffic is enough to not need the script.

  7. Hi Frank,

    I must admit that I’m no longer using the script, as I got a lot of free Azure credit through my BizSpark subscription. But I still have a free site running https://statsofpoe.azurewebsites.net/ which was the site I originally made the script for. It feels fast and responsive when I access it every now and then, but I will do some testing to check that this is in fact due to the script.

    Hi Nate,
    If you are using a cloud service with web roles (one or more doesn’t really matter), then I would just recommend changing the default IIS settings to avoid the unload when the site is not accessed, instead of using the polling script, which is a clunkier solution.

  8. Thanks a bunch for this post! We have also been noticing this initial latency with Mobile Services, so this is really helpful to have in mind.

  9. Great post, thank you. This is exactly what I was looking for. Regarding the Web Roles and the second recommendation about disabling the recycling after 29 hours: I am curious to know, as with anything good, there is always a trade-off. With that said, what are the benefits of doing this, and what are the possible adverse effects? Disabling the idling makes sense; I want snappy results at any given moment. But the recycling must be there for a positive reason. Could anyone shed light? Thanks again!

  10. The recycling is there for historical reasons, as far as I know. If you have components leaking memory, it is good to have your app pool recycle every once in a while to reclaim the memory. You could argue that your application shouldn’t be leaking memory, but apparently Microsoft felt there are so many badly written components out there that they made recycling the default, instead of dealing with people complaining about their web apps running out of memory, a complaint Microsoft would probably be the target of, even though it is caused by 3rd-party components. (I don’t agree with that, but maybe things were different back when they put the setting into IIS.)

  11. Hi Simon,

    I am a long-time ASP.NET developer but new to Azure/MVC, and I have a site showing this same 30-second lag between pages, even though I am on a P1 Premium Small environment with 1 core/1.75 GB RAM and scale out set to 2. By FTPing to the site I have verified my pages are compiled into a single DLL. I have turned on the Always On setting you suggested above, but to no avail. I am not maxing out on memory or CPU. Do you have any ideas of things I should try?

  12. I am also facing the same performance issue as Steve Mauldin. What is the resolution?

  13. Hi Simon

    Thank you for this post.
    I’m also facing the same issue as Steve and appalaraju. Always On is turned on, but the first requests are very slow (more than 30 seconds).

    I noticed the first slow request is not the first view but the first request that calls the database. Maybe a similar problem exists with the database server, but I didn’t find anything about that.

    Thank you for reading

  14. Generally, .NET applications don’t start fast if they have to load a lot of dependencies into the app domain. Furthermore, if you e.g. use Entity Framework and database initializers, that can slow down the startup even more.

    Best practice is to use a staging slot for deployment and warm it up before swapping it into production; that way the first slow request doesn’t hit an end user. I should probably do a blog post on the topic.

  15. Good post. It saved my day. The Always On option is available in the Basic plan and higher. Please update the post.

  16. Great article. Things may have changed since this article was written but you may need to include an “EXIT /B 0” at the end of startup.cmd or your Cloud Service may hang on startup.
