Windows Azure WebSites and Cloud Services Slow on First Request

The default configuration for Windows Azure Websites and Cloud Services is to unload your application if it has not been accessed for a certain amount of time. It makes a lot of sense for Microsoft to do this, as they save resources by stopping infrequently accessed sites.

As the owner of one of these web sites, whether it's hosted via a cloud service or as a simple Azure Website, it's a pretty annoying feature: the first user accessing your site after it has been unloaded will experience a load time of 30 seconds or more, which by today's standards is totally unacceptable.

Luckily, there is a solution.

If your Azure Website is on the standard plan, the solution is a matter of switching on the always-on feature in the configuration of your site.
[Screenshot: the Always On setting in the site configuration]

Update: The following method of having a continuously running job ping the site every 5 minutes to keep it warm no longer works on a free site. Even the jobs get shut down if the site is idle for more than 20 minutes, and apparently the idle-detection mechanism is smart enough to filter out requests made in the proposed way.
Unfortunately, this feature is only available to sites on the standard plan, so if you are running a free or shared site, you have to look elsewhere for a solution. What the always-on feature does is simply ping your site every now and then to keep the application pool up and running. This functionality is easy to mimic, so you can have an always-on site on the free plan.

The way I chose to do it for my site http://statsofpoe.azurewebsites.net is to use the new Web Jobs feature, which is in preview right now. Web Jobs lets you run a script or executable as a continuously running job inside a website. What I did was build a small executable that, in a never-ending while loop, accesses the front page of my website every five minutes. This way my application pool is never unloaded and my site is always ready to serve users.
My code looks like this:

using System;
using System.Configuration;
using System.Diagnostics;
using System.Net.Http;
using System.Threading.Tasks;

namespace SJKP.AzureKeepWarm
{
    class Program
    {
        static void Main(string[] args)
        {
            var runner = new Runner();

            // Read the target URL and the interval (in seconds) from app.config.
            var siteUrl = ConfigurationManager.AppSettings["SiteUrl"];
            var waitTime = int.Parse(ConfigurationManager.AppSettings["WaitTime"]);

            // Block forever on the keep-warm loop.
            Task.WaitAll(runner.HitSite(siteUrl, waitTime));
        }

        private class Runner
        {
            private readonly HttpClient client = new HttpClient();

            public async Task HitSite(string siteUrl, int waitTime)
            {
                while (true)
                {
                    try
                    {
                        // Hit the site and log the status code, so the job's
                        // output shows whether the ping succeeded.
                        var request = await client.GetAsync(new Uri(siteUrl));
                        Trace.TraceInformation("{0}: {1}", DateTime.Now, request.StatusCode);
                    }
                    catch (Exception ex)
                    {
                        Trace.TraceError(ex.ToString());
                    }

                    // Wait the configured number of seconds before the next ping.
                    await Task.Delay(waitTime * 1000);
                }
            }
        }
    }
}

And here’s my configuration file with a wait time of 300 seconds (5 minutes):

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <startup> 
        <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
    </startup>
  <appSettings>
    <add key="SiteUrl" value="http://statsofpoe.azurewebsites.net/"/>
    <add key="WaitTime" value="300"/>
  </appSettings>
  <system.diagnostics>
    <trace autoflush="true">
      <listeners>
        <add name="configConsoleListener"
         type="System.Diagnostics.ConsoleTraceListener" />
      </listeners>
    </trace>
  </system.diagnostics>
</configuration>

One thing I noticed: when you upload the zip file containing the files for your Web Job, make sure the files sit directly in the root of the zip and not in a subfolder, as the upload will otherwise fail.
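For example, with the common Info-ZIP `zip` tool you can create a flat archive like this (a sketch; the bin/Release path is an assumption about your build output location):

```shell
# Create webjob.zip with the files at the root of the archive.
# The -j (junk paths) flag strips the directory prefix, so the
# executable and its config end up directly inside the zip
# rather than under a bin/Release/ subfolder.
zip -j webjob.zip bin/Release/*.exe bin/Release/*.config
```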

Avoid automatic recycle of Azure Cloud Services Web Role

If you have a web role, it will suffer from the same problem as Azure Websites. This is not due to Microsoft trying to save money, though, but simply due to the default configuration of an IIS application pool, which has an idle timeout of 20 minutes. So if you don't change this and your site is not accessed for 20 minutes, IIS will automatically shut down the worker process, resulting in long load times for the first user to access the site afterwards. Another default you might as well disable is the scheduled application pool recycle, which happens every 1740 minutes.

The simplest way I have found to change this is to include a script in your package that is configured to run as a startup task every time the role restarts.
For this to work, place the following script in a startup.cmd file inside a folder called Startup in your web role project.

REM *** Prevent the IIS app pools from shutting down due to being idle.
%windir%\system32\inetsrv\appcmd set config -section:applicationPools -applicationPoolDefaults.processModel.idleTimeout:00:00:00

REM *** Prevent IIS app pools from recycling on the default schedule of 1740 minutes (29 hours).
%windir%\system32\inetsrv\appcmd set config -section:applicationPools -applicationPoolDefaults.recycling.periodicRestart.time:00:00:00

Set the script's Copy to Output Directory property to Copy always so it becomes part of the package.
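If you prefer to edit the project file directly, the Copy always setting corresponds to a csproj entry along these lines (a sketch; the item type and the Startup\Startup.cmd path are assumptions about your project layout):

```xml
<Content Include="Startup\Startup.cmd">
  <CopyToOutputDirectory>Always</CopyToOutputDirectory>
</Content>
```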
In the ServiceDefinition.csdef file for your Azure Cloud Service project you have to add the lines

    <Startup>
      <Task commandLine="Startup\Startup.cmd" executionContext="elevated" />
    </Startup>

to ensure that the Startup.cmd script is called. The WebRole part of my ServiceDefinition.csdef file looks like the following:

  <WebRole name="StatsOfPoE.WebRole" vmsize="ExtraSmall">
    <Sites>
      <Site name="Web">
        <Bindings>
          <Binding name="Endpoint1" endpointName="Endpoint1" />
        </Bindings>
      </Site>
    </Sites>
    <Endpoints>
      <InputEndpoint name="Endpoint1" protocol="http" port="80" />
    </Endpoints>
    <Imports>
      <Import moduleName="Diagnostics" />
    </Imports>
    <ConfigurationSettings>
      <Setting name="Microsoft.ServiceBus.ConnectionString" />
      <Setting name="Microsoft.ServiceBus.QueueName" />
      <Setting name="Microsoft.ServiceBus.HighPriorityQueueName" />
      <Setting name="DataConnectionString" />
      <Setting name="RefreshTime"/>
    </ConfigurationSettings>
    <Startup>
      <Task commandLine="Startup\Startup.cmd" executionContext="elevated" />
    </Startup>
  </WebRole>

When this is done and you redeploy your solution, it will no longer automatically recycle or shut down. Good stuff.

9 thoughts on “Windows Azure WebSites and Cloud Services Slow on First Request”

  1. Pinging Azure web apps periodically could also solve the issue effectively. CloudUp provides a simple and free service for this purpose.

  2. Hi Howard,
    I don’t think I did it for any particular reason – I was just trying to get more familiar with the async patterns. I haven’t investigated exactly how Azure handles Web Jobs, but potentially it could be beneficial to use async if you have many jobs running that do a lot of waiting. But if you only have this one job, it doesn’t matter.

  3. Hi Simon, me again. I am learning async and await and I had another question about your code. One question relates to the execution of this line:

    var request = await client.GetAsync(new Uri(siteUrl));

    I assumed that because of the “await” that this line would execute async and proceed to the next line immediately, but when I fed in siteUrls to larger and larger pages, the delay got longer as the page size was bigger, and the next line wasn’t executed until after that page was downloaded. What am I missing?

    My second question which is related is about the line:

    await Task.Delay(waitTime * 1000);

    What is the difference between awaiting a delay of x period of time vs. just implementing a synchronous delay? Because when that line executes, just as with the previous line, all program execution appears to stop until that wait is over.

    Thanks in advance…

  4. Hi howard,

    The difference between using the async await pattern instead of just doing it synchronously is how well resources are utilized.

    In web programming, threads are usually a limited resource, so with traditional synchronous programming you hold on to the thread you have been assigned until you are done. With the async pattern you give the thread back to the thread pool while you are awaiting the response (in the HttpClient example). This means that while you are waiting for the remote server to return data, that thread can potentially serve other requests, so you get better overall performance from your web application. Again, in this code the performance benefit is probably nonexistent.

    If you want a good example of when it matters then take a look at this blog post: http://johnring.me/?p=222

  5. The Azure website says…
    “As of March 2014, websites in Free mode can time out after 20 minutes if there are no requests to the scm (deployment) site and the website’s portal is not open in Azure. Requests to the actual site will not reset this.”

    Does this affect your solution to make sure the application pool is never unloaded?

  6. Hi Simon,
    Great post. Had a quick question. If we have a webrole with 2 instances then we have two instances of IIS running right?

    I am trying to figure out if our traffic is enough to not need the script.

  7. Hi Frank,

    I must admit that I’m no longer using the script, as I got a lot of free Azure credit through my BizSpark subscription. But I still have a free site running https://statsofpoe.azurewebsites.net/ which was the site I made the script for originally. It feels fast and responsive when I access it every now and then, but I will do some testing to check that it’s in fact due to the script.

    Hi Nate,
    If you are using a cloud service with web roles (one or more doesn’t really matter), then I would just recommend that you change the default IIS settings to avoid the unload when the site is not accessed, instead of using the polling script, which is a clunkier solution.
