Wednesday, June 12, 2019

Azure App Service Deployment Error: There is not enough space on the disk.

I recently came across an error during a deployment to one of our Content Delivery App Service instances - stopping the deployment process dead in its tracks. 

The error occurred when deploying to the CD Slot:

This occurred each time we attempted to deploy this step.

The error in full does indicate troubleshooting codes and links (oddly enough, the Microsoft link did not have any trace of the error code):
Failed to deploy web package to App Service. Error Code: ERROR_NOT_ENOUGH_DISK_SPACE More Information: Web Deploy detected insufficient space on disk. Learn more at: Error: The error code was 0x80070070.

I found it especially strange given that we saw no indication in the Azure Portal that we had hit any storage limit.  Initially, I thought it might have something to do with storage on the build server, but I was able to quickly rule out that theory by verifying that there was plenty of disk space remaining on that machine.

The error was also evident when I attempted to upload any file directly via FTP:

I began looking for some giant file(s) that might be preventing any additional files from being uploaded (since that's what made the most sense given the context of the error message itself).

By logging into the App Service via FTP, I was able to identify two IIS memory dumps located in the /LogFiles directory.  Both appeared to have been taken the month prior - and simply never removed.

After deleting both memory dump directories, I was able to restart the deployment step - which completed without errors. Direct FTP uploads were also restored.
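If you need to hunt down disk hogs like this yourself, a small script can rank the biggest files under a directory tree. This is just a sketch I'd run locally against a folder pulled down via FTP - the function name and `top_n` parameter are my own, not part of any Azure tooling:

```python
import os

def largest_files(root, top_n=10):
    """Walk a directory tree and return the top_n biggest files as (size, path)."""
    sizes = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                sizes.append((os.path.getsize(path), path))
            except OSError:
                continue  # file vanished or unreadable; skip it
    return sorted(sizes, reverse=True)[:top_n]

# Example: report the ten biggest files under the current directory.
for size, path in largest_files("."):
    print(f"{size / 1024 / 1024:8.1f} MB  {path}")
```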

What's up with that?

Well, your Azure App Service is tied to a particular pricing tier that dictates storage, memory, ACU, etc.  In our case, this particular App Service is configured to use the Standard tier - which has a 50GB storage limit.

By leaving the remnants of these IIS memory dumps on the App Service's storage, we must have surpassed that storage limit.

This brings up some interesting questions:

First, based on the configured pricing tier, is there a limit on how much Sitecore can store in App_Data/MediaCache or /temp folders before hitting that cap?

I assume the answer to that is yes - if your application is not set up to periodically clean out stale files (which it should be by default), it's possible to reach that storage limit and cause this error to surface.  In that case, the quick fix to get your deployment out would be to remove some or all of the temp files in the application.
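That periodic cleanup could look something like this minimal sketch, assuming a plain folder of temp files (the function and parameter names are hypothetical, not Sitecore's built-in cleanup agent):

```python
import os
import time

def purge_stale_files(folder, max_age_days=7):
    """Delete files not modified within max_age_days; return the paths removed."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for dirpath, _dirnames, filenames in os.walk(folder):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) < cutoff:
                    os.remove(path)
                    removed.append(path)
            except OSError:
                continue  # already gone or locked; skip it
    return removed
```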

Second, how can we monitor this storage limit of App Services in Azure Portal?

I'll have to circle back on this one as I don't quite have the answer to it.  It may even already exist, and I just haven't spotted it yet.   I'll update this post if I do figure that one out, but please let me know if you have the answer in the comments!


  1. Good post Gabe Streza. One doubt I have: by default, the MediaCache and temp folders on Azure PaaS are located on d:\local, which is a temporary drive. That drive, I believe, gets wiped out on restart. So that shouldn't be an issue unless you're storing so much there that the limit is exceeded before a restart.

    Answer to your second question is: you can log into your Azure subscription and access the Kudu /env endpoint, where you'll find both d:\home and d:\local usage.

    Also, from the Azure portal, you can navigate to your App Service plan -> File system storage, which will give you the overall idea.

    Calculation of disk size is a little tricky. It's not a fixed 50 GB per App Service plan.
    "File system quota for App Service hosted apps is determined by the aggregate of App Service plans created in this region and resource group" - This means if you have two App Service plans in a single region/resource group, your total size will be 100 GB, which is shared across all apps in both App Service plans. So it's possible one of your apps may be using 80 GB while another uses 10 GB.
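    The arithmetic behind that quote can be sketched as follows (the 50 GB figure applies to the Standard tier; the function name is just illustrative):

```python
def aggregate_quota_gb(plan_count, per_plan_gb=50):
    """File-system quota shared by all apps across the App Service plans
    in one region + resource group (50 GB each on the Standard tier)."""
    return plan_count * per_plan_gb

# Two Standard plans in the same region/resource group share a 100 GB pool,
# so one app could consume 80 GB while another uses 10 GB.
print(aggregate_quota_gb(2))
```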

    1. Thanks for your input Pratik!

      Wow - you're absolutely right. When I navigate to the affected application's Kudu /env route, I do in fact see a red storage indication:
      D:\home usage: 102,400 MB total; 0 MB free

      Upon further investigation, I've discovered TONS of very large Azure logs in /site/wwwroot/App_Data/logs.
      This is what has been consuming so much space (in addition to those IIS Memory dumps)!

      After deleting just one folder, usage freed up ~160MB:
      D:\home usage: 102,400 MB total; 160 MB free

      However, I noticed it was fluctuating as I refreshed /env. Something was still writing to the disk.
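      To keep an eye on it between refreshes, a tiny parser for that /env usage line does the trick - a sketch based on the output format shown above (the function name is my own):

```python
import re

def parse_env_usage(line):
    r"""Extract (total_mb, free_mb) from a Kudu /env usage line like
    'D:\home usage: 102,400 MB total; 160 MB free'."""
    m = re.search(r"([\d,]+) MB total; ([\d,]+) MB free", line)
    if not m:
        raise ValueError(f"unrecognized usage line: {line!r}")
    total, free = (int(g.replace(",", "")) for g in m.groups())
    return total, free

total, free = parse_env_usage(r"D:\home usage: 102,400 MB total; 160 MB free")
print(f"{free}/{total} MB free ({100 * free / total:.2f}%)")
```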

      Turns out both the 'Detailed error messages' and 'Failed request tracing' options were turned on in the App Service logs settings (I assume for troubleshooting at some point).

      Turning those options off stopped the fluctuation. I'm now able to get the disk space under control.
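      For anyone who prefers the CLI over the portal, those two settings can also be flipped off with `az webapp log config` - the app and resource-group names below are placeholders:

```shell
# Turn off the two log settings that were filling the disk.
az webapp log config \
  --name my-cd-app \
  --resource-group my-rg \
  --detailed-error-messages false \
  --failed-request-tracing false
```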

      Thanks for pulling me in the right direction!

    2. Great to hear that. Diagnostic tools, if not used properly, can give you a hard time along with the power they provide.