Comment by llama052 7 hours ago

Looks like Azure as a platform just killed the ability to perform VM scale operations, due to an ACL change on the storage account that hosted VM extensions. Wow... We noticed when GitHub Actions went down, then our self-hosted runners, because we can't scale anymore.

Information

Active - Virtual Machines and dependent services - Service management issues in multiple regions

Impact statement: As early as 19:46 UTC on 2 February 2026, we are aware of an ongoing issue causing customers to receive error notifications when performing service management operations - such as create, delete, update, scaling, start, stop - for Virtual Machines (VMs) across multiple regions. These issues are also causing impact to services with dependencies on these service management operations - including Azure Arc Enabled Servers, Azure Batch, Azure DevOps, Azure Load Testing, and GitHub. For details on the latter, please see https://www.githubstatus.com.

Current status: We have determined that these issues were caused by a recent configuration change that affected public access to certain Microsoft‑managed storage accounts, used to host extension packages. We are actively working on mitigation, including updating configuration to restore relevant access permissions. We have applied this update in one region so far, and are assessing the extent to which this mitigates customer issues. Our next update will be provided by 22:30 UTC, approximately 60 minutes from now.

https://azure.status.microsoft/en-us/status
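
To make it concrete, this is roughly the kind of scale operation that's failing for us right now. A minimal sketch using the azure-identity and azure-mgmt-compute Python SDKs; the subscription ID, resource group, and scale set names are placeholders, not our actual setup.

    # Sketch: bump a VM scale set's capacity via the management (control) plane.
    # Requires: pip install azure-identity azure-mgmt-compute
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
    RESOURCE_GROUP = "rg-runners"        # hypothetical name
    VMSS_NAME = "gh-runner-pool"         # hypothetical name

    client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

    # Read the scale set, bump the instance count, write it back.
    vmss = client.virtual_machine_scale_sets.get(RESOURCE_GROUP, VMSS_NAME)
    vmss.sku.capacity = (vmss.sku.capacity or 0) + 5

    # This long-running operation is what errors out during the incident:
    # new instances can't pull extension packages from the Microsoft-managed
    # storage accounts that the configuration change locked down.
    poller = client.virtual_machine_scale_sets.begin_create_or_update(
        RESOURCE_GROUP, VMSS_NAME, vmss
    )
    poller.result()  # raises if the service management operation fails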

bob1029 6 hours ago

They've always been terrible at VM ops. I never get weird quota limits and errors in other places. It's almost as if Amazon wants me to be a customer and Microsoft does not.

  • dgxyz 5 hours ago

    Amazon isn't much better there. Wait until you hit an EC2 quota limit and can't get anyone to look at it quickly (even under paid enterprise support) or they say no.

    Also had a few instance types that wouldn't spin up in some regions/AZs recently. I assume it's a capacity issue.

    • paulddraper 3 hours ago

      The cloud isn’t some infinite thing.

      There’s a bunch of hardware, and they can’t run more servers than they have hardware. I don’t see a way around that.

      • ApolloFortyNine 21 minutes ago

        I was surprised to hit one of these limits once, but it wasn't as if they were 100% out of servers; I just had to pick a different node type. I don't think they would ever post their numbers, but some of the more exotic types definitely have fewer in the pool.

  • arcdigital 6 hours ago

    Agreed...I've been waiting for months now to increase my quota for a specific Azure VM type by 20 cores. I get an email every two weeks saying my request is still backlogged because they don't have the physical hardware available. I haven't seen an issue like this with AWS before...

    • llama052 6 hours ago

      We've run into that issue as well, and ended up having to move regions entirely because nothing was changing in the current one. I believe it was westus1 at the time. It's a ton of fun to migrate everything over!

      That was years ago; wild to see they still have the same issues.

  • everfrustrated 3 hours ago

    How is Azure still having faults that affect multiple regions? Clearly their region definition is bollocks.

    • ragall 39 minutes ago

      All 3 hyperscalers have vulnerabilities in their control planes: they're either a single point of failure (like AWS with us-east-1) or global, meaning a faulty release can take them down entirely. And all of them take AZ resilience to mean that existing compute will continue to work as before, while allocation of new resources might fail in multi-AZ or multi-region ways.

      It means that any service designed to survive a control plane outage must statically allocate its compute resources and have enough slack that it never relies on auto scaling. True for AWS/GCP/Azure.
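
      To put a number on "enough slack", here's a minimal sketch of the pre-provisioning arithmetic in Python. The function name, the 1.3 slack factor, and the load figures are made-up illustrations, not anything the providers publish.

        import math

        def static_capacity(peak_concurrent_load: float,
                            per_instance_capacity: float,
                            slack_factor: float = 1.3) -> int:
            # Instances to keep running at all times so a control plane outage
            # (no new allocations possible) never forces a scale-out.
            # slack_factor > 1 is the headroom; 1.3 is an arbitrary example.
            return math.ceil(peak_concurrent_load / per_instance_capacity
                             * slack_factor)

        # e.g. 900 req/s peak, 100 req/s per instance, 30% headroom -> 12 instances
        print(static_capacity(900, 100))  # prints 12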

      • everfrustrated 10 minutes ago

        This outage notice describes what appears to be a VM control plane failure (it mentions stop not working) across multiple regions.

        AWS has never had this type of outage in 20 years, yet Azure constantly has them.

        This is a total failure of engineering and has nothing to do with capacity. Azure is a joke of a cloud.

      • tbrownaw 29 minutes ago

        > It means that any service designed to survive a control plane outage must statically allocate its compute resources and have enough slack that it never relies on auto scaling. True for AWS/GCP/Azure.

        That sounds oddly similar to owning hardware.

  • llama052 6 hours ago

    It's awful. Any other Azure service that relies on the core systems seems to run into trouble as well; I feel for those internal teams.

    Ran into an issue upgrading an AKS cluster last week. It completely stalled and broke the entire cluster in a way that left our hands tied, since we can't see the control plane at all...

    I submitted a severity A ticket, and 5 hours later I was told there was a known issue with the latest VM image that would break the control plane, leaving any cluster upgraded in that window to essentially kill itself and require manual intervention. Did they notify anyone? Nope. Did they stop anyone from killing their own clusters? Nope.

    It seems like every time I'm forced to touch the Azure environment I'm basically playing Russian roulette hoping that something's not broken on the backend.