Tell HN: Azure outage
881 points by tartieret 5 days ago
Azure is down for us, we can't even access the Azure portal. Are others experiencing this? Our services are located in Canada Central and US East 2.
It still is very decentralized. We are discussing this via the internet right now.
A lot of money and years of marketing the cloud as the responsible business decision led us here. Now that the cloud providers have vendor lock-in, few will leave, and customers will continue to wildly overpay for cloud services.
Not sure how the current situation is better. Being stranded with no way whatsoever to access most/all of your services sounds way more terrifying than regular issues limited to a couple of services at a time
From today [0].
> Big Tech lobbying is riding the EU’s deregulation wave by spending more, hiring more, and pushing more, according to a new report by NGO’s Corporate Europe Observatory and LobbyControl on Wednesday (29 October).
> Based on data from the EU’s transparency register, the NGOs found that tech companies spend the most on lobbying of any sector, spending €151m a year on lobbying — a 33 percent increase from €113m in 2023.
Gee whizz, I really do wonder how they end up having all the power!
> How did we get here?
I think the answer lies in the surrounding ecosystem.
If you have a company it's easier to scale your team if you use AWS (or any other established ecosystem). It's way easier to hire 10 engineers that are competent with AWS tools than it is to hire 10 engineers that are competent with the IBM tools.
And from the individual's perspective it also makes sense to bet on larger platforms. If you want to increase your odds of getting a new job, learning the AWS tools gives you a better ROI than learning the IBM tools.
A natural monopoly is a monopoly in an industry in which high infrastructure costs and other barriers to entry relative to the size of the market give the largest supplier in an industry, often the first supplier in a market, an overwhelming advantage over potential competitors. Specifically, an industry is a natural monopoly if a single firm can supply the entire market at a lower long-run average cost than if multiple firms were to operate within it. In that case, it is very probable that a company (monopoly) or a minimal number of companies (oligopoly) will form, providing all or most of the relevant products and/or services.
Maybe in a perfect world, or in a free market.
But the cloud compute market is basically centralized into 2.5 companies at this point. The point of paying companies like Azure here is that they've in theory centralized the knowledge and know-how of running multiple, distributed datacenters, so as to be resilient.
But when we keep seeing outages that encompass more than a single failure domain, it should be fair game for engineers / customers to ask "what am I paying for, again?"
Moreover, this seems to be a classic case of large barriers to entry (the huge capital costs associated with building out a datacenter) barring new entrants into the market, coupled with "nobody ever got fired for buying IBM" level thinking. Are outages like these truly factored into the napkin math that says externalizing this is worth it?
Meredith Whittaker (of Signal) addressed your question the other day: https://mastodon.world/@Mer__edith/115445701583902092
They admit in their update blurb that Azure Front Door is having issues, but they still report Azure Front Door as having no issues on their status page.
And it's very clear from these updates that they're more focused on the portal than the product, their updates haven't even mentioned fixing it yet, just moving off of it, as if it's some third party service that's down.
> as having no issues on their status page
Unsubstantiated idea: the support contract likely says there is a window between each reporting step, and the status page is the last one, the one referenced in the legal documents, giving them several more hours before the clauses trigger.
The paradox of cloud provider crashes is that if the provider goes down and takes the whole world with it, it's actually good advertising. It means so many things rely on it, it's critically important, and it has so many big customers. That might be why Amazon stock went up after the AWS crash.
If Azure goes down and nobody feels it, does Azure really matter?
People feel it, but usually not general consumers like they do when AWS goes down.
If Azure goes down, it's mostly affecting internal stuff at big old enterprises. Jane in accounting might notice, but the customers don't. Contrast with AWS which runs most of the world's SaaS products.
People not being able to do their jobs internally for a day tends not to make headlines like "100 popular internet services down for everyone" does.
Looks to be affecting our pipelines that rely on Playwright as they download images from Azure e.g. https://playwright.azureedge.net/builds/chromium/1124/chromi... which aren't currently resolving.
We’ve been experimenting with multi-cluster failover for Kubernetes workloads, and one open-source project that actually works really well is k8gb.
It acts as a GSLB controller inside Kubernetes — doing DNS-level health checks, region awareness, and automatic failover between clusters when one goes down.
It integrates with ExternalDNS and supports multiple DNS providers (Infoblox, Route53, Azure DNS, NS1, etc.), so it can handle failover across both on-prem and cloud clusters.
It’s not a silver bullet for every architecture, but it’s one of the few OSS projects that make multi-region failover actually manageable in practice.
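For anyone curious what the setup looks like: below is a rough sketch of creating a k8gb Gslb resource from Python with the official kubernetes client. The CRD group/version (k8gb.absa.oss/v1beta1) and the field names are from memory of the k8gb docs, so treat the exact schema as an assumption and check upstream before relying on it.

    # Sketch: create a k8gb Gslb object that fails DNS over to a secondary
    # cluster when the primary ("eu" geo tag here) stops answering health checks.
    # Assumes the k8gb operator and its CRDs are already installed.
    from kubernetes import client, config

    config.load_kube_config(context="cluster-eu")  # or load_incluster_config()
    api = client.CustomObjectsApi()

    gslb = {
        "apiVersion": "k8gb.absa.oss/v1beta1",  # assumed CRD group/version
        "kind": "Gslb",
        "metadata": {"name": "myapp-gslb", "namespace": "prod"},
        "spec": {
            # Mirrors a normal Ingress spec; k8gb publishes DNS for these hosts.
            "ingress": {
                "rules": [{
                    "host": "myapp.example.com",
                    "http": {"paths": [{
                        "path": "/",
                        "pathType": "Prefix",
                        "backend": {"service": {"name": "myapp", "port": {"number": 80}}},
                    }]},
                }],
            },
            # Failover strategy: serve records for the primary geo tag while it
            # is healthy, switch to the other cluster's records when it is not.
            "strategy": {"type": "failover", "primaryGeoTag": "eu", "dnsTtlSeconds": 30},
        },
    }

    api.create_namespaced_custom_object(
        group="k8gb.absa.oss", version="v1beta1",
        namespace="prod", plural="gslbs", body=gslb,
    )

Roughly speaking, you apply the same manifest to each participating cluster (each operator configured with its own geo tag) and let the controllers coordinate via DNS.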
https://azure.status.microsoft/en-us/status says everything's fine! Any place I can read more about this outage?
I work for a cloud provider which is serious about transparency. Our customers know they are going to get the straight story from our status page.
When you find an honest vendor, cherish them. They are rare, and they work hard to earn and keep your confidence.
For us, it looks like most services are still working (eastus and eastus2). Our AKS cluster is still running and taking requests. Failures seem limited to management portal.
The outage was really weird. For me, parts of the portal worked, other parts didn't. I had access to a couple of resource groups, but no resources visible in those groups. Azure DevOps Pipelines that needed to download from packages.microsoft.com didn't work.
The Microsoft status page mostly referenced the portal outage, but it was more than that.
High availability is touted as a reason for their high prices, but I swear I read about major cloud outages far more than I experience any outages at Hetzner.
I think the biggest feature of the big cloud vendors is that when they are down, not only you but your customers and your competitors usually have issues at the same time, so everybody just shrugs and has a lazy/off day together. Even on-call teams really just have to wait and stay on standby because there is very little they can do. Doing a failover can be slower than waiting for the recovery, not help at all if the outage spans several regions, or bring additional risks.
And more importantly, nobody loses any reputation except AWS/Azure/Google.
For one, it’s statistics - Hetzner simply runs far fewer major services than the hyperscalers do. And the services the hyperscalers run also have larger, wealthier customer bases, so their downtime is systemically critical. Therefore it’s louder.
On the merits though, I agree, haven’t had any serious issues with Hetzner.
Same with DigitalOcean. I run one box and it hasn't gone down for like 2 years.
DO has been shockingly reliable for me. I shut down a neglected box with almost 900 days of uptime the other day. In that time AWS has randomly dropped many of my boxes with no warning, requiring a manual stop/start action to recover them... But everybody keeps telling me that DO isn't "as reliable" as the big three are.
To be fair, in the AWS/Azure outages, I don't think any individual (already created) boxes went down, either. In AWS' case you couldn't start up new EC2 instances, and presumably same for Azure (unless you bypass the management portal, I guess). And obviously services like DynamoDB and Front Door, respectively, went down. Hetzner/DO don't offer those, right? Or at least they're not very popular.
Same here, I run a few droplets for personal projects and never had any issues with them.
Nope, more than the portal. For instance, I just searched for "Azure Front Door" because I hadn't heard of it before (I now know it's a CDN), and neither the product page itself [1] nor the technical docs [2] are coming up for me.
[1] https://azure.microsoft.com/en-us/products/frontdoor
[2] https://learn.microsoft.com/en-us/azure/frontdoor/front-door...
Interesting, everything else is working just fine for us. Offices across the US.
Plenty of sites are down and/or login not available. It's just really a mess.
Some exec at Microsoft told the Azure guys to ape everything Amazon does and they took it literally.
Do Microsoft still say "If the government has a broader voluntary national security program to gather customer data, we don't participate in it" today (which PRISM proved very false), or are they at least acknowledging they're participating in whatever NSA has deployed today?
PRISM wasn't voluntary. Also there are 3 levels here:
1. Mandatory
2. "Voluntary"
3. Voluntary
And I suspect that very little of what the NSA does falls into category 3. As Sen Chuck Schumer put it "you take on the intelligence community, they have six ways from Sunday at getting back at you"
I was gonna say that obv AWS hacked em to even things up.
I still can't log into Azure Gov Cloud with
https://microsoft.com/deviceloginus
Seems like they migrated the non-Gov login but not the Gov one. C'mon Microsoft, I've got a deadline in a few days.
We all need to move away from these big cloud providers. Two smaller, medium-size providers are enough.
- Cloudflare for R2 (object storage) and CDN (Fastly + Backblaze also available).
- Two VPS/server providers with a decent reputation and mid-size (use a comparison site like https://serversearcher.com or look directly at providers like Hetzner or Latitude).
- PlanetScale or Neon for the database if you don't co-locate it (though better to use someone like DigitalOcean, Vultr, or Latitude, who offer databases too).
> We all need to move away from these big cloud providers.
But then who do we blame when things are down? If we manage our own infrastructure we have to stay late to fix it when it breaks instead of saying “sorry, Microsoft, nothing we can do” and magically our clients accepting that…
Updated 16:35 UTC
Azure Portal Access Issues
Starting at approximately 16:00 UTC, we began experiencing DNS issues resulting in availability degradation of some services. Customers may experience issues accessing the Azure Portal. We have taken action that is expected to address the portal access issues here shortly. We are actively investigating the underlying issue and additional mitigation actions. More information will be provided within 60 minutes or sooner.
This message was last updated at 16:35 UTC on 29 October 2025
----
Azure Portal Access Issues
We are investigating an issue with the Azure Portal where customers may be experiencing issues accessing the portal. More information will be provided shortly.
This message was last updated at 16:18 UTC on 29 October 2025
-- From the Azure status page
"Microsoft Azure will serve as the backbone of Asda’s digital infrastructure"[0]
Oh, that'll be why Scan & Go was down yesterday evening. I thought it was another instance of an iOS 26 update breaking their crappy code.
[0] https://corporate.asda.com/newsroom/2025/22/09/asda-announce...
The sad thing is - $MSFT isn't even down by 1%. And IIRC, $AMZN actually went up during their previous outage.
So if we look at these companies' bottom lines, all those big wigs are actually doing something right. Sales and lobbying capacity is way more effective than reliability or good engineering (at least in the short term).
AMZN went up almost 4 percent between the day of the outage and the day after. Crazy market.
What do you mean? That IT isn't important for Microsoft and Amazon?
That's certainly not the right conclusion.
That's a good thing. Stock prices shouldn't go down because of rare incidents which don't accurately represent how successful a company is likely to be in the future.
I looked into this before and the stocks of these large corps simply do not move when outages happen. Maybe intra-day, I don't have that data, but in general no effect.
well, at this point, 90% of the market cap of FAANGS plus Microsoft is... OMG AI LLM hype
UK, and other regions too; our APAC installation in Australia is affected.
The learning modules on https://learn.microsoft.com/ also seem to have a lot of issues properly loading.
At least MSFT is consistent: https://www.microsoft.com/en-us/ is down as well
Likely behind Azure Front Door.
Much of Xbox is behind that too.
I was having issues a few hours ago. I'm now able to access the portal, although I get lots of errors in the browser console, and things are loading slowly. I have services in the US-East region.
I have been having issues with GitHub and the winget tool for updates throughout the day as well. I imagine things are pulling from the same locations on Azure for some of the software I needed to update (NPM dependencies, and some .NET tooling).
Microsoft posted an update on X: https://x.com/AzureSupport/status/1983569891379835372?ref_sr...
"We’re investigating an issue impacting Azure Front Door services. Customers may experience intermittent request failures or latency. Updates will be provided shortly."
Interesting that everybody knows when AWS goes down but Azure needs a "Tell HN" :)
Best of luck to the teams responding to this incident.
I was a little puzzled as we got notified our apps were down, and then I tried to log in to the Azure portal with no success. But the Azure status page reported no incident, so I posted here and quickly confirmed that others were impacted! They did a pretty bad job with their status page, as the Front Door service was shown green all along
Azure goes down all the time. On Friday we had an entire regional service down all day. Two weeks ago same thing different region. You only hear about it when it's something everyone uses like the portal, because in general nobody uses Azure unless they're held hostage.
Microsoft have started putting customer status pages up on windows.net, so it must be really really bad!
For example when I try to log into our payroll provider Brightpay, it sends me here:
https://bpuk1prod1environment.blob.core.windows.net/host-pro...
Portal and Azure CDN are down here in the SF Bay Area. Tenant azureedge.net DNS A queries are taking 2-6 seconds and most often return nothing. I got a couple of successful A responses in the last 10 minutes.
Edit: As of 9:19 AM Pacific time, I'm now getting successful A responses but they can take several seconds. The web server at that address is not responding.
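For anyone who wants to reproduce this kind of check, something like the following is roughly what I mean (dnspython, and the hostname is just an example); dig in a loop works just as well:

    # Time repeated A lookups for an azureedge.net hostname and log failures.
    # Needs dnspython (pip install dnspython); the hostname is just an example.
    import time
    import dns.resolver

    HOST = "playwright.azureedge.net"
    resolver = dns.resolver.Resolver()
    resolver.lifetime = 10  # give up on a query after 10 seconds

    for i in range(10):
        start = time.monotonic()
        try:
            answer = resolver.resolve(HOST, "A")
            ips = [r.address for r in answer]
            print(f"{i}: ok in {time.monotonic() - start:.1f}s -> {ips}")
        except Exception as exc:  # timeouts, SERVFAIL, empty answers, ...
            print(f"{i}: failed after {time.monotonic() - start:.1f}s ({exc})")
        time.sleep(5)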
"Front Door" has to be the worst product name for a CDN I've ever heard of. I used to work for a CDN too.
I wonder if many Germans are eager to sign up for AFD.
But seriously I thought it would be the console, not a CDN.
Front Door (tm), with Back Door access for the FBI included free with your subscription! ;)
My best guess at the moment is something global like the CDN is having problems affecting things everywhere. I'm able to use a legacy application we have that goes directly to resources in uswest3, but I'm not able to use our more modern application which uses APIM/CDN networks at all.
Service Status: https://status.cloud.microsoft/ and https://azure.status.microsoft/en-us/status
“ Starting at approximately 16:00 UTC, we began experiencing DNS issues resulting in availability degradation of some services. Customers may experience issues accessing the Azure Portal. We have taken action that is expected to address the portal access issues here shortly. We are actively investigating the underlying issue and additional mitigation actions. More information will be provided within 60 minutes or sooner.
This message was last updated at 16:35 UTC on 29 October 2025”
Two hours after the initial outage, they have finally updated the Front Door status on their status page.
The VS Code website is down: https://code.visualstudio.com/
And so is Microsoft: http://www.microsoft.com/
https://www.microsoft.com works for me (with the www subdomain).
This brings to mind this -> https://thenewstack.io/github-will-prioritize-migrating-to-a...
We're on Office 365 and so far it's still responding. At least Outlook and Teams are.
> What went wrong and why?
> An inadvertent tenant configuration change within Azure Front Door (AFD) triggered a widespread service disruption affecting both Microsoft services and customer applications dependent on AFD for global content delivery. The change introduced an invalid or inconsistent configuration state that caused a significant number of AFD nodes to fail to load properly, leading to increased latencies, timeouts, and connection errors for downstream services.
> As unhealthy nodes dropped out of the global pool, traffic distribution across healthy nodes became imbalanced, amplifying the impact and causing intermittent availability even for regions that were partially healthy. We immediately blocked all further configuration changes to prevent additional propagation of the faulty state and began deploying a ‘last known good’ configuration across the global fleet. Recovery required reloading configurations across a large number of nodes and rebalancing traffic gradually to avoid overload conditions as nodes returned to service. This deliberate, phased recovery was necessary to stabilize the system while restoring scale and ensuring no recurrence of the issue.
> The trigger was traced to a faulty tenant configuration deployment process. Our protection mechanisms, to validate and block any erroneous deployments, failed due to a software defect which allowed the deployment to bypass safety validations. Safeguards have since been reviewed and additional validation and rollback controls have been immediately implemented to prevent similar issues in the future.
So, so far they're saying it's a combination of bad config + their config-validator had a bug. Would love more details.
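Purely as an illustration of the pattern they describe (validate every change, keep a last-known-good config, and fall back to it when validation or loading fails), here's a toy sketch. Not Microsoft's actual system, obviously; the validation gate is the thing their software defect let the bad tenant config slip past.

    # Toy illustration of "validate, deploy, keep last-known-good, roll back".
    # Not how AFD actually works internally; just the general pattern from the RCA.
    import copy

    class ConfigValidationError(Exception):
        pass

    def validate(config: dict) -> None:
        # Stand-in for the safety validations the RCA says were bypassed.
        if not config.get("routes"):
            raise ConfigValidationError("config has no routes")
        for route in config["routes"]:
            if "origin" not in route:
                raise ConfigValidationError(f"route {route} has no origin")

    class NodeConfigStore:
        def __init__(self, initial: dict):
            validate(initial)
            self.active = initial
            self.last_known_good = copy.deepcopy(initial)

        def apply(self, new_config: dict) -> None:
            try:
                validate(new_config)  # the gate the defect allowed a bypass of
                self.active = new_config
                self.last_known_good = copy.deepcopy(new_config)
            except ConfigValidationError:
                # Reject the change and keep serving the last known good config,
                # roughly what their phased recovery re-deployed fleet-wide.
                self.active = copy.deepcopy(self.last_known_good)
                raise

    store = NodeConfigStore({"routes": [{"host": "example.com", "origin": "10.0.0.1"}]})
    try:
        store.apply({"routes": [{"host": "example.com"}]})  # invalid: no origin
    except ConfigValidationError as exc:
        print("rejected:", exc, "| still serving:", store.active)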
I noticed issues on Azure so I went to the status page. It said everything was fine even though the Azure Portal was down. It took more than 10 minutes for that status page to update.
How can one of the richest companies in the world not offer a better service?
>How can one of the richest companies in the world not offer a better service?
Better service costs money.
Unable to use Ona's GitPod through VSCode SSH - Unable to download code server from https://update.code.visualstudio.com
All of our sites went down. This is my company’s busiest time of year. Hooray.
SSO is down, Azure Portal Down and more, seems like a major outage. Already a lot of services seem to be affected: banks, airlines, consumer apps, etc.
Quite close to the recent AWS outage. Let me take a look to see if it's a major one similar to AWS.
Any guess on what's causing it?
In hindsight, I guess the foresight of some organizations to go multi-cloud was correct after all.
This is the eternal tension for early-stage builders, isn't it? Multi-cloud gives you resilience, but adds so much complexity that it can actually slow down shipping features and iterating.
I'm curious—at what point did you decide the overhead was worth it? Was it after experiencing an outage, or did you architect for it from day one?
As someone launching a product soon (more on the builder/product side than infra-engineer), I keep wrestling with this. The pragmatist in me says "start simple, prove the concept, then layer in resilience." But then you see events like this week and think "what if this happens during launch?"
How did you handle the operational complexity? Did you need dedicated DevOps folks, or are there patterns/tools that made it manageable for a smaller team?
I don't think I would recommend multi-cloud right out of the gate unless you already have a lot of experience in the space or there is a strong demand from your customers. There's a tremendous amount of overhead with security/compliance, incident management, billing, tooling, entitlements, etc. There are a number of external factors that drove our decision to do it, resiliency is just one of them. But we are a pretty big shop, spending ~$10M/mo on cloud infra and have ~100 people in the platform management department.
I would recommend focusing on multi-region within a single CSP instead (both for workloads AND your tooling), which covers the vast majority of incidents and lays some of the architectural foundation for multi-cloud down the road. Develop failover plans for each service in your architecture (eg. planned/tested runbooks to migrate to Traffic Manager in the event AFD goes down)
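To make the runbook idea concrete, here's a minimal sketch of the kind of probe that would trigger it. The hostname is hypothetical, and the actual failover step (repointing DNS at a Traffic Manager profile, or whatever your setup uses) is left as a placeholder since it's environment-specific.

    # Minimal health probe: after N consecutive failures of the AFD endpoint,
    # kick off the (pre-tested) failover runbook. Stdlib only.
    import time
    import urllib.error
    import urllib.request

    PRIMARY = "https://myapp.azurefd.net/healthz"  # hypothetical AFD endpoint
    FAILURES_BEFORE_FAILOVER = 3

    def healthy(url: str, timeout: float = 5.0) -> bool:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return 200 <= resp.status < 400
        except (urllib.error.URLError, OSError):
            return False

    def trigger_failover() -> None:
        # Placeholder: page the on-call and/or run the tested runbook that
        # swaps traffic from Front Door to the Traffic Manager profile.
        print("AFD endpoint looks down -- executing failover runbook")

    failures = 0
    while True:
        failures = 0 if healthy(PRIMARY) else failures + 1
        if failures >= FAILURES_BEFORE_FAILOVER:
            trigger_failover()
            break
        time.sleep(30)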
Also choose your provider wisely. We experience 3-5x the number of service-impacting incidents on Azure that we do on AWS. I'm sure others have different experiences, but I would never personally start a company on Azure. AWS has its own issues, of course, but reliability has not been a major one (relatively speaking) over the past 10 years. Last week's incident with DynamoDB in us-east-1 had zero impact on our AWS workloads in other regions.
Trusting AI without sufficient review and oversight of changes to production.
Yeah, these things never happened when humans were trusted without sufficient review and oversight of changes to production.
Do you have any insight or do you just dislike AI? Incidents like this happened long before AI generated code
I don't think it's meant to be serious. It's a comment on Microsoft laying off their staff and stuffing their Azure and Dotnet teams with AI product managers.
Yesterday Amazon, today Microsoft. Are Google's cloud services going down tomorrow?
throwback to when they deleted a customer's entire account! https://arstechnica.com/gadgets/2024/05/google-cloud-acciden...
They had a pretty massive one earlier this year. https://status.cloud.google.com/incidents/ow5i3PPK96RduMcb1S...
This isn't GCP's fault, but the outage ended up taking down Cloudflare too, so in total impact I think that takes the cake.
fairly certain they had a significant multi region outage within the past few years. I'll try to find some details to link.
Few customers....few voices to complain as well.
It's the DNS: https://dnschecker.org/#A/get.helm.sh shows get.helm.sh is unreachable
Does (should, could) DownDetector also say which customer-facing services are down when some piece of infrastructure isn't working? Or is that the info that the malefactors are seeking?
Can't access certain banking websites in the UK, I am assuming it's because of this.
Unable to access the portal and any hit to SSO for other corporate accesses is also broken. Seems like there's something wrong in their Identity services.
https://login.microsoftonline.com/ is down, so that's fun
I know how to fix this but this community is too close minded and argumentative egocentric sensitive pedantic threatened angry etc to bother discussing it
Even if the cloud providers have much better reliability than most on-prem infra, the failure correlation they induce negates much of the benefit.
Azure portal currently mostly not working (UK)... Downdetector reporting various Microsoft linked services are out (Minecraft, Microsoft 365, Xbox...)
Based on the delay in resolving the issue, it appears MS attempted to rehire some of the DevOps engineers whom AI had previously replaced.
They probably hired the ones AWS laid off, causing the AWS outage.
Institutional knowledge matters. Just has to be the right institution is all.
Language models aren't perfect; they can still generate similar outputs. Invertibility is a stretch.
LinkedIn has been acting funny for an hour or so, and some pages in the learn.microsoft.com domain have been failing for me too...
Oh, well, I'm sure Azure will be given the same pass that AWS got here recently when they had their 12-hour outage...
Apologies, but this just reads like a low effort critique of big things.
To be clear, they should get criticism. They should be held liable for any damage they cause.
But that they remain the biggest cloud offering out there isn't something you'd expect to change from a few outages that, by almost all evidence, the potential replacements have as well. Moreover, a lot of the outages those potential replacements have are often more global in nature.
Have people left GitHub due to the multiple post-acquisition outages? That is a pass if you don't judge it the same way.
Well, they have successfully locked their customers captive thanks to huge egress fees.
GitHub runners (specifically the "larger" runner types) are all down for us. These are known to be hosted on Azure.
This probably explains why paying for street parking in Cologne by phone/web didn't work (eternal spinner) then
So that’s why CapitalOne is out today. Even though their (incorrect) status page says all systems operational.
Our Azure DevOps site is still functioning and our Azure hosted databases are accessible. Everything else is cooked.
pretty interesting how datadog's uptime tracker (https://updog.ai/) says all the sites are fully available.
if that's true then it's a sign that Azure's control / data plane separation is doing its job! at least for now
That saying is just as alive today as it ever was.
Yeah, Azure is a mess today. Can't do anything without the portal.
Yeah, I have non prod environments that don't use FD that are functioning. Routing through FD does not work. And a different app, nonprod doesn't use FD (and is working) but loads assets from the CDN (which is not working).
FD and CDN are global resources and are experiencing issues. Probably some other global resources as well.
Hate to say it, but DNS is looking like it's still the undisputed champ.
Earnings report today. A coincidence?
I can at least login to Azure. But several MS sites are down.
Down in Sweden Central as well (all our production systems are down)
Ahh, it got me, the Alaska Air website has an Azure outage banner
Yudkowsky's feared Superintelligence holding Azure hostage
yep having trouble logging into https://entra.microsoft.com/ as well
what's happening? self hosting advocate groups attacking all cloud to prove their point?
Yeah, the graph for that one looks exactly the same shape. I wonder if they were depending on some Azure component somehow, or maybe there were things hosted on both and the Azure failure made enough things fail over to AWS that AWS couldn't cope? If that was the case I'd expect to see something similar with GCP too, though.
Edit: nope, looks like there's actually a spike on GCP as well
they recently had an incident with front door reachability, wonder if it's back.
QNBQ-5W8
AWS, now Azure - wasn't this a plot point in Terminator where SkyNet was causing computer systems to have issues well before it finally became self-aware?
Funnily enough, AI has been training on its own data as generated by users writing AI conversations back to the internet - there's a feedback loop at play.
When you look at the scale of the reports, you find they are much lower than Azure's. Seeing a bunch of 24-hour sparkline-type graphs next to each other can make it look like they are equally impacted, but AWS has 500 reports and Azure has 20,000. The scale is hidden by the choice of graph.
In other words, people reporting outages at AWS are probably having trouble with Microsoft-run DNS services or caching proxies. It's not that the issues aren't there, it's that the internet is full of intermingled complexity. Just that amount of organic false positives can make it look like an unrelated major service is impacted.
As of now Azure Status page still shows no incident. It must be manually updated, someone has to actively decide to acknowledge an issue, and they're just... not. It undermines confidence in that status page.
From the Azure status page: "Customers can consider implementing failover strategies with Azure Traffic Manager, to fail over from Azure Front Door to your origins".
What terrible advice.
Meanwhile the layoffs continue https://www.entrepreneur.com/business-news/microsoft-ceo-exp...
I especially like how Nadella speaks of layoffs as some kind of uncontrollable natural disaster, like a hurricane, caused by no-one in particular. A kind of "God works in mysterious ways".
> “Microsoft is being recognized and rewarded at levels never seen before,” Nadella wrote. “And yet, at the same time, we’ve undergone layoffs. This is the enigma of success in an industry that has no franchise value.”
> Nadella explained the disconnect between thriving financials and layoffs by stating that “progress isn’t linear” and that it is “sometimes dissonant, and always demanding.”
I've read the whole memo and it's actually worse than those excerpts. Nadella doesn't even claim these were low performers:
> These decisions are among the most difficult we have to make. They affect people we’ve worked alongside, learned from, and shared countless moments with—our colleagues, teammates, and friends.
Ok, so Microsoft is thriving, these were friends and people "we’ve learned from", but they must go because... uh... "progress isn’t linear". Well, thanks Nadella! That explains so much!
> [Satya Nadella] said that the company’s future opportunity was to bring AI to all eight billion people on the planet.
But what if I don't want AI brought to me?
Real life Pluribus https://en.wikipedia.org/wiki/Pluribus_(TV_series)
The outage impacted GitSocial minor version bump release: https://marketplace.visualstudio.com/items?itemName=GitSocia...
There's no way to tell, and after about 30 minutes, the release process on VS Code Marketplace failed with a cryptic message: "Repository signing for extension file failed.". And there's no way to restart/resume it.
Reports of Azure and AWS down on the same day? Infrastructure terrorism?
> We have confirmed that an inadvertent configuration change as the trigger event for this issue.
Save the speculation for Reddit. HN is better than that.
> Infrastructure terrorism?
Unless that's a euphemism for "vibe coding", no.
According to downdetector.com, both AWS and GCP are down as well. Interesting
The Internet is supposed to be decentralized. The big three seem to have all the power now (Amazon, Microsoft, and Google) plus Cloudflare/Oracle.
How did we get here? Is it because of scale? You can go to market in minutes by using someone else's computers instead of building out your own with co-location or dedicated servers, like back in the day.