The conversation is straightforward, but the problem behind it is not. The customer bought servers in 2017 and typically refreshes every five to six years, so around the 2022 to 2023 timeframe, they would have looked to buy new.
Historically, that is what would have happened. But COVID hit, and with it came supply chain constraints. The original end-of-life notice that would have landed around 2023 was extended: 2026 for general software updates and 2028 for security vulnerability support.
That gave the customer roughly ten years of life on that server platform, which means middle school is right around the corner for this little guy. And this is a healthcare environment.
As soon as COVID let up, they should have refreshed. They did not. Fast forward to the present: they asked us to walk through a design and bill of materials, and now we are in the middle of an unprecedented supply chain constraint, where we cannot get equipment to them because of what’s happening with AI chip manufacturing and hyperscalers.
Lead time alone was going to be eight to ten months. On top of that, the cost was higher than it would have been last year because the cost of goods sold (COGS) has increased tremendously.
That puts them in a position where buying new is outside their budget. But even if they could afford it, they still would not get equipment for maybe a year, and then they would have to work through actual deployment and migration. That puts them close to 2028, when security vulnerability support ends, and certainly past 2026 for general software updates.
That is not even counting operating system support lagging on these servers because of their age: later versions of VMware, including VCF 9, are not supported, and Broadcom is strongly encouraging customers to make the move. So, they are between a rock and a hard place with no clean options.
The CTO asked, “What are we supposed to do? I can’t believe you are doing this to us.”
More than anything, I want to help them. But there is nothing we can do to help them in the way they want to be helped.
We talk a lot about how age is not a good proxy for risk, and that is true. So now we are trying to go through and de-risk where we can, looking for vulnerabilities that we can patch. Then there are the things we cannot patch or do anything about. For those, we must explore options like purchasing new, or bridging to the cloud when we cannot get new hardware in time and compliance requirements allow it.
It puts the customer in a hard position, and there are no clean answers for that. So, if there is no clean answer, the next best move is to reduce uncertainty.
Build the inventory and map the exposure
Reality is, you cannot assess risk if you do not know your assets, and most CMDBs have gaps.
How you get that inventory depends on what you already have. If you are using a vulnerability scanner like Nessus, Qualys or Rapid7, you likely have this data. Export it to a CSV, and now you are half done with the assessment.
If you do not have a scanner, Greenbone OpenVAS is free, open source and runs in Docker or on a VM. One scan gives you host platforms, mapped CVEs with severity scores and a structured output.
If you prefer something a little lighter, Nmap is still the standard. Run it with service and version detection (-sV) and XML output (-oX) against your own network ranges. That way you get active host IP addresses, open ports and service banners.
runZero offers a free tier and generally handles device fingerprinting better than Nmap, especially for things like network appliances and storage controllers.
Any of these paths gets you to the same place: a structured inventory with hostnames, platforms, versions and enough detail to look up what is vulnerable.
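If you go the Nmap route, a minimal sketch of that path might look like the following, assuming Python 3, Nmap on the PATH and a range you are authorized to scan. The network range and file names are placeholders.

```python
# Sketch: run an authorized Nmap scan with service/version detection,
# then flatten the XML output into a CSV inventory.
import csv
import subprocess
import xml.etree.ElementTree as ET

SCAN_RANGE = "10.0.0.0/24"   # placeholder: replace with your own range
XML_OUT = "inventory.xml"

# -sV enables service/version detection; -oX writes XML output.
subprocess.run(["nmap", "-sV", "-oX", XML_OUT, SCAN_RANGE], check=True)

tree = ET.parse(XML_OUT)
with open("inventory.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["ip", "port", "service", "product", "version"])
    for host in tree.findall("host"):
        addr = host.find("address").get("addr")
        for port in host.findall("./ports/port"):
            svc = port.find("service")
            if svc is not None:
                writer.writerow([
                    addr,
                    port.get("portid"),
                    svc.get("name", ""),
                    svc.get("product", ""),
                    svc.get("version", ""),
                ])
```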
Now, end of life is when the vendor stops selling a product. End of support is when the vendor stops issuing things like security patches. That is the date that determines your exposure. Once a platform crosses that line, the CVE list grows permanently and the patch list stops.
There is a free resource, endoflife.date. It’s a community-maintained database covering hundreds of platforms with lifecycle dates and a public API. For anything else, check vendor lifecycle pages.
The output is your inventory with end-of-support dates attached and a flag on every asset that has crossed its support boundary.
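As a rough sketch of that lookup against the endoflife.date public API, assuming Python 3 with the requests library: the product slugs and cycles below are illustrative, and per the API docs the eol field can be a date string or a boolean depending on the product.

```python
# Sketch: pull lifecycle data from endoflife.date and flag anything
# past its support boundary. Slugs/cycles are illustrative examples.
from datetime import date
import requests

PRODUCTS = {"ubuntu": "20.04", "windows-server": "2016"}  # slug -> cycle in use

for slug, cycle in PRODUCTS.items():
    resp = requests.get(f"https://endoflife.date/api/{slug}.json", timeout=30)
    resp.raise_for_status()
    match = next((c for c in resp.json() if c.get("cycle") == cycle), None)
    if match is None:
        print(f"{slug} {cycle}: cycle not found, check the vendor page")
        continue
    eol = match.get("eol")  # date string, or a boolean on some products
    past = eol is True or (isinstance(eol, str) and date.fromisoformat(eol) < date.today())
    print(f"{'FLAG' if past else 'OK'}: {slug} {cycle} (eol={eol})")
```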
For every flagged asset, the next step is finding out what is truly exploitable. You can have a software version that appears in a CVE but has been hardened by the OEM and is not actually exploitable.
If you are working from Nmap or doing a manual inventory, there are two databases you need to know about: NIST’s National Vulnerability Database and CISA’s Known Exploited Vulnerabilities catalog.
The difference between a system with 40 CVEs and no KEV entries versus a system with 12 CVEs and 3 KEV entries is the difference between manageable risk and active danger. Equipment age does not tell you which one you are looking at, which is why we need the CVE profile.
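A minimal sketch of that KEV cross-reference, again assuming Python 3 with requests: the feed URL and field names below match CISA's published JSON at the time of writing, but verify them against the KEV page, and the asset-to-CVE mapping is illustrative (in practice it comes from your scanner export).

```python
# Sketch: cross-reference per-asset CVE lists against CISA's KEV catalog.
import requests

KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")

# The KEV feed is a JSON object with a "vulnerabilities" array of entries
# keyed by "cveID"; build a set for fast membership checks.
kev_ids = {v["cveID"]
           for v in requests.get(KEV_URL, timeout=60).json()["vulnerabilities"]}

# Illustrative asset -> CVE mapping; replace with your own inventory data.
assets = {
    "db-server-01": ["CVE-2021-44228", "CVE-2019-0708"],
    "file-server-02": ["CVE-2020-1472"],
}

for asset, cves in assets.items():
    hits = [c for c in cves if c in kev_ids]
    print(f"{asset}: {len(cves)} CVEs, {len(hits)} in KEV {hits or ''}")
```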
Find it, score it, fix it
Now we use a weighted formula to score every asset.
The formula I use is KEV count times 20, plus highest CVSS times 4, plus months past end of support, plus bonuses for high data sensitivity, internet-facing exposure and assets that cannot be upgraded to post-quantum cryptographic standards. Adjust the weights to your organization’s risk appetite.
This approach aligns with CISA’s Stakeholder-Specific Vulnerability Categorization framework, which prioritizes exploitation status and mission context above overall severity scores. The specific weights are tunable. The principle that stays constant is that KEV entries outweigh CVSS severity, and CVSS severity outweighs age.
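As a sketch, the formula translates to something like the following. The KEV, CVSS and age weights come straight from the description above; the specific bonus values are assumptions to tune.

```python
# Sketch of the weighted score described above: KEV count times 20,
# plus highest CVSS times 4, plus months past end of support, plus
# bonus terms. Bonus values are illustrative assumptions -- tune them.
def risk_score(kev_count: int,
               highest_cvss: float,
               months_past_eos: int,
               sensitive_data: bool = False,
               internet_facing: bool = False,
               no_pqc_path: bool = False) -> float:
    score = kev_count * 20 + highest_cvss * 4 + max(months_past_eos, 0)
    score += 15 if sensitive_data else 0   # high data sensitivity
    score += 15 if internet_facing else 0  # internet-facing exposure
    score += 10 if no_pqc_path else 0      # no path to post-quantum crypto
    return score

# Example: 3 KEV entries, CVSS 9.8, 14 months past support, sensitive data.
print(risk_score(3, 9.8, 14, sensitive_data=True))  # 60 + 39.2 + 14 + 15 = 128.2
```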
The age-based queue had them backwards. The risk-based queue puts them in the right order and into three buckets, with a small tiering sketch after the list.
- Tier 1: immediate action required. These are assets past end-of-support with KEV catalog entries, especially in regulated environments or handling sensitive data. These have known and actively exploited vulnerabilities with no patches coming. In most regulatory frameworks, defending a risk acceptance position on these without compensating controls like network segmentation, WAF or IDS is difficult, and any such position must include remediation on a defined timeline.
- Tier 2: managed risk with documentation. These are assets past end-of-support with CVE counts but no current KEV entries, or assets approaching end-of-support within 12 months. Document the risk acceptance position: who signed off, under what conditions and for how long. The absence of that documentation is itself a finding in most compliance frameworks.
- Tier 3: monitored. This is everything still within its support window, receiving patches, with a manageable profile. These belong in the planning timeline with no immediate action. The key here is ensuring end-of-support dates are visible in the infrastructure calendar so these assets do not drift into Tier 1 through inattention.
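Here is a small sketch of those tiering rules, with the thresholds taken from the list above; the parameter names and function shape are assumptions.

```python
# Sketch of the three-tier bucketing described above. The rules mirror
# the list's definitions; months_to_eos is negative once support has ended.
def assign_tier(past_eos: bool,
                kev_count: int,
                cve_count: int,
                months_to_eos: int) -> str:
    if past_eos and kev_count > 0:
        return "Tier 1: immediate action required"
    if (past_eos and cve_count > 0) or (0 <= months_to_eos <= 12):
        return "Tier 2: managed risk, document acceptance"
    return "Tier 3: monitored, track end-of-support dates"

print(assign_tier(past_eos=True, kev_count=3, cve_count=12, months_to_eos=-14))
print(assign_tier(past_eos=False, kev_count=0, cve_count=5, months_to_eos=8))
```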
Last layer: NIST finalized its post-quantum cryptographic standards in 2024, and not all legacy hardware can support the new algorithms. Some replacements will be driven by cryptographic migration requirements independent of the CVE profile.
Do not skip post-quantum. “Harvest now, decrypt later” is real.
What you walk away with
Once you complete the assessment, you are left with three things that change the planning conversation.
First, you have a prioritized refresh queue that is sequenced by risk rather than age. That answers the question of where we spend first, and that is defensible analysis.
Second, you get a documented risk acceptance position for everything you are choosing not to refresh right now. This is the compliance instrument most organizations are missing. It names the asset, the exposure profile, the business justification and who signed off.
Third, you get a refresh sequence that auditors, leadership and your own team can defend. At some point, a CISO, board member or auditor will ask why a particular system was still running. The answer cannot be, “Well, it’s not in middle school yet.” The answer is documented, it is risk-informed and it is tied back to real data.
If you want the refresh queue to stay current as new CVEs are published, you can deploy a platform like Wazuh that cross-references your assets against CVE databases automatically. This one-time assessment then becomes a periodic process fed by that ongoing stream.
Today, you walk away with a starting point that any team can execute without external consultants or significant budget. Most companies that run through it find at least one piece of the picture they did not have before, and that is usually enough to change the order of the queue.
In an environment where refresh budgets are tight and timelines stretched, the order of the queue matters most.