Comment by sai18
Absolutely true, and the challenge is that a large portion of modernization projects fail (around 70%).
The main reasons are the loss of institutional knowledge, the difficulty of untangling 20–30-year-old code that few people still understand, and, most importantly, proving through testing that the new system is a true 1:1 functional replica of the original.
Modernization is an incredibly expensive process involving numerous SMEs, moving parts, and massive budgets. Leveraging AI creates an opportunity to make this process far more efficient and successful overall.
IMO, not the best use case for LLMs.
COBOL projects have millions of lines of code. Any prompt/reasoning will rapidly fill the context window of any model.
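A quick back-of-envelope calculation makes the point. The numbers below are assumptions (codebase size, tokens per line, and context window size all vary), but the order of magnitude is what matters:

```python
# Rough sketch: how many context windows would a legacy COBOL estate fill?
# All three figures below are assumptions for illustration only.
lines_of_code = 5_000_000     # assumed mid-sized legacy codebase
tokens_per_line = 10          # assumed average tokens per COBOL line
context_window = 200_000      # assumed large-model context limit, in tokens

total_tokens = lines_of_code * tokens_per_line
chunks_needed = -(-total_tokens // context_window)  # ceiling division

print(f"~{total_tokens:,} tokens, ~{chunks_needed:,} full context windows")
```

Even with generous assumptions, the codebase is hundreds of context windows deep, so no single prompt can "see" the whole system at once.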
And you'd probably have better luck with a tokenizer that actually understands COBOL keywords.
A better bet is probably a data miner that slowly digests all the code and requirements into a proprietary information-retrieval solution or ontology that can help answer questions...
What an engineer tells you can be inaccurate, incomplete, outdated, etc.
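A minimal sketch of that retrieval idea: split COBOL source into paragraphs and build an inverted index, so questions are answered by looking up relevant paragraphs rather than stuffing the whole codebase into a prompt. The paragraph-detection heuristic here (a name on its own line ending in a period) is a deliberate simplification; real COBOL parsing is far messier.

```python
# Sketch of the "data miner" idea: index COBOL paragraphs for retrieval.
# Paragraph detection is a naive assumption, not a real COBOL parser.
import re
from collections import defaultdict

def split_paragraphs(source: str) -> dict[str, str]:
    """Map paragraph names to their bodies (naive heuristic)."""
    paragraphs: dict[str, list[str]] = {}
    current = None
    for line in source.splitlines():
        m = re.match(r"^([A-Z0-9-]+)\.\s*$", line.strip())
        if m:                      # a new paragraph header like "CALC-INTEREST."
            current = m.group(1)
            paragraphs[current] = []
        elif current:              # body line belongs to the current paragraph
            paragraphs[current].append(line.strip())
    return {name: " ".join(body) for name, body in paragraphs.items()}

def build_index(paragraphs: dict[str, str]) -> dict[str, set[str]]:
    """Inverted index: token -> set of paragraph names mentioning it."""
    index: dict[str, set[str]] = defaultdict(set)
    for name, body in paragraphs.items():
        for token in re.findall(r"[A-Z0-9-]+", body.upper()):
            index[token].add(name)
    return index

# Tiny made-up sample to show the lookup in action.
sample = """\
CALC-INTEREST.
    COMPUTE WS-INTEREST = WS-BALANCE * WS-RATE.
PRINT-REPORT.
    DISPLAY WS-INTEREST.
"""
index = build_index(split_paragraphs(sample))
print(sorted(index["WS-INTEREST"]))  # both paragraphs touch this variable
```

A real system would layer embeddings, data-flow analysis, and the mined requirements on top, but even this crude index answers "where is WS-INTEREST used?" without an LLM in the loop.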