How AI is Redefining the Landscape of Cloud & DevOps

Paul Jones

Senior Director, Cloud & DevOps

Generative AI continues to amaze. Since ChatGPT's launch in November 2022, we've witnessed a technological revolution at a pace not seen since the introduction of the smartphone. In what feels like the blink of an eye, we've integrated tools for content generation and analysis into our daily routines, transforming the way we work.

In Cloud & DevOps, much of the work we do mirrors software engineering. We write code, craft documentation, and develop tests. However, our code doesn't always deliver product features; often it builds the very infrastructure our projects run on. We've already seen how effective this technology can be at generating and analyzing that kind of code.

Yet, beyond code generation and analysis, there’s untapped potential. Where can we expand and grow?

Here are five of the ideas we're beginning to explore.

1. Automatically Generated Diagrams – and Code From Diagrams

Diagrams that illustrate software systems are crucial in our role as solution architects. We use them to design, plan, explain our solutions to others, and seek approval from subject matter experts before we start to build.

However, these diagrams often lose their relevance after fulfilling their initial purpose. Following an Architecture Review Board session, they may fall by the wayside, failing to accurately depict the dynamic nature of the production environment they were designed to represent.

Contrary to what some may believe, these diagrams are often not pictures, but structured text files (such as XML). As we are well aware, Large Language Models (LLMs) are proficient in managing various types of text. So, to prevent diagrams from becoming outdated, we could use our existing software pipelines to automate their generation from our code. As changes are made to the infrastructure as code or other artifacts, a model could evaluate the differences within the context of our codebase and update the diagrams accordingly. These could then be presented to human reviewers for final edits, approval, and incorporation.
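As a rough illustration, the sketch below shows what such a pipeline step might look like. It assumes a draw.io diagram stored as XML in the repository, Terraform under an `infra/` directory, and a placeholder `call_llm` function standing in for whichever model the team uses; none of these names come from a specific tool recommendation.

```python
#!/usr/bin/env python3
"""Pipeline step sketch: regenerate an architecture diagram from an
infrastructure-as-code diff. Illustrative only."""

import subprocess
from pathlib import Path

# Assumption: the diagram is a draw.io file (XML text) kept in the repo.
DIAGRAM_PATH = Path("docs/architecture.drawio")


def call_llm(prompt: str) -> str:
    """Placeholder for whichever model endpoint the team actually uses."""
    raise NotImplementedError("wire this up to your model provider")


def iac_diff() -> str:
    """Return the infrastructure-as-code changes on this branch."""
    result = subprocess.run(
        ["git", "diff", "origin/main...HEAD", "--", "infra/"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout


def main() -> None:
    diff = iac_diff()
    if not diff.strip():
        return  # no infrastructure changes, nothing to redraw

    prompt = (
        "You maintain an architecture diagram stored as draw.io XML.\n"
        "Update it to reflect the Terraform changes below and return only "
        "the complete updated XML.\n\n"
        f"--- CURRENT DIAGRAM ---\n{DIAGRAM_PATH.read_text()}\n"
        f"--- TERRAFORM DIFF ---\n{diff}\n"
    )
    DIAGRAM_PATH.write_text(call_llm(prompt))
    # A later pipeline stage would raise this as a pull request so a human
    # reviewer can edit and approve the regenerated diagram.


if __name__ == "__main__":
    main()
```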

Conversely, this process could also work in reverse. If we consider the adage that "a picture is worth a thousand words", the same principle could apply to an LLM prompt. Instead of asking a model to "generate compliant Terraform code" for a specific cloud resource, we could provide a diagram that encapsulates all our resources, their context, and their interconnectedness. This method could yield superior results compared to basic prompt engineering.
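In the same hedged spirit, the reverse direction might look like the snippet below: the diagram's XML becomes context for the prompt, and the model is asked for Terraform that is consistent with it. The diagram path, the "payments-queue" resource, and `call_llm` are all hypothetical.

```python
from pathlib import Path


def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this up to your model provider")


# Assumption: the diagram lives in the repo as draw.io XML, and
# "payments-queue" is a purely hypothetical resource name.
diagram_xml = Path("docs/architecture.drawio").read_text()

prompt = (
    "The XML below is a draw.io diagram of our target architecture, "
    "including every resource and how they connect.\n"
    "Generate Terraform for the 'payments-queue' resource so that it is "
    "consistent with the rest of the diagram. Return only HCL.\n\n"
    + diagram_xml
)

Path("infra/payments_queue.tf").write_text(call_llm(prompt))
```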

2. Automated Compliance & Security Fixes

Within our pipelines, we already have stages dedicated to scanning for security and compliance concerns. Typically, the output is a laundry list of potential improvements that developers need to sift through, assess for relevance, and then manually implement.

This task, often referred to as a developer "chore," is undeniably critical, but doesn't necessarily contribute to the project's functional value.

This is where Generative AI can step in. By leveraging the items on the laundry list and using our codebase as a reference point, it can generate appropriate modifications and present them as a Pull Request to a human reviewer – just like any other human-contributed changes.
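A minimal sketch of that loop, assuming a checkov-style JSON report and the GitHub CLI (`gh`) for raising the pull request, might look like this; the report shape, file layout, and `call_llm` placeholder are assumptions rather than a prescribed toolchain.

```python
import json
import subprocess
from pathlib import Path


def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this up to your model provider")


# Assumption: a checkov-style JSON report; adjust the keys to your scanner.
report = json.loads(Path("scan-results.json").read_text())
failed_checks = report.get("results", {}).get("failed_checks", [])

subprocess.run(["git", "checkout", "-b", "chore/auto-compliance-fixes"], check=True)

for finding in failed_checks:
    target = Path(finding["file_path"])
    prompt = (
        f"Finding: {finding['check_name']} ({finding['check_id']})\n\n"
        "Rewrite the file below so that it passes this check without "
        "changing its behavior. Return only the full file contents.\n\n"
        + target.read_text()
    )
    target.write_text(call_llm(prompt))

# Present the result exactly like any human-contributed change.
subprocess.run(["git", "commit", "-am", "chore: automated compliance fixes"], check=True)
subprocess.run(["gh", "pr", "create", "--fill"], check=True)
```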

This methodology isn't limited to simple requirements either. It can be applied to more complex frameworks such as the AWS Well-Architected Framework or the CIS Benchmarks. Both are fundamental to cloud best practices, yet both span hundreds of pages and demand considerable human effort to comprehend and implement. With Generative AI, this process could be made significantly more efficient.

3. Gather Evidence for Architectural Review

Before an application is launched, enterprise application teams are often tasked with creating comprehensive documentation that verifies the application's compliance with various standards – these include logging, availability, observability, disaster recovery, backups, rollbacks, exit strategies, and more.

The process of assembling this evidence can be time-consuming, potentially taking multiple weeks. In some instances, this documentation is a prerequisite before a project can even be started, creating a hindrance that, at its worst, stifles innovation.

However, with an appropriate architecture in place, an application fronted by an LLM, which can process the natural language of the requirements, could collate suitable logs, construct diagrams, and accumulate evidence of testing, quality, and code coverage. All of this could be formatted to align with the specific requirements of the enterprise.
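A sketch of how that collation might be orchestrated is below. The requirement names, the evidence sources they map to, and the `call_llm` placeholder are all assumptions; a real implementation would pull from the enterprise's own systems of record.

```python
from pathlib import Path


def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this up to your model provider")


# Hypothetical mapping from each review requirement to the raw material
# that evidences it (log samples, policies, runbooks, test reports, ...).
EVIDENCE_SOURCES = {
    "Logging": ["ops/log-config.yaml", "samples/app-logs.txt"],
    "Disaster recovery": ["runbooks/dr-playbook.md"],
    "Backups": ["ops/backup-policy.yaml"],
    "Test and code coverage": ["reports/coverage.xml"],
}

sections = []
for requirement, sources in EVIDENCE_SOURCES.items():
    material = "\n\n".join(Path(path).read_text() for path in sources)
    sections.append(call_llm(
        f"Requirement: {requirement}\n"
        "Using only the material below, draft the evidence section an "
        "architecture review board would expect, and flag anything missing.\n\n"
        + material
    ))

Path("review").mkdir(exist_ok=True)
Path("review/evidence-pack.md").write_text("\n\n".join(sections))
```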

In an ideal world, governance decisions would be digitized. Until we get there, Generative AI could help us bridge the gap. At Synechron, we've already had some success with LLM applications that generate similar types of documents, but in business rather than technical settings. A similar solution architecture would be effective for this problem, too.

4. Automated Upgrades

Modern software relies heavily on numerous external dependencies. The most rapidly expanding – and concerning – security threat comes from the software supply chain. Attacks that "poison the well" by compromising the dependencies our software relies on are rising at an alarming rate.

Existing solutions, like Dependabot from GitHub, already monitor changes in dependencies and suggest upgrades to newer versions.

However, there are instances where a simple version upgrade doesn't suffice – the code itself needs to be adjusted to accommodate these changes. This, too, is a "chore" that, in the interest of improving developer efficiency, we are keen to automate.
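One hedged way to sketch that automation: after the dependency bump, run the build, and if it fails, hand the failure output and the affected file to a model for a proposed fix. The `make test` entry point, the file path, and `call_llm` are placeholders, not part of any specific product.

```python
import subprocess
from pathlib import Path


def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this up to your model provider")


# Assumption: `make test` is the project's build-and-test entry point.
result = subprocess.run(["make", "test"], capture_output=True, text=True)

if result.returncode != 0:
    # Hypothetical: in practice you would parse the failure output to find
    # the files that no longer compile or pass against the new dependency.
    broken_file = Path("src/payments/client.py")
    prompt = (
        "A dependency upgrade caused the build failure below. Update the "
        "file so it works with the new version while keeping its behavior "
        "identical. Return only the full file contents.\n\n"
        f"--- BUILD OUTPUT ---\n{result.stdout}\n{result.stderr}\n"
        f"--- FILE ---\n{broken_file.read_text()}\n"
    )
    broken_file.write_text(call_llm(prompt))
    # Re-run the build; if it is green, raise the fix as a pull request
    # alongside the original version bump.
```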

5. Risk-Assessed Release Notes

Every party involved in a software program seeks assurance that a release is safe. The change failure rate is one of the four key metrics used to measure DevOps team effectiveness.

Emerging standards such as Conventional Commits are admirable initiatives – they simplify the process of assessing the content of a release. Does it introduce major new features, or just minor bug fixes? Could any component be categorized as a breaking change? The downside is that these standards rely on developer self-certification, and as we all know, humans are not always the best at predicting computer behavior.

Generative AI would be, at minimum, a useful backstop. A well-tuned model could evaluate the differences between two versions of an application and make a judgment on its safety. Does the change induce shifts in data? Is the modified code thoroughly tested? Is it encapsulated by a feature flag that allows for easy deactivation should it cause an issue?

By employing this technology, we could generate automated release notes that not only list features and bug fixes, but also assign a risk grade to each. Such a list could supplement existing static code analysis tools and bring more significance to often-misused metrics like test coverage.
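A minimal sketch of generating such risk-graded notes from the diff between two tags might look like the following; the tag names, output file, and `call_llm` placeholder are assumptions.

```python
import subprocess
from pathlib import Path


def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this up to your model provider")


# Hypothetical release tags; substitute the previous and candidate versions.
PREVIOUS_TAG, CANDIDATE_TAG = "v1.4.0", "v1.5.0"


def git_output(*args: str) -> str:
    return subprocess.run(
        ["git", *args], capture_output=True, text=True, check=True
    ).stdout


commits = git_output("log", "--oneline", f"{PREVIOUS_TAG}..{CANDIDATE_TAG}")
diff = git_output("diff", f"{PREVIOUS_TAG}..{CANDIDATE_TAG}")

notes = call_llm(
    "For each change below, write a release-note entry and assign a risk "
    "grade (LOW / MEDIUM / HIGH). Consider whether the change migrates data, "
    "how well the modified code is tested, and whether it sits behind a "
    "feature flag that could switch it off.\n\n"
    f"--- COMMITS ---\n{commits}\n--- DIFF ---\n{diff}\n"
)

Path("RELEASE_NOTES.md").write_text(notes)
```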

Once the application team is satisfied, the compliance and change control teams will have a more comprehensive understanding of the release that they're asked to approve.

Generative AI has evolved extremely quickly, but there is plenty more to come. We look forward to what lies ahead and the capabilities we'll be able to harness.

The Author

Paul Jones

Senior Director

Paul Jones is a Senior Director and Cloud & DevOps practice lead at Synechron. He is a Digital Transformation expert and a cloud implementation specialist (Google Cloud Platform, Amazon Web Services).

You can get in touch on LinkedIn or via email.

Synechron’s Cloud and DevOps practice partners with AWS, Azure and GCP to revolutionize the way technology is delivered across the financial services industry. We provide an array of comprehensive services, ranging from enterprise strategy to DevSecOps and large-scale application modernization.
