
Published on September 13, 2023

IT has long been in a fascinating position within the organization. Tasked with, as they say, keeping the lights on while also enabling innovation and modernization, the IT department has always had its work cut out for it. The last few years, especially, haven’t been easy, as we witnessed one wave of disruption after another. Be it blockchain or COVID-19, grappling with disruptors has become an everyday job for IT teams. Now, there’s a new disruptor on the block, breaking all the rules and changing the game as we know it: generative AI.

In this ever-evolving realm of technology, amid the relentless pursuit of operational excellence, cloud adoption has long been considered a guaranteed path to gaining that much-coveted competitive edge. Matters, however, took a turn for the worse during the past year as organizations slashed cloud budgets amid a worldwide economic slowdown, and cloud deals took almost 50% longer than normal to close. But we may be able to anticipate a more favorable turn of events thanks to generative AI. As companies big and small fervently rush to capitalize on its benefits, this new-gen disruptor might just be able to revive the cloud market.

The overarching symphony of the cloud and generative AI

If it isn’t clear already, organizations are slowly coming to realize that those who rely solely on on-premises technology might be left out of the generative AI revolution. To leverage the many benefits of generative technology effectively, the cloud is the way forward. An undeniable game-changer that levels the playing field for businesses of all sizes, the cloud offers the scalability, flexibility, and cost-efficiency that make it the best platform for generative AI.

While this breakthrough technology is making headlines across the world, the fact remains that it is still in its nascent stages, which means it is evolving and going through multiple iterations every day. Take ChatGPT, for example. When the conversational engine launched, it was built on GPT-3.5, part of the generative pre-trained transformer (GPT) family. Before that came GPT-2 in early 2019, whose full model was initially withheld from public release over concerns that it could be misused. This reinforces the fact that technologies of this nature are bound to remain in a state of flux for the foreseeable future. To further complicate matters, the science behind large language models is itself complex. Languages like Python and Rust and frameworks like TensorFlow and PyTorch aren’t standard in most corporate environments. Put all of this together and what you have is new, complex software with many moving parts, which makes it an unlikely on-premises candidate. Investing on-premises in something of this nature, with no guarantee of long-term value, is rarely a sound decision for an organization.

Before diving headfirst into the world of transformer technology, it is also vital to consider the financial implications. Building and running generative AI models requires specialized hardware, notably GPUs. Famous for their significant computational power, GPUs are expensive processors that are imperative if organizations want to run generative technology on-premises. The downside of building such niche infrastructure on-premises is that the computational power will not be utilized at all times; it is only put to use when a generative-AI-specific service request comes in. It also bears mentioning that organizations will have to incur the added costs of significantly higher energy consumption and cooling requirements. Thus, the cost-benefit analysis of such an investment looks quite bleak. From this perspective, the cloud’s pay-as-you-go model, where you pay only when you use the service, appears to be a much more feasible option.
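To make the utilization argument concrete, here is a minimal back-of-envelope sketch in Python. Every figure in it (hardware price, amortization period, power and cooling overhead, utilization rate, and the cloud’s hourly GPU rate) is a hypothetical assumption for illustration, not a vendor quote; swap in your own numbers to see where the break-even point sits for your organization.

# Back-of-envelope comparison: an on-premises GPU that sits mostly idle
# vs. paying only for the hours actually used in the cloud.
# All figures below are hypothetical assumptions for illustration.

ONPREM_GPU_CAPEX = 30_000        # assumed purchase price of one GPU server (USD)
AMORTIZATION_YEARS = 3           # assumed useful life of the hardware
POWER_COOLING_PER_YEAR = 3_000   # assumed annual energy + cooling overhead (USD)
UTILIZATION = 0.15               # assumed fraction of hours the GPU is actually busy
CLOUD_RATE_PER_GPU_HOUR = 4.00   # assumed on-demand cloud price per GPU-hour (USD)

hours_per_year = 24 * 365
busy_hours = hours_per_year * UTILIZATION

# On-premises: the hardware and its overhead are paid for whether it is busy or not.
onprem_annual_cost = ONPREM_GPU_CAPEX / AMORTIZATION_YEARS + POWER_COOLING_PER_YEAR
onprem_cost_per_busy_hour = onprem_annual_cost / busy_hours

# Cloud: only the busy hours are paid for.
cloud_annual_cost = busy_hours * CLOUD_RATE_PER_GPU_HOUR

print(f"On-premises annual cost:          ${onprem_annual_cost:,.0f}")
print(f"Cloud (pay-per-use) annual cost:  ${cloud_annual_cost:,.0f}")
print(f"Effective on-prem cost per busy GPU-hour: ${onprem_cost_per_busy_hour:.2f}")

With these assumed figures, idle capacity pushes the effective on-premises cost per useful GPU-hour well above the cloud’s hourly rate, and the calculus only starts to tip back toward on-premises as utilization climbs.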

Forge ahead, protected against uncertainties

When you take the technical and financial challenges out of the mix, the cloud gives you the opportunity to focus on pure innovation. Organizations do not have to worry about infrastructure requirements, maintenance costs, or keeping systems up and running; all they need to do is figure out what value they want to extract from generative AI and start running it in the cloud, putting innovation first.

However, one factor organizations should be careful about is security. Generative AI requires immense amounts of data to be processed rapidly in an environment that might not have clear boundaries or established controls. The nature and quality of that data need to be monitored, and clear policies around data jurisdiction need to be in place. Otherwise, unprecedented cybersecurity and data privacy gaps can open up, which organizations need to take into account before embarking on their generative tech journey. For example, if an organization builds a model for a specific customer using data from that customer’s environment, it needs to ensure the training data is not reused for any other customer, as that would constitute a privacy breach.

The good news is that most major cloud providers have well-established protocols in place for cybersecurity, data residency, jurisdiction, and more. With the advent of generative AI, well-known providers have already launched, or are in the process of launching, dedicated generative AI services.

Generative AI looks well set to accelerate the next wave of cloud adoption. In many ways, this is a natural progression from the IT modernization and innovation we’ve witnessed over the last decade. Organizations that realized the long-term value of the cloud early on and laid the foundation will reap the benefits now, while late adopters will have to step up their game.

While this new world holds limitless potential, we’re yet to see where we’ll land. But one thing is for sure: The future is intelligent and it is in the cloud.

Priyanka Roy

Senior Enterprise Evangelist, ManageEngine

Priyanka Roy is a Senior Enterprise Evangelist at ManageEngine. She takes a keen interest in the business and societal impact of technological advancements in the fields of cybersecurity, data privacy, and artificial intelligence.

As part of her role at ManageEngine, Priyanka has liaised with top analyst research firms such as Gartner, Forrester, and EMA to study the trends that influence the IT management landscape. She works closely with ManageEngine’s technical teams to stay up to date with the latest innovations disrupting the IT industry.

An English literature graduate with an MBA in Marketing, Priyanka has had her expert opinions, industry insights, and research featured in numerous publications, including CSO Online, EdTech UK, and Dataversity.
