AI predictions for 2024

Published on December 21, 2023

It’s that time of year again — predictions season. And we’ve got some spot-on AI predictions for 2024.

With upcoming elections in the E.U., U.S., and Ukraine, as well as the Summer Olympics in Paris, there is a great deal of consternation about AI-fueled attacks in the year ahead.

Be it emerging AI spear phishing tools, AI-based vishing, AI-generated code, or AI-powered security tools, AI will be used aggressively by organizations and bad actors alike. AI spans many different issues, but for the coming year, here are several AI-related predictions that particularly deserve our attention.

Generative AI won’t replace that many jobs.

Despite the perpetual fearmongering, generative AI (genAI) isn’t going to upend the workplace as we know it. In fact, according to Forrester’s 2023 Generative AI Jobs Impact Forecast, genAI is expected to replace only 1.5% of jobs (roughly 2.4 million) by 2030, while many more (6.9%, or 11.08 million) will be influenced or augmented rather than replaced. This makes a lot of sense.

When it comes to genAI, there are still many obstacles to tackle. Until questions around copyright, plagiarism, hallucinations, and model bias are resolved, we won’t see many jobs lost to genAI.

That said, certain industries are more vulnerable to genAI-induced job displacement than others.

The occupations most at risk of being replaced sit in Forrester’s upper-right quadrant, which it describes as “high genAI influence” and “easier to automate.” This quadrant includes jobs such as proofreaders, technical writers, social science research assistants, and statistical assistants and, to a lesser extent, paralegals, legal assistants, and archivists.

Some occupations are equally influenced by genAI but much harder to automate; computer programmers, poets, creative writers, and editors fall into this category. Although these folks are far less likely to be replaced by genAI, their work is still likely to be augmented by it.

AI will play a prominent role in evasion techniques, spear phishing, and predictive security.

In 2024, it will be vital to stay on the lookout for AI-fueled evasion techniques. Bad actors will use adaptive malware and polymorphic ransomware to exploit zero-day vulnerabilities. We can also expect to see more attacks built on AI-generated code; sales of AI spear phishing tools are already on the rise on the dark web, and AI-based vishing is on the horizon.

Of course, AI won’t only be used for evil. It will expand into predictive security analysis as well. Organizations will, and should, use AI-powered tools to enrich their risk assessments with contextual data.

We can expect to see more domain-specific LLMs.

Over the next twelve months, we will see large language models (LLMs) tailored to specific domains in order to produce more precise interpretations of the data they analyze. Domain-specific LLMs will help organizations better process industry-specific jargon, which in turn will enable the models to better understand and interpret user inputs.
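
What might that look like in practice? One common route to a domain-specific model is continued training of a general checkpoint on in-domain text. Below is a minimal, hypothetical sketch using the Hugging Face transformers and datasets libraries; the base model (gpt2), the corpus file (domain_corpus.txt), and the hyperparameters are placeholders, not recommendations.

```python
# A minimal sketch of domain-adaptive fine-tuning: continue training a
# general causal LM on in-domain text. All names below are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "gpt2"  # placeholder; any causal LM checkpoint works
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base_model)

# Hypothetical in-domain corpus (e.g., legal filings or clinical notes),
# one document per line.
corpus = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = corpus["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-llm", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # mlm=False: train on next-token prediction, as causal LMs require
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The point of the exercise is the data, not the code: the same few lines yield a legal, medical, or security model depending entirely on what sits in the corpus file.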

And there will be a rise in task-specific, purpose-built language models.

General-purpose LLMs have dominated news headlines for the past year, and that will certainly continue into 2024. In the coming year, however, we will also hear more chatter about the benefits of purpose-built LLMs: narrower, task-specific models. Given the high expenditure (and often uncertain ROI) that comes with building out LLMs, it’s no wonder that organizations are looking to smaller, purpose-built models. These models leverage domain-specific information, provide contextually relevant results, and frequently cost far less.

Besides, not every use case needs an LLM. Purpose-built models are more than adequate for certain use cases, such as user behavior analysis, document analysis, or predictive maintenance.
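
To make that concrete, here is a minimal, hypothetical sketch of a purpose-built model for the first of those use cases, user behavior analysis. It uses scikit-learn’s Isolation Forest on made-up session features; the features, data, and thresholds are illustrative assumptions, not a production design.

```python
# A purpose-built model for user behavior analysis: an Isolation Forest
# flags anomalous sessions from a few simple features. No LLM required.
# The features and data below are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-session features: [login hour, MB downloaded, failed logins]
normal_sessions = np.column_stack([
    rng.normal(10, 2, 500),   # logins cluster around 10:00
    rng.normal(50, 15, 500),  # typical download volume in MB
    rng.poisson(0.2, 500),    # failed logins are rare
])
suspicious = [[3.0, 900.0, 6.0]]  # 3 a.m., huge download, repeated failures

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_sessions)
print(model.predict(suspicious))  # [-1] marks the session as anomalous
```

A model like this trains in seconds on commodity hardware, which is precisely the cost argument behind the purpose-built approach.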

Deepfake audio and video will facilitate election-related misinformation, and they pose an even greater risk: stock market manipulation.

Many are worried about election cyberattacks, misinformation, and election-related deepfakes being disseminated on social media. And to a degree, they are right to worry. I was particularly worried about deepfakes’ potential to compromise the 2020 US election. Fortunately, that wasn’t a big issue that year. That said, deepfake technologies have progressed substantially since then, and we will see convincing election-related synthetic media proliferating across social media in 2024.

Nevertheless, election-altering deepfakes are not at the very top of my list of concerns. When it comes to deepfake audio and video attacks, an arguably greater danger lies in bad actors’ ability to use synthetic media to shake capital markets and move stocks.

A stock market deepfake attack is arguably more dangerous than a political deepfake attack because of the damage that can be inflicted by the time the deepfake is exposed. For example, a synthetic video of a presidential candidate may be revealed to be bogus in a matter of hours; unless the video spreads on the morning of election day, the damage can likely be kept to a minimum. That is most definitely not the case when it comes to stock market manipulation. If a convincing deepfake video of the CEO of a publicly traded company were to spread across social media, billions of dollars of market cap could be moved in no time at all.

More synthetic media legislation is on the horizon.

Although no federal deepfake legislation has passed, there has been a flurry of activity at the state level. Opting not to wait for Congress and the Federal Election Commission to act, several states (Texas, California, Minnesota, and Washington) took it upon themselves to pass laws addressing synthetic media in election-related ads. Then last month, Michigan Governor Gretchen Whitmer signed a bill requiring a disclosure for all political advertisements that use AI.

Similar legislation has been put forward in Illinois, Kentucky, New Jersey, and New York. Given how fast the technology is improving, it is no surprise that lawmakers are beginning to take political deepfakes seriously. We can expect more states to pass election-related synthetic media legislation in the upcoming year, and it’s a serious issue to watch in 2024.

As a final note, this article was not written by an AI. What’s more, for the foreseeable future, most of the content you’ll be reading will be generated by humans.
