Published on March 31, 2021

Although many have opined that artificial intelligence will result in widespread job displacement, this is far from certain. In reality, AI seems far more likely to continue providing workers with a “super-human level of productivity,” automating mundane tasks and freeing up time for workers to focus on more complex projects. Moreover, aside from a few attention-seeking pundits, most AI experts acknowledge that we are still very far from artificial general intelligence. In fact, AI-based models work best—and will continue to work best—with humans in the loop.

“Human-in-the-loop AI” is here to stay

Virtually no AI models are correct 100% of the time. Algorithmic decision-making requires a human in the loop to verify the integrity of the data, audit the model, provide explanations for decisions, and adjust the model for unseen phenomena. The recent pandemic offers a good example, as many AI-based models had to be adjusted by humans to account for the abrupt shift from office to remote work. Automobiles offer another interesting case study. Despite promises from some bullish autonomous vehicle executives, it is likely that we’ll continue to have partial automation and human-in-the-loop AI for the foreseeable future.

Fully automated vehicles are still a long way out

According to the Society of Automotive Engineers (SAE), there are six levels of driving automation—0: No automation; 1: Driver assistance; 2: Partial automation; 3: Conditional automation; 4: High automation; and 5: Full automation. Despite aggressive predictions from Elon Musk and others, the fact remains that we’re still over a decade away from widespread adoption of Level 5, fully autonomous cars.
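The SAE taxonomy above can be sketched as a simple lookup. This is an illustrative mapping only—the level names come from the list above, while the `requires_human_driver` helper and its cutoff are assumptions added for clarity (at Levels 0–2 a human must drive or supervise, and Level 3 still requires a fallback-ready driver):

```python
# Illustrative mapping of the six SAE driving-automation levels.
SAE_LEVELS = {
    0: "No automation",
    1: "Driver assistance",
    2: "Partial automation",
    3: "Conditional automation",
    4: "High automation",
    5: "Full automation",
}

def requires_human_driver(level: int) -> bool:
    """Levels 0-2 need constant human supervision; Level 3 still
    needs a fallback-ready human driver. Only 4 and 5 do not."""
    return level <= 3

print(SAE_LEVELS[2], "- human required:", requires_human_driver(2))
```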

We’ve seen breakthroughs from Tesla, Alphabet’s Waymo, and Argo AI—which has partnered with Volkswagen and Ford—but we’ve also seen the goalposts repeatedly moved, as companies walk back claims and recalibrate their timelines. With the possible exception of Musk, who is targeting a beta version of fully automated driving by the end of 2021, a sober assessment places any kind of large-scale, fully autonomous driving over a decade away. Regulatory issues aside, the technology simply isn’t there. Even if sensors worked consistently in all types of weather, AI-based systems would still struggle with edge cases; e.g., differentiating between a flock of birds and wind-strewn leaves.

AI-powered adversarial attacks, data poisoning, deepfakes, and blockchain technologies are all on the battlefield

IT security personnel already rely on artificial intelligence to identify anomalous user behavior and potential network breaches; however, some bad actors have access to similar technologies. Through AI-powered attacks, they may poison the datasets used to train neural networks, or they may mount social engineering attacks using deepfake audio. As deepfakes become increasingly realistic, IT infrastructure executives must continue to rely on AI to keep companies’ data safe. We’ve already seen the tech industry launch a bevy of start-ups devoted solely to verifying the authenticity of audio and video. And to authenticate the source of a signature, voice, image, or document, businesses will increasingly look to the blockchain. Such technologies will likely become mainstream in the battle to authenticate data.
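The anomaly detection mentioned above can be illustrated in miniature. Real security products use far richer models, but a minimal sketch—with a hypothetical login-count feature and an assumed three-standard-deviation threshold—shows the basic idea of flagging behavior that deviates from a user’s baseline, with flagged events handed to a human analyst:

```python
import statistics

def is_anomalous(history: list[float], new_value: float,
                 threshold: float = 3.0) -> bool:
    """Flag a value that deviates from the historical mean by more
    than `threshold` standard deviations (a toy anomaly test)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return new_value != mean
    return abs(new_value - mean) / stdev > threshold

daily_logins = [4, 5, 3, 6, 4, 5, 4]   # a week of normal activity
print(is_anomalous(daily_logins, 5))   # typical day: not flagged
print(is_anomalous(daily_logins, 40))  # spike: flagged for human review
```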

Regulatory agencies will struggle to keep up

As is almost always the case, legislators are reactive. The GDPR, for example, doesn’t include a single mention of artificial intelligence; however, the California Privacy Rights Act (CPRA) does make reference to AI, and this will likely become a trend. It is vital that the data within AI models is used as it was intended to be used—and only as it was intended to be used. For example, a health care organization may legitimately use patient data to help extend patients’ lifespans, but it is important to ensure this data isn’t used for other purposes, such as targeted advertisements or lead-generation initiatives. Again, blockchain technologies can help authenticate the source of data should it end up being used inappropriately. Nevertheless, regulators will likely struggle to keep up.
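The provenance idea behind such blockchain schemes can be sketched with ordinary hashing. This is a simplified illustration, not any specific product: the consent record and ledger names are hypothetical, and a real deployment would anchor the digest on a distributed ledger rather than in a local variable:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return the SHA-256 digest of a record, used as a tamper-evident
    fingerprint of the data and its stated terms of use."""
    return hashlib.sha256(data).hexdigest()

consent_record = b"patient-123: data may be used for longevity research only"
ledger_entry = fingerprint(consent_record)  # anchored at consent time

# Later, before any secondary use, re-verify against the ledger:
print(fingerprint(consent_record) == ledger_entry)      # intact record
print(fingerprint(b"altered record") == ledger_entry)   # tampering detected
```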


Despite the probable increase in AI-powered cyberattacks and lawmakers’ failure to stay ahead of technological innovation, the future of AI looks bright. Artificial intelligence is here to augment humans’ work lives; for the most part, it is not going to replace workers. And rather than Level 5 autonomous cars and fully autonomous decision-making models, we will continue to see the proliferation of “human-in-the-loop” AI models. Humans will continue to be involved in the process, explaining decisions, ensuring the models are accurate, and preventing biases—or car crashes.

John Donegan


Enterprise Analyst, ManageEngine

John is an Enterprise Analyst at ManageEngine. He covers infosec, cybersecurity, and public policy, addressing technology-related issues and their impact on business. Over the past fifteen years, John has worked at tech start-ups, as well as B2B and B2C enterprises. He has presented his research at five international conferences, and he has publications from Indiana University Press, Intellect Books, and dozens of other outlets. John holds a B.A. from New York University, an M.B.A. from Pepperdine University, an M.A. from the University of Texas at Austin, and an M.A. from Boston University.
