Subject matter expertise is key to AI success and job creation

A woman uses AI-based Interactive Design Assistant for Fashion (AiDA). November 4, 2021. Laboratory for Artificial Intelligence in Design/Handout via Thomson Reuters Foundation
opinion


Why large language models won’t necessarily displace workers but instead increase demand for subject matter experts to determine the value and accuracy of AI-generated output

Jennifer Cheung is an AI ethicist for Digital Catapult.

Science fiction stories have always contained motifs about machines working alongside humans. The recent hype around foundation models and large language models (LLMs) has turned this idea into a plausible reality.

AI solutions have been deployed across multiple industries, improving operational efficiency and employee performance. The case for AI as a productivity tool is strong, but questions remain over how the technology will affect employment opportunities.

It’s critical to understand, however, that AI will increase the need for subject matter experts, which will in turn drive job creation and ensure the technology’s long-term success.

What do we actually want to automate?

AI is not always the answer, and we must be mindful of what a ‘job’ or ‘role’ actually entails.

The launch of a new AI system that wrote long-form stories recently received backlash, with many arguing that such a tool missed the point of writing. There were numerous other tasks that writers might have preferred to automate; instead, they were offered a tool that replaced the core purpose of writing, the part where their expertise lay and which they most enjoyed.

Founders and developers must reflect carefully on which tasks should be automated.

By automating repetitive, less creative tasks, humans will have more time to draw on their innate strengths and expertise: creativity, navigating ambiguity, making judgements, thinking critically, perceiving nuance and connecting with others. While AI can be valuable for menial and formulaic tasks, anything more substantial requires thoughtful analysis, evaluation and serious consideration.

At Digital Catapult, I try to encourage founders to consider these aspects, which are critical for the successful adoption of AI in business and society as a whole. Unless LLMs are developed to assist humans in their areas of expertise, founders and developers risk undermining the success of their own innovations.


Not just ‘human in the loop’, but human-driven

There is a lot of buzz around the impact of AI tools on knowledge workers specifically. After all, tasks such as performing searches and queries, analysing data, and writing are all things that foundation models can now do. That said, AI tools can be biased, slow to adapt, and lacking in context.

Take content moderation, for example. It is tricky work: moderators need historical, political and social context; linguistic knowledge; the cognitive skills to make sense of memes, formats, mediums and misspellings; and an understanding of specific communities and domains.

While social media platforms are increasingly reliant on AI tools for content moderation, these tools are still far from accurate. They also don’t work nearly as well for non-English speaking communities. This leads to some forms of speech being under-moderated, and others being over-blocked.

These effects are compounded by the platforms’ inadequate investment in human reviewers and subject matter experts. While moderation is one specific example, the same pitfalls apply to other occupations and industries.

Subject matter experts are critical, first to build these tools effectively and responsibly, and then to provide control and oversight once they are deployed, demonstrating the demand for specialised roles in this field.
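To make the “human-driven” point concrete, here is a minimal sketch of the routing pattern described above, in which an AI model scores content but ambiguous or poorly supported cases go to a human expert. The harm-scoring model, thresholds and function names are hypothetical illustrations, not any platform’s actual system.

```python
from dataclasses import dataclass

# Hypothetical thresholds: anything the model is unsure about goes to a human expert.
AUTO_REMOVE_THRESHOLD = 0.95
AUTO_ALLOW_THRESHOLD = 0.05

@dataclass
class ModerationDecision:
    action: str      # "remove", "allow" or "escalate"
    decided_by: str  # "model" or "human"

def moderate(post_text: str, model_score: float, language: str) -> ModerationDecision:
    """Route a post based on a hypothetical model's harm score in [0, 1]."""
    # Non-English content is where automated tools are weakest, so in this
    # sketch it always goes to a reviewer with the right linguistic context.
    if language != "en":
        return ModerationDecision("escalate", "human")
    if model_score >= AUTO_REMOVE_THRESHOLD:
        return ModerationDecision("remove", "model")
    if model_score <= AUTO_ALLOW_THRESHOLD:
        return ModerationDecision("allow", "model")
    # The ambiguous middle band (memes, satire, community-specific slang)
    # is left to subject matter experts rather than the model.
    return ModerationDecision("escalate", "human")
```

The design choice is the point: the model handles only the clear-cut cases, while the judgement calls that need context stay with people who have the relevant expertise.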

From job obsolescence to new career horizons 

Scientific and technological advances have rendered certain jobs obsolete, but they have also created new opportunities. Advances in AI and LLMs similarly have the potential to create new job prospects and increase the demand for subject matter experts.

Some companies have recently announced plans to optimise their workforce by leveraging technologies like AI. While these announcements may signal a reduction in certain roles, it is essential to focus on the bigger picture.

Companies are increasingly ensuring that their workforce is equipped with the necessary skills to adapt to the changing requirements of their industry. In fact, many are investing in upskilling and reskilling programmes to navigate the shifting landscape and to explore the new opportunities presented by AI.

It’s important to recognise that while AI and technological advancements may lead to job transformations, they also open avenues for individuals to explore new roles and careers. The evolving nature of technology calls for continuous learning and adaptation, with subject matter experts playing a crucial role in realising the potential of these advancements and ensuring the technology’s long-term success.

Subject matter expertise is a key determinant of the long-term success of AI. While AI can enhance productivity, its impact on jobs remains uncertain, and equitable outcomes are not guaranteed. Collaborative efforts are essential to upskill and adapt the workforce.

By combining human expertise with AI capabilities, we can navigate the complexities of automation and ensure a future that benefits both workers and society at large.


Any views expressed in this opinion piece are those of the author and not of Context or the Thomson Reuters Foundation.





