“By far, the greatest danger of artificial intelligence is that people conclude too early that they understand it,” Eliezer Yudkowsky, American computer scientist and writer, once said1.

Artificial intelligence (AI) gets everyone excited. Thanks to a surfeit of popular sci-fi films, it conjures up futuristic images of intelligent robots and computers able to think and speak like humans.

Some of the more dystopian futures associated with AI were lent fresh weight in May 2023, when Geoffrey Hinton – often referred to as one of the godfathers of AI – announced his resignation from Google. Hinton warned about the growing dangers posed by developments in the field, saying: “I've come to the conclusion that the kind of intelligence we're developing is very different from the intelligence we have.”2

Certainly, AI is an area of great debate. And it is clear there is still much to learn about it – specifically the growing prospect of artificial general intelligence (AGI).

In 2023, we are increasingly using AI in our everyday lives. It is embedded in web searches, electric cars, and voice assistants like Alexa and Siri. This is certainly intelligence that is artificial, but it falls short of what we once expected from the concept of smart computers. Indeed, it’s a far cry from HAL in Stanley Kubrick’s 1968 classic, 2001: A Space Odyssey.

Enter AGI – sometimes known as ‘Strong AI’. This is essentially the next stage in AI’s development, and closer to what we imagine when we think of this technology, thanks to film franchises as iconic as Blade Runner and Terminator.

Some AI systems can already take in huge amounts of data. The difference with AGI is that it would actually understand the data. It’s a theoretical type of AI – for the moment – that possesses human-like cognitive abilities, such as the ability to learn, reason, solve problems, and communicate in natural language.

In short, this is AI that can think and, importantly, learn like humans.

Unless you’ve been hibernating since Christmas 2022, you will have come across ChatGPT – a chatbot developed by San Francisco-based AI research firm OpenAI. This is ‘generative’ AI rather than AGI because it can accomplish only what it was programmed to do.

Conversational AI technologies like ChatGPT – or Google’s Bard – can analyse a vast amount of data. Understandably, this software has caused widespread amazement – and some alarm – at its ability to create content based on simple instructions.

You can ask it to create business plans, recipes, even songs and poems. And it does this with such authenticity and quality that some people don’t even notice that what it creates is effectively just a computation.

ChatGPT uses ‘deep learning’. Rather than scouring the web each time it is asked something, the model is trained in advance on billions of words of text, learning the statistical patterns in how language is used. If you ask it to create a vegan cupcake recipe, it generates one word by word, drawing on the patterns it absorbed from the countless vegan and cupcake recipes in its training data. In effect, it connects the dots and delivers a solution based on what it has already learned.
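To make that concrete, here is a minimal, illustrative sketch of text generation from a pre-trained model. It uses the open-source Hugging Face transformers library and the small GPT-2 model rather than ChatGPT itself (which is only accessible through OpenAI’s service); the prompt and parameters are examples only:

    # Minimal text-generation sketch using a small, openly available model (GPT-2).
    # Illustrative only: ChatGPT itself is far larger and accessed via OpenAI's service.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    prompt = "A simple vegan cupcake recipe:"
    result = generator(prompt, max_new_tokens=60, do_sample=True)

    # The model continues the prompt using patterns learned during training,
    # not by searching the web at request time.
    print(result[0]["generated_text"])

The key point is that the continuation comes entirely from patterns the model learned during training, which is why such systems can sound fluent and confident even when they are wrong.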

What happens next is a source of debate: does the technology actually understand the request being levelled at it when it connects the dots and creates a solution? Or is this just extremely high-calibre computing?

Critics point to ChatGPT’s continued tendency to spit out nonsense and its failure to grasp what is being asked of it. This reveals a fundamental lack of understanding and, in a way, of common sense. And common sense is crucial to intelligence as we know it, as American computer scientist Marvin Minsky once argued3 – suggesting that mastering it could offer the breakthrough AI needs.

“Common sense computing needs several ways of representing knowledge,” said Minsky. “It is harder to make a computer housekeeper than a computer chess-player, because the housekeeper must deal with a wider range of situations.”3

Nevertheless, AI is advancing rapidly and it’s clear the technology is making huge strides within our lifetime. So, is it time we started to think about how it could feature in our portfolios?

Tech names have increasingly come to dominate equity market movements. In theory, AGI could allow these kinds of companies to become even more valuable.

The hype has already caused valuations to rocket, and the market is expected to keep expanding. Recent research predicts the global AI market will grow from $387.4 billion in 2022 to $1.4 trillion in 2029 – a compound annual growth rate of 20.1% over that period4.
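For readers who want to sanity-check that figure, the implied compound annual growth rate can be reproduced in a couple of lines using only the numbers quoted above:

    # Reproducing the implied compound annual growth rate (CAGR) from the figures above.
    start, end, years = 387.4, 1400.0, 7   # USD billions, 2022 to 2029
    cagr = (end / start) ** (1 / years) - 1
    print(f"Implied CAGR: {cagr:.1%}")     # prints roughly 20.1%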

Some go further: in March 2023, for instance, Bank of America strategists declared that AI was on the brink of its ‘iPhone moment’, and that it was about to change the world forever – they suggested the global economic impact of the booming technology could reach $15.7 trillion by 20305.

Looking at ChatGPT as the closest example we have of where AGI could hypothetically go from here, the chatbot has already boosted its creator’s value. The software was created and is owned by OpenAI, which was founded in 2015 with an initial $1 billion in funding commitments; it could soon be valued at $29 billion6.

Meanwhile, in an experiment conducted by personal finance comparison site finder.com, analysts asked ChatGPT to create a theoretical fund of over 30 stocks, following a range of investing principles taken from leading funds. In the eight weeks after its creation, the resulting portfolio of 38 stocks rose 4.9%, compared with an average loss of 0.8% for the 10 most popular funds on UK platform Interactive Investor7.

Could ChatGPT manage an investment portfolio better than a human being? “As an AI language model, I can process and analyze vast amounts of data quickly and efficiently,” it said. “I can also provide recommendations based on historical data and market trends.

“However, when it comes to managing an investment portfolio, there are certain limitations to what I can do. While I can analyze data and provide insights, I do not have emotions or personal biases that can influence investment decisions. Additionally, I do not have the ability to take into account external factors such as global events, regulatory changes, or unexpected news that may impact the markets.

“Furthermore, investing requires a degree of judgment and intuition that is difficult to replicate in an AI system. Human investors can take into account subjective factors such as a company's culture, leadership, and overall mission that may not be captured in quantitative data.

“Therefore, while AI can certainly provide valuable insights and recommendations for managing an investment portfolio, there are still limitations to what it can do. A hybrid approach that combines the strengths of AI with human judgment and intuition may be the most effective way to manage an investment portfolio.”

Investors are certainly getting excited about capitalising on the AGI breakthroughs that could be around the corner. Microsoft has already confirmed a multibillion-dollar investment in OpenAI, rumoured to be in the region of $10 billion.

However, this technology has wider investment repercussions that go beyond these start-ups. It could, for example, affect demand for chipmakers and for the commodities used to build the hardware, as well as how consumer brands deploy these systems.

Computer systems that can think like humans could effectively cut out expensive jobs. Supercar manufacturers like Lamborghini or Ferrari could potentially save millions by using software to come up with their latest models rather than employing design teams.

Much of this is hype, though – AGI has yet to be achieved and it remains to be seen what form it will eventually take. Understandably, some investors are hesitant. In a panel at the recent Joint Multi-Conference on Human-Level Artificial Intelligence8, venture capital investor Tak Lo said: “I very much like General AI as an intellectual, but as an investor not so much.” He is not the only one.

Venture capitalist Gordon Ritter recently told the Financial Times9 he was not getting carried away. “Everyone has stars in their eyes about what could happen – there’s a flow [of opinion that AI] will do everything. We’re going against that flow.”

Tech investment is littered with high-profile failures and some investors will be naturally wary about the valuations being awarded to companies. There are also very real, and expensive, technical issues to overcome.

These issues can involve bridging architectural differences and the competitive ‘moats’ between datasets, which also vary in size and quality. And building something that can learn and think freely, beyond very specific and narrow verticals, remains a challenge.

The jury is still out. There are the usual concerns about technology taking jobs, but this goes further. Soon, theoretically, highly skilled jobs could be at risk, not simply menial, physical tasks.

Algorithms have already proved significant in automating investment, allowing some firms to execute trades – and entire strategies – as soon as valuable data has been analysed. New AI-powered applications are being developed that could deliver even deeper analysis of financial data and, ironically, of human behaviour, which could in turn improve how markets function.
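As a purely illustrative sketch of what executing a strategy ‘as soon as valuable data is analysed’ can mean at its simplest, the toy rule below holds an asset whenever a short-term average of its price sits above a longer-term one. The window lengths and prices are invented for the example; this is not a real or recommended strategy:

    # Toy systematic rule: hold the asset when the fast moving average is above the slow one.
    # Purely illustrative - window lengths and prices are made up, not a real strategy.
    import pandas as pd

    def crossover_signal(prices: pd.Series, fast: int = 20, slow: int = 50) -> pd.Series:
        fast_ma = prices.rolling(fast).mean()
        slow_ma = prices.rolling(slow).mean()
        return (fast_ma > slow_ma).astype(int)   # 1 = hold, 0 = stay in cash

    # Example with synthetic prices; a systematic desk would recompute the signal
    # as each new data point arrives and route orders when it flips.
    prices = pd.Series(range(100, 200)).astype(float)
    print(crossover_signal(prices).tail())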

However, could AI soon replace the human decision at the end of this process – can that judgement call be replicated? Given the high-profile criticisms of active management, and data showing that the majority of active managers underperform their passive peers, there is no reason to assume a computer couldn’t do this better.

Also, could AGI run a company better than a human CEO? Would a computer CEO be more ruthless when it came to job cuts? And this raises further, more philosophical questions. How much can be created with AGI alone? Are other aspects of the human condition, such as empathy10 and other emotions, required?

Human emotions are already being turned into data, with systems that can instantly analyse someone’s face and behaviour to deduce their mental state. For example, Hume AI is developing tools to measure emotions from verbal, facial and vocal expressions7.

Meanwhile, Affectiva has created technology that can identify emotions from audio samples as short as 1.2 seconds7. Applications are already being explored in healthcare7.

But can a computer become sentient? This is where the debate becomes more uncertain and begins expanding into areas of philosophy and ethics. In 2022, a Google engineer testing an AI chatbot made headlines by declaring the system had become sentient. The pair conversed, shared jokes, and effectively became friends11.

However, some were unconvinced. Michael Wooldridge, professor of computer science at Oxford University, analysed the data and found that – despite the impressive realism of these interactions – this was essentially still just AI as we already use it. “There is no sentience,” said Wooldridge. “No self-contemplation, no self-awareness.”11

Nevertheless, AI’s potential – for positive and negative outcomes – continues to capture the imagination. Hinton's pioneering research on neural networks and deep learning may have paved the way for the likes of ChatGPT, but his subsequent warnings have been echoed by many in the industry.

Indeed, an open letter with signatures from hundreds of the biggest names in tech – including Elon Musk – urged the world’s leading AI labs to pause the training of new super-powerful systems for six months, saying that recent advances in AI present “profound risks to society and humanity.”12

Despite this call for a pause, AI’s march appears unrelenting and all but inevitable – the advancements will continue. We may be some way from sci-fi’s picture of AGI, but the technology is continually evolving and improving. This has exciting consequences for investors, opening countless doors to access the opportunity across portfolios.

Diverse minds fuel insightful ideas. And ideas mean opportunities.

Discover our Expert Collective

GLOSSARY
  • Artificial intelligence (AI) – the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition and machine vision.
  • Artificial General Intelligence (AGI) – Also known as ‘strong AI’, AGI is the hypothetical intelligence of a machine that has the capacity to understand or learn any intellectual task that a human being can. It is a primary goal of some AI research and a common topic in science fiction and futures studies.
  • Chatbot – Short for ‘chatterbot’, these are computer programs that simulate human conversation through voice commands or text chats or both.
  • Deep learning – Also known as ‘deep neural networks’, deep learning is part of a broader family of ‘machine learning’ methods based on learning data representations, as opposed to task-specific algorithms. Neural networks are a series of algorithms that endeavour to recognize underlying relationships in a set of data through a process that mimics the way the human brain operates. Deep learning analyses data in multiple layers of learning (hence ‘deep’) and may start doing so by learning about simpler concepts, and combining these simpler concepts to learn about more complex concepts and abstract notions (a minimal illustrative sketch follows this glossary).
  • Generative AI – Made famous by the likes of text-generating chatbots such as ChatGPT, generative AI is a conversational technology that can analyse a vast amount of data. Yet it can accomplish essentially only what it was programmed to do – which is where it differs from AGI. However, generative AI’s out-of-the-box accessibility makes it different from all AI that came before it. Users don’t need a degree in machine learning to interact with or derive value from it; nearly anyone who can ask questions can use it. It can enable capabilities across a broad range of content, including images, video, audio, and computer code. And it can perform several functions in organizations, including classifying, editing, summarizing, answering questions, and drafting new content.
  • Machine learning – A branch of AI that allows computer systems to learn directly from examples, data and experience. Increasingly used for the processing of ‘big data’, machine learning is the concept that a computer program can learn and adapt to new data without human interference – it keeps a system’s built-in algorithms current even as the data around it, such as economic conditions, changes.
  • Natural language processing – A subfield of computer science, information engineering, and AI concerned with the interactions between computers and human (natural) languages, in particular how to program computers to process and analyse large amounts of natural language data. It is one of the tools used in Siri, the voice-controlled digital assistant. Systems attempt to allow computers to understand human speech in either written or oral form. Initial models were rule or grammar based but couldn’t cope well with unobserved words or errors (typos).
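As a minimal illustration of the ‘multiple layers of learning’ described in the deep learning entry above, the sketch below trains a small two-hidden-layer neural network on the classic XOR problem, a pattern that no single linear rule can capture. The library, layer sizes and data are chosen purely for illustration:

    # A tiny 'deep' network (two hidden layers) learning XOR - a pattern that cannot be
    # captured by a single linear rule, so the layers must combine simpler features.
    # Library and layer sizes are illustrative choices, not a recommendation.
    from sklearn.neural_network import MLPClassifier

    X = [[0, 0], [0, 1], [1, 0], [1, 1]]
    y = [0, 1, 1, 0]  # XOR: true only when exactly one input is 1

    model = MLPClassifier(hidden_layer_sizes=(8, 8), activation="tanh",
                          solver="lbfgs", max_iter=2000, random_state=0)
    model.fit(X, y)

    print(model.predict(X))  # expected: [0 1 1 0]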

1 Source: https://www.goodreads.com/quotes/1228197-by-far-the-greatest-danger-of-artificial-intelligence-is-that
2 Source: https://www.bbc.com/news/world-us-canada-65452940
3 Source: https://cerebromente.org.br/n07/opiniao/minsky/minsky_i.htm
4 Source: https://www.statista.com/statistics/1365145/artificial-intelligence-market-size/
5 Source: https://www.businessinsider.in/stock-market/news/artificial-intelligence-is-on-the-brink-of-an-iphone-moment-and-can-boost-the-world-economy-by-15-7-trillion-in-7-years-bank-of-america-says/articleshow/98340748.cms
6 Source: https://www.wsj.com/articles/chatgpt-creator-openai-is-in-talks-for-tender-offer-that-would-value-it-at-29-billion-11672949279
7 Source: https://www.igniteseurope.com/c/4048004/523194/chatgpt_powered_strategy_outperforms_leading_funds?referrer_module=sideBarHeadlines&module_order=4
8 Source: https://futurism.com/artificial-general-intelligence-governments-invest
9 Source: https://www.ft.com/content/5671094e-6b09-43a2-9a11-7bc19f382793
10 Source: https://www.wired.co.uk/article/empathy-artificial-intelligence
11 Source: https://www.theguardian.com/technology/2022/aug/14/can-artificial-intelligence-ever-be-sentient-googles-new-ai-program-is-raising-questions
12 Source: https://time.com/6266679/musk-ai-open-letter/

This material is provided for informational purposes only and should not be construed as investment advice. Views expressed in this article as of the date indicated are subject to change and there can be no assurance that developments will transpire as may be forecasted in this article.