We investigate the latest wave of generative artificial intelligence (AI) and its implications for investors.


In our March edition of Double Take, we delved into the ever-evolving world of generative AI. Since the launch of OpenAI’s ChatGPT earlier this year, the media has been buzzing about the cutting-edge technology and its potential to change life as we know it. Large multinational technology corporations are pouring billions of dollars into companies developing the large language models that power innovative chat-based AI systems and solutions. We turned our investigative lens to the potential risks and opportunities of investing in the early days of this nascent AI technology.

To better understand the evolution of the AI market and what the latest developments could mean for investors, we spoke to Rob May, an AI angel investor and entrepreneur. May likens the introduction of ChatGPT to the first iPhone in 2007, which revolutionised smartphone technology with its user-friendly interface.

There were smartphones before the iPhone, and the iPhone came out with this super intuitive interface that really brought mobile phones to the masses. But you could have had a BlackBerry before. There were all kinds of Nokia phones and Motorola phones…ChatGPT is really a user interface innovation on what was an existing OpenAI model that just made it really easy to play around with.

Rob May, AI entrepreneur and angel investor

Seemingly, ChatGPT became ubiquitous overnight. However, ChatGPT—along with most of today’s generative AI technology—was spun off from GPT-3, a large language model that launched in 2020 and represented a paradigm shift in the evolution of AI. If GPT-3 is anything like the base technology of smartphone companies back in the early 2000s, ChatGPT could be the next iPhone: a product innovation that transforms the way we live and work.

Davis Sawyer is the co-founder and chief product officer at Deeplite, a Montreal-based firm focused on developing cost-effective AI for consumer products. Sawyer explains that GPT-3 uses a deep learning algorithm that can process massive data sets, making it capable of a myriad of tasks with extraordinary levels of accuracy. The industry has dubbed AI models of this size and functionality foundation models.

They are 10,000 times bigger than any model we’ve seen before. And when you say bigger, you’ve got to think in terms of memory, so compute memory to store and train and actually have these models available…These hundred billion parameter models with hundreds of billions of sources of data have now enabled us to have more accurate models than ever, and that’s what’s precipitated this whole buzz.

Davis Sawyer, co-founder and chief product officer at Deeplite, chairperson at tinyML Foundation

While narrow artificial intelligence models are programmed to perform a single task (e.g. website chatbots and smartphone facial recognition technology), foundation models are trained with a vast amount of far-reaching data and can transfer knowledge from one task to another. This type of wide-ranging neural network can be trained once and then honed for different functions.[1]

According to May, foundation models were developed much faster than many industry experts anticipated. For this reason, he cautions prospective investors to recognise that the AI ecosystem could change considerably in the next three to five years, and even more so over the next two decades. His concern is that the companies stemming from today’s emerging technology (and employing high levels of capital expenditure) could become irrelevant as the models continue to develop. Changes in the space are happening relatively quickly; GPT-4, the newest iteration of the large language model, was released shortly after we recorded this podcast episode.

One of the core tenets of today’s foundation models is that the amount of training data fed into the model is positively correlated with the predictive power of that model—in other words, a greater amount of training data should yield a more predictive model. In the ever-evolving AI ecosystem, May questions the staying power of that assumption.

Those of you that have children know that you don’t have to show your kids 8,000 versions of a coffee cup before they know what a coffee cup is. You show them like three coffee cups and they’re like, ‘Okay, I got it. That’s a coffee cup.’ And so, this idea that you need a lot of data to train AI models, a lot of people believe that at some point it’s going to go away. So when that happens, the foundation models could be at risk. Right now, it costs millions, maybe tens of millions of dollars, to train these ChatGPT (models) and millions of dollars a month to operate, but that might not be the case in a few years.

Rob May

May cautions against investing at this stage, owing to the rapid speed at which AI is developing. He believes that, eventually, training a large language model may not be as expensive or labour-intensive. Currently, the power, infrastructure, cloud-computing capabilities, data centres and bandwidth required to train a large language model are estimated to cost at least $10 million. Furthermore, this large cost does not even begin to cover the steep expenses associated with one of the most critical elements of AI: inferencing. Inferencing is the process by which a computer taps into all of the intelligence gathered and stored in the training phase to process and analyse new data (e.g. a ChatGPT query).

Sawyer says that creating a successful model is a bit of a double-edged sword in this regard. As user adoption increases, the collective costs associated with inferencing could become unbounded. Some companies are beginning to amortise training costs, aiming ultimately to pass inferencing expenses on to customers. Until that becomes scalable, Sawyer says, operating generative models is expensive regardless of the end user, and success really comes down to resources.

The bottom line—and this has been the dirty secret for some time, that’s now being thrown into the public sphere—is that whoever has the most computers wins, and I mean this exactly as it sounds. And this is true of not just consumer applications, like generating copy for a marketing product or marketing campaign. It also has implications for defence. It has implications for spatial and satellite imagery, really any source of information.

Davis Sawyer

Big players with immense financial wherewithal, such as Microsoft, Google, Meta and Amazon, are inherently advantaged, though most AI development is playing out in private markets. According to May, there are plenty of viable investment avenues in this space, although he cautions that none are without risk. The AI environment is dynamic and continuously evolving, and the recent release of GPT-4 underscores the importance of patience for AI companies and investors alike.

May is constructive on AI use cases in specific industries, such as health care and finance, which have lots of existing data and potential efficiencies. He is most optimistic about a hybrid approach—combining vertically integrated AI stacks, which could optimise certain workflows and change cost structures, with careful management that does not rely entirely on a costly algorithm that could be made redundant tomorrow. In other words, he believes a measured approach that contemplates gradual changes with the use of AI is the safest option for companies and investors at this stage.

Through his work at Deeplite, Sawyer has a great deal of experience helping industrial customers realise cost efficiencies by deploying AI models in consumer products. He points to the role of AI in the explosion of the smart-home and surveillance marketplace, specifically residential home security. Home security cameras now offer a range of capabilities that were previously not possible without AI—for instance, the ability to distinguish between the motion of the family dog outside the door and that of a raccoon rummaging through the trash. His key takeaway, though, is how accessible AI-enhanced products have become.

The commodification of this intelligence, if you will, is a really powerful force. As someone in the know, let’s say, or in the research domain, you might think, ‘Well, that feature’s been doable for some time.’ But it’s not until that feature is cost-effective that you see it proliferate.

Davis Sawyer

Successful implementation of smaller-scale AI has led to the proliferation of reasonably priced consumer products. Sawyer is optimistic that the cost-intensive large language models of today may have similar and even more sophisticated applications in the future. For investors, this still-nascent stage of generative AI may require patience, but the speed at which it continues to evolve indicates that the future could be bright for this impressive new frontier of foundation models.

Subscribe to “Double Take” on your podcast app of choice or view the Investing in Generative AI and Shrinking Machine Learning episode pages to listen in your browser.


[1] Techopedia. As of 5 April 2023. https://www.techopedia.com/definition/34826/foundation-model#:~:text=Unlike%20narrow%20artificial%20intelligence%20(narrow,from%20one%20task%20to%20another.

Authors


Jack Encarnacao

Research analyst, investigative, Specialist Research team

Raphael J. Lewis

Head of specialist research

This is a financial promotion. These opinions should not be construed as investment or other advice and are subject to change. This material is for information purposes only. This material is for professional investors only. Any reference to a specific security, country or sector should not be construed as a recommendation to buy or sell investments in those securities, countries or sectors. Please note that holdings and positioning are subject to change without notice. This article was written by members of the NIMNA investment team. ‘Newton’ and/or ‘Newton Investment Management’ is a corporate brand which refers to the following group of affiliated companies: Newton Investment Management Limited (NIM) and Newton Investment Management North America LLC (NIMNA). NIMNA was established in 2021 and is comprised of the equity and multi-asset teams from an affiliate, Mellon Investments Corporation.

Important information

This material is for Australian wholesale clients only and is not intended for distribution to, nor should it be relied upon by, retail clients. This information has not been prepared to take into account the investment objectives, financial objectives or particular needs of any particular person. Before making an investment decision you should carefully consider, with or without the assistance of a financial adviser, whether such an investment strategy is appropriate in light of your particular investment needs, objectives and financial circumstances.

Newton Investment Management Limited is exempt from the requirement to hold an Australian financial services licence in respect of the financial services it provides to wholesale clients in Australia and is authorised and regulated by the Financial Conduct Authority of the UK under UK laws, which differ from Australian laws.

Newton Investment Management Limited (Newton) is authorised and regulated in the UK by the Financial Conduct Authority (FCA), 12 Endeavour Square, London, E20 1JN. Newton is providing financial services to wholesale clients in Australia in reliance on ASIC Corporations (Repeal and Transitional) Instrument 2016/396, a copy of which is on the website of the Australian Securities and Investments Commission, www.asic.gov.au. The instrument exempts entities that are authorised and regulated in the UK by the FCA, such as Newton, from the need to hold an Australian financial services license under the Corporations Act 2001 for certain financial services provided to Australian wholesale clients on certain conditions. Financial services provided by Newton are regulated by the FCA under the laws and regulatory requirements of the United Kingdom, which are different to the laws applying in Australia.
