Large language models like ChatGPT are only as good as the data on which they are trained, and increasingly the gatekeepers of that data are looking to capitalise on the new-found demand fuelled by generative artificial intelligence (AI). A recent revolt by moderators on the social forum Reddit illustrates the trend. Reddit’s plan to charge for access to its application programming interface (API), a virtual firehose that delivers all content posted to Reddit in a developer-friendly format, has sparked acrimony among the Reddit community.
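For readers unfamiliar with what such an API ‘firehose’ looks like in practice, here is a minimal sketch in Python. The URL pattern follows Reddit’s public JSON listings, and the payload shape mirrors Reddit’s listing format, but the sample data is invented and the helper names are illustrative assumptions, not part of any official client library.

```python
BASE = "https://www.reddit.com"

def listing_url(subreddit: str, limit: int = 5) -> str:
    """Build the public JSON listing URL for a subreddit's newest posts."""
    return f"{BASE}/r/{subreddit}/new.json?limit={limit}"

def extract_titles(listing: dict) -> list[str]:
    """Pull post titles out of a Reddit-style listing payload."""
    return [child["data"]["title"] for child in listing["data"]["children"]]

# A tiny sample payload in Reddit's listing shape (invented data),
# so the parsing step can be shown without a network call.
sample = {
    "data": {
        "children": [
            {"data": {"title": "API pricing discussion"}},
            {"data": {"title": "Moderators go dark"}},
        ]
    }
}

print(listing_url("python"))
print(extract_titles(sample))
```

It is this kind of structured, machine-readable access, trivially consumed by third-party apps and model-training pipelines alike, that Reddit now proposes to put behind a paywall.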

Tension is brewing between the steady march of AI and the longstanding spirit of the open internet, where access to information is presumed to be free and control is decentralised. To help discern the fault lines in this hastening turf war, Double Take welcomed AI ethicist and startup founder Kelvin Lwin, who was formerly a senior deep learning instructor at chip giant Nvidia. According to Lwin, open APIs have allowed platforms to spawn and scale innovations at a faster clip than they ever could have done on their own, but the advent of paywalled AI models changes that proposition.

Every platform, I think, wanted to become an ecosystem. So you want to encourage developers, right?…It’s the right intuition, but then that opens yourself (up) to these kinds of seminal moments now where it comes to a head—where developers have different incentives, the platform owners have different incentives, but everybody’s trying to make money. So then, who gets to eat?

Kelvin Lwin, founder and chief executive officer of ALIN.ai

Lwin explains that an awkward middle ground has developed, where open-source data on platforms such as Reddit is considered ‘free to use’ yet at the same time ‘not for commercial use’.

I think, generally, it’s a very inherent problem in our tech field because all of us want to help the world and contribute to the world, so open source sounds good on paper. But then, when you actually have these monetisation and business model pressures, what we usually have seen is that open-source projects themselves, if they’re not supported by some profit-making entity, they tend to die off.

Kelvin Lwin

From Lwin’s perspective, the secret sauce of a large language model is a blend of the source data that feeds the algorithm, and the method by which that data is wrangled through the model to produce an output.

Who gets to determine which recipe(s) prevail, and whom, if anyone, might the recipe(s) enrich? Lwin suggests that these questions may be key to the development of AI.

In general, decentralisation was the ethos of technology in Silicon Valley in general, right? That’s why you get the Bitcoin and blockchain and all those decentralisation kinds of things. And then there was a joke that, how come for all the talks about decentralisation, everybody was happy with over 100 million users going and relying on one company’s API, ChatGPT, overnight?…I would argue AI is actually a trend toward more centralisation because you have all this data, sensory capacity, compute (capacity) and pattern recognition capacity. So, can you make centralised decision-making? Probably.…There’s an underlying bigger arms race going on with what is the right model to get it out to the people.

Kelvin Lwin

According to Lwin, even platforms such as Reddit that are expressly decentralised are experiencing a tilt toward centralisation of control.

It has a decentralised model in terms of the users, the content, the moderators, right?…But then from the company point of view, especially when they’re trying to monetise it or really prop up the value, then it becomes centralised, right? The CEO is making the decisions on the APIs…Then the users are going, ‘I thought we own this platform?’…‘I thought this is a place for us?’ Because they don’t have to care about making money and (being) profitable and things like that, but the CEO and the centralised, the corporate entity do. So I see it as, again, the fight between the two modes of thinking.

Kelvin Lwin

Lwin anticipates that the ramifications of this showdown will be profound and pervasive.

I think we are at a critical juncture of humanity in some ways. What it means to be human, what can humans do, what can technology or AI or robots do? All this sort of stuff that was in the realm of philosophy and religion, kind of entering the public discourse…Remember, AI only learns from the example we give it.

Kelvin Lwin

Subscribe to ‘Double Take’ on your podcast app of choice or view the Reddit and the AI turf war episode page to listen in your browser.

Authors

Jack Encarnacao

Investigative research analyst, Specialist Research team

Raphael J. Lewis

Head of specialist research

Newton does not capture and store any personal information about an individual who accesses this blog, except where he or she volunteers such information, whether via email, an electronic form or other means. Where personal information is supplied, it will be used only in relation to this blog, and will not be collected or stored for any other purpose. Comments submitted via the blog are moderated, and, as a result, there may be a delay before they are posted.

This is a financial promotion. These opinions should not be construed as investment or other advice and are subject to change. This material is for information purposes only. This material is for professional investors only. Any reference to a specific security, country or sector should not be construed as a recommendation to buy or sell investments in those securities, countries or sectors. Please note that holdings and positioning are subject to change without notice. This article was written by members of the NIMNA investment team. ‘Newton’ and/or ‘Newton Investment Management’ is a corporate brand which refers to the following group of affiliated companies: Newton Investment Management Limited (NIM), Newton Investment Management North America LLC (NIMNA) and Newton Investment Management Japan Limited (NIMJ). NIMNA was established in 2021 and NIMJ was established in March 2023.
