AI’s Inevitable Slowdown: Separating Fact from Fiction

David Doherty
4 min read · Apr 24, 2023

Artificial Intelligence (AI) has experienced a meteoric rise in the past few years, with breakthroughs in areas including image recognition, natural language processing, and autonomous systems. However, as with any technology, there are limits to how quickly AI can keep progressing. In this article, we will explore the current state of AI and examine the challenges that may slow its progress in the future.

Text-to-image/audio/video: this is where most progress will be made in the ‘short’ term.

Text-to-code: will slow down

Questions / Answers: will slow down

[Image: the result of asking an AI to draw a chart of the S-curve of innovation]

Text-to-image/audio:

AI-generated stories, music, and images are advancing in leaps and bounds. The creative process is one where bad ideas are part of the journey, and AI can generate thousands of bad ideas yet still come up with the occasional gem.

Oasis: https://www.billboard.com/music/music-news/liam-gallagher-response-oasis-ai-album-aisis-1235310076/

Drake: https://www.npr.org/2023/04/21/1171032649/ai-music-heart-on-my-sleeve-drake-the-weeknd

Artists have mixed responses, but it is undeniable that the creative process has become less human-dependent. Imagine a site that generates millions of songs and then lets users surface the best ones. There will be a few bangers in there, and we wouldn’t have to pay for artists’ Lambos or houses in Calabasas.

That said, art is a passion for most and not an occupation. People do it for free anyway, so will machines doing it for free make that much of a difference? When I walk in Central Park I see people singing and playing drums, mandolins, and so forth, mostly for love and partly for the few dollars they make from donations. Will AI make that much of an impact?

However, if you make money producing logos, elevator music, or generic wall art, that work may soon be done by a much smaller cohort of AI-augmented professionals.

Text-to-code:

There are a few co-pilot add-ons for IDEs that will make developers more efficient, but they won't replace developers entirely. Most of the major productivity gains are already being unlocked. To keep the LLMs up to date, they need new data, and StackOverflow has already got wise and wants to charge companies for its training data:

https://www.wired.com/story/stack-overflow-will-charge-ai-giants-for-training-data/

Without new data, these LLMs will start giving outdated or incorrect results. For a layperson, imagine asking ChatGPT for instructions for your iPhone and getting answers for the iPhone 6. Still useful for some people, but not most.

Removing the developer completely is a huge leap from here. There are a number of impressive demos where a website can be generated from a single instruction to an LLM. This will be powerful for experimentation: say you want to see whether there is demand for a specific D2C product; you can quickly generate the website, advertise it, and gauge the traffic.
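To make that concrete, here is a minimal sketch of that kind of throwaway experiment, assuming the 2023-era OpenAI Python client; the model name, prompt, and product description are placeholders invented for illustration, not a recommended setup:

```python
# A rough sketch of the "generate a landing page to test demand" idea.
# The model name, prompt, and product are illustrative placeholders.
import openai

openai.api_key = "YOUR_API_KEY"

PRODUCT = "a subscription box of single-origin decaf coffee"

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": (
            f"Write a single-file HTML landing page (inline CSS) for {PRODUCT}, "
            "with a headline, three benefit bullets, and an email sign-up form."
        ),
    }],
)

# Save the generated page so it can sit behind an ad campaign while you gauge traffic.
with open("landing.html", "w") as f:
    f.write(response["choices"][0]["message"]["content"])
```

A page like this is disposable by design: if the ads draw sign-ups, you then face the much harder work described below.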

Making websites fully functional and, more importantly, adding new features is much harder, especially if they become popular. That's not to say that LLMs won't help a lot, but think of it as the equivalent of Full Self Driving (FSD) for cars: some use cases are easy (e.g. a smooth highway with clear lanes) and others are REALLY hard (a deer running out onto a dirt road).

In my view, there will be net positive demand for engineers. LLMs will allow non-technical people to get their feet wet with creating technology, which will lead to a lot more websites, apps, and so on, and more demand to maintain and improve them.

Questions / Answers:

Similar to StackOverflow, data ‘owners’ will get wise and put their data behind accounts, licenses, and the like. In slow-moving data spaces the horse may have already bolted: imagine you run a horticultural website and your data has already been scraped. ChatGPT can already answer questions like “Do I need to stratify oak tree seeds?”, and the answer to that question is unlikely to change.

However, in industries with rapidly changing data or regulatory barriers, LLMs' progress will slow.

Take “Who are the best teams in the English Premier League?” or “Is fasted training a recommended approach for achieving sports performance gains?”. Over time the answers change, and data owners will gradually put up barriers: many will be legal, such as restrictive redistribution rights, while others will put more data behind paywalls and licenses. This will force LLM providers either to be more proactive in sourcing licensed data (increasing costs) or to fall back on lower-quality sources (e.g. open chat forums).

In regulated spaces there are asymmetric payoffs: a small benefit for getting something right and a large downside for getting something wrong. Think health advice, financial advice, legal advice, and so on. Here a fight is brewing: should the LLMs offer answers directly, or should the party providing the answers to the user ‘filter’/‘correct’/‘validate’ them before passing them to the reader? This will slow down progress in many parts of life.
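As a rough illustration of the ‘filter’/‘validate’ route, here is a minimal sketch of a gate that holds back answers on regulated topics for professional review before they reach the reader. The keyword list, the generate_answer callable, and the review queue are hypothetical stand-ins, not any real product's API:

```python
# A minimal sketch of a "validate before publishing" gate for LLM answers.
# REGULATED_KEYWORDS, generate_answer, and review_queue are hypothetical
# stand-ins; a real system would use a proper topic classifier and workflow.

REGULATED_KEYWORDS = ("diagnos", "dosage", "invest", "tax", "lawsuit", "contract")

def is_regulated(question: str) -> bool:
    """Crude keyword check for health, financial, or legal topics."""
    q = question.lower()
    return any(keyword in q for keyword in REGULATED_KEYWORDS)

def answer_user(question: str, generate_answer, review_queue) -> str:
    draft = generate_answer(question)           # call out to the LLM
    if is_regulated(question):
        review_queue.append((question, draft))  # hold for professional validation
        return "This answer is awaiting review by a qualified professional."
    return draft                                # low-risk questions pass straight through
```

The point of the sketch is the extra hop: every validated answer costs time and money, and that friction is exactly what slows progress in these domains.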

David Doherty

I write about Fintech, its past & future, leveraging 20+ years of experience in leadership roles at large Fintechs