ChatGPT

Oliver Crane
Published in Look Down, Not Up
9 min read · Jan 19, 2023

The story that has captured much of our thinking over the summer has been ChatGPT.

Given the attention this has received in the media and the consequential discussions that involve many of our compounders including Google, Microsoft, and Meta, we thought it was worth investigating what is going on.

If you don’t know what we are talking about, late last year OpenAI released “ChatGPT”, a language artificial intelligence (AI) model which interacts in a conversational way. Think of it as a calculator, but for words, information and pieces of text.

The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. It is a truly incredible innovation that is quite mind-blowing to play around with, and we recommend you try it yourself before we dive in.

https://chat.openai.com/

Firstly, who is OpenAI?

Founded in 2015 by Elon Musk and Sam Altman, it was originally a non-profit research company created in part because of its founders’ existential concerns about the potential for catastrophe resulting from carelessness and misuse of general-purpose AI. They wanted to create “safe AI” for the benefit of humanity and set about freely collaborating with other organisations and individuals in tech. Research and patents made by the company are intended to remain open to the public except in cases where they could negatively affect safety.

Elon ended up leaving the company in 2018, citing potential conflicts of interest with Tesla, but is still a shareholder. LinkedIn co-founder Reid Hoffman was also an early backer, as were many other prominent technology VC funds like Tiger and Khosla.

Co-founder Sam Altman is CEO, and in 2019 he created a for-profit arm so the company could more easily raise money to fund the computing power needed to train its algorithms. Enter Microsoft, which gave OpenAI $1 billion in funding and a sweetheart deal allowing OpenAI to use Microsoft’s Azure cloud infrastructure to train its models.

OpenAI currently generates revenue from selling APIs to its image and language models. This allows developers and ancillary companies to build products or solve problems using their systems, and OpenAI clip the ticket each time they are used.
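To make the API business model concrete, here is a minimal sketch of what a developer-side call could look like, assuming the openai Python package roughly as it stood in early 2023; the model name, prompt and key placeholder are illustrative only. OpenAI bills per token processed, which is where the ticket gets clipped.

```python
import openai  # pip install openai

# Assumes you have an OpenAI API key (placeholder below is illustrative).
openai.api_key = "YOUR_API_KEY"

# Ask one of OpenAI's hosted language models to complete a prompt.
# "text-davinci-003" was a flagship completion model at the time of writing.
response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Summarise what a large language model is in one sentence.",
    max_tokens=60,
)

# OpenAI charges for the tokens processed in each request and response,
# so every call a developer's product makes generates revenue for OpenAI.
print(response["choices"][0]["text"].strip())
```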

While it is unclear how much revenue OpenAI has generated, it is rumoured to be in talks to sell existing shares in a tender offer that would value the company at $29 billion, up from $14 billion when it last raised in 2021.

The company has also limited some venture investors’ profits to about 20 times their investments, with the ability to earn greater returns the longer they wait to sell their shares. OpenAI has said such capped investment structures were necessary to ensure that the value from OpenAI accrued not only to investors and employees, but also to humanity more generally.

It is certainly a unique new player in the technology universe.

After a few private demonstrations and releases to tech journalists and research partners, OpenAI’s first public release was Dall-E 2, which opened to the general public in September 2022. This product enables users to create realistic art from strings of text like “an Andy Warhol-style painting of a bunny rabbit wearing sunglasses.” It is quite clever and fun to play around with. But it is just that, fun. It was hard to see what the real-world use cases would be for this tool, other than to demonstrate the model’s skills and to show it off as a party trick.

Then came ChatGPT.

Large Language Models

Firstly, it is worth having a basic understanding of how a large language model (LLM) works. As opposed to supervised models, which learn from examples labelled by humans, an LLM largely trains itself. These models are given an enormous amount of text and long periods of time to burrow through it. The LLM then plays a game: it removes one word, then tries to guess the word it just removed. It plays this game trillions of times over, guessing right or wrong and, in the process, teaching itself the statistical structure of the text. As it gets better, it is able to predict what the next word could be given all the words seen before. This is how it constructs language.

A simple version of this that you already use is predictive text in Word, or Gmail suggesting the ends of your sentences as you type.
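To make the word-guessing game concrete, here is a deliberately tiny sketch in Python. It counts which word tends to follow which in a small piece of text (a simple bigram model) and uses those counts to predict the next word. Real LLMs like ChatGPT use enormous neural networks rather than raw counts, but the training objective sketched here, predicting the next word from the words before it, is the same idea.

```python
from collections import defaultdict, Counter

# A toy corpus; a real LLM trains on hundreds of billions of words.
corpus = "the cat sat on the mat and the cat chased the dog".split()

# Count which word follows which (a simple bigram model).
next_word_counts = defaultdict(Counter)
for current_word, following_word in zip(corpus, corpus[1:]):
    next_word_counts[current_word][following_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

# "the" was followed by "cat" twice, "mat" once and "dog" once,
# so the model predicts "cat".
print(predict_next("the"))
```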

You would also be familiar with using chat bots for customer service queries, but again, these are painfully limited.

ChatGPT is just the next iteration of this technology and is solving problems in ways that baffle the mind. From simplifying complex university texts, to writing poems in the style of Shakespeare, to giving detailed “how to” guides… it really is amazing what it can do. It even scored 52% on the SAT.

It does have limitations though.

Firstly, and most importantly, it will give wrong answers. Perhaps more worryingly, it will do so in a very confident manner, meaning you might not suspect it is wrong.

Secondly, the data it has learnt from only runs up to 2021. Additionally, because LLMs are text predictors rather than databases, their inputs and models are harder to update and retrain on the fly.

Thirdly, as these models are a function of the data and text they are given, there are inherent biases in their results. However, this is a challenge that almost all AI tools face.

So, will ChatGPT take my job?

This sort of technology is likely to disrupt many parts of our daily lives. But to calm concerns over it taking your job, it is more likely to be used as a tool.

To give you some idea of the impact that this type of tool can have though, 13% of jobs in the USA involve writing as their main task. That’s $675bn of value, or 3% of US GDP. Given it can help you write your essay, your news article, or your letter to Santa, it is already becoming very hard to distinguish whether a human or computer wrote a piece of text.

As these models have improved though, we are learning that they are capable of even more applications than was first imagined. For example, OpenAI discovered that this tool was really good at coding.

Microsoft’s CTO Kevin Scott shared that their programmers are already using GitHub Copilot to write basic code. Copilot is a collaboration between GitHub (owned by Microsoft) and OpenAI that already has >100k daily users. It is producing more than 30% of the code in the commits from Microsoft right now, freeing up time for these skilled developers to think more deeply about the problems they are trying to build for and solve.

The other important question we are considering is… will ChatGPT disrupt Google Search?

Implications for Google

Firstly, it must be said that scale really does matter in this game. The bigger you can make the model, the more generalisation you get from it, and the broader the range of problems it can solve. OpenAI’s deal with Microsoft was pivotal to their success.

As a result, companies like Amazon, Microsoft and Google are at an advantage to smaller scale companies looking to replicate what OpenAI has done. These big tech firms hold significant market share in terms of cloud computing infrastructure and produce large cash flows that enable them to continue to invest heavily without outside funding.

Focusing on Google, Search already uses AI models to improve its results and better embed advertising. The latest is what they call the “Multitask Unified Model”, or MUM, which is 1,000x more powerful than BERT, their 2019 AI model. However, the human has always been the arbiter in these models, clicking on the links they find the most helpful, or clicking on the ads that are most relevant (and thus teaching the machine).

There is a distinct difference between search engines and a chat-based LLM too.

Search engines are indexes of the internet, an endless database that updates constantly. You can’t ask ChatGPT for live scores in the Australian Open.

Additionally, the sort of searches where Google makes the most money — travel, retail, insurance, etc. — may not be well-suited to chat-like interfaces. As we mentioned above, ChatGPT only uses data up to 2021, so its answers become out of date very quickly.

It’s not like Google have been sitting on their hands though.

They spend 12–14% of revenues on Research & Development, which is ~$160bn of accumulated spend in the last 6 years. That’s a lot of money, even if only a slice of it was going directly into AI.

They also acquired DeepMind in 2014 for ~$500mil, which has become their own AI “research institute”. Its founder and CEO, Demis Hassabis, told reporters this week that DeepMind may release a public beta of its reply to ChatGPT, called Sparrow, later this year.

CEO Sundar Pichai was also on the PR front foot this week, spruiking all of Google’s internal advancements.

However, whether or not Google have a reply to ChatGPT is not really the question.

Over the past seven years, Google’s primary business model innovation has been to insert ever more ads into Search, a particularly effective tactic on mobile. However, this business model doesn’t work in a ChatGPT-like product.

Thus, even if they have a superior version, releasing it might mean killing their own golden goose. And herein lies the innovator’s dilemma.

So what happens next?

While we do not have an answer for Google’s strategy or plans with AI (no doubt they’ll be the focus of the next investor call though), we do have some thoughts about moats and how technologies evolve.

Google is the default in almost every browser and on every phone, and over two decades, people have formed habits of using Google for everything. No matter how good alternatives may end up being, this competitive advantage takes time to disrupt.

The thing with disruption is that, if left ignored, slow change can become quick change. Even though ChatGPT has many flaws compared to Google Search today, this technology is moving fast and may be radically better again in six months’ time.

Given the media attention and recent performance of the share price, Google’s management are no doubt wide awake to these questions.

Implications for Microsoft

While Google rightly has taken most of the spotlight in the debate, it is also worth briefly mentioning Microsoft’s role in this, because they seem to be the best placed of all big tech.

They have already integrated much of OpenAI’s technology into Azure and have flagged that ChatGPT will be available soon too. The announcement made this week has a great video of how this technology can be used today by developers.

There are also further rumours of an additional $10bn investment into OpenAI. If this were the case, we suspect there are further strategic partnerships involved. Perhaps OpenAI has worked out that Microsoft has the scale, vertical ecosystem, and capabilities to monetise its innovations better than it ever could. Thus, OpenAI would be better off just focusing on maintaining its culture, developing new models and holding itself to the core mission of safe AI.

So, while being the sole cloud provider for OpenAI has probably been expensive for Microsoft, it has set them up to provide the infrastructure that could underpin so much of Silicon Valley’s next phase of product creation.

It has also given Microsoft a front-row seat to remarkable innovations, perhaps better than anything the same amount of internal R&D spend could have generated. The bonus is being able to integrate these AI models into their product suite at scale before their peers.

Returning to the Google question — Microsoft’s search engine Bing is 2nd to Google with only ~8% share and is relatively small in terms of revenue contribution to the group. If incorporating ChatGPT-like results into Bing risks the advertising business model but provides the opportunity to gain massive market share… that could be a bet well worth making.

The information provided in this newsletter is for general information purposes only and does not take into account your personal circumstances. It is not intended to be a substitute for professional advice, so please seek the advice of a licensed financial advisor before making any investment decisions. The author and publisher of this newsletter may hold positions in the stocks mentioned. The author is a representative of Lugarno Partners Pty Ltd AFSL № 508934.
