ChatGPT | Implications of Generative AI

- 4 min read

What is ChatGPT?

OpenAI's ChatGPT chatbot, released in November 2022, is so effective at responding to chat prompts that it is already being applied to a wide range of use cases. ChatGPT surpassed 1M users in 5 days as it went viral in early December 2022. For comparison, it took Netflix 41 months to reach 1M users, Facebook 10 months, Spotify 5 months, and Instagram 2.5 months. Given these astounding user metrics, it's worth taking a closer look at why users are infatuated and where ChatGPT may be disruptive. This post focuses primarily on the implications and assumes a basic understanding of ChatGPT.

[Image: ChatGPT's meteoric rise in popularity. Ref: Chartr]

There is speculation that ChatGPT may disrupt current search engines. ChatGPT provides the user with a direct answer, whereas search engines effectively hand the user a research project. There is a clear paradigm shift in the flow of information. ChatGPT formulates a single, detailed answer using billions (rumored to be trillions with GPT-4) of parameters, whereas search engines require users to sift through numerous sites. When a user queries ChatGPT, information flows to the user instead of the user manually scraping blogs, forums, and listed sites for it.

ChatGPT wasn't created as a replacement for search. The potential search use case is a corollary of how effective its conversational replies are. Results returned by ChatGPT may be inaccurate, lack citations, and are limited to the period covered by its training data (up to 2021). Currently, there's a category of user queries for which ChatGPT may provide a better experience, a category for which Google search may be more useful, and a category where the user may be content with either method.

The case for ChatGPT-like generative models disrupting search gets stronger when considering that they could be designed specifically for the search use case. It's not difficult to imagine a "SearchGPT" with parameters for fine-tuned search queries. For instance, selecting certain datasets (Wikipedia, Commons, scientific journals, etc.) could allow stricter, fact-based searches or more creative, subjective results. Instead of thinking in terms of one search method replacing another, it seems more likely that current search engines will integrate generative AI responses as another tool in their repertoire to build the best search experience for users. In fact, Google has reportedly already declared a "Code Red" in response to the disruptive prospect of GPT being applied to search.
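
To make the idea concrete, here is a minimal sketch of what such a "SearchGPT" query might look like. The SearchGPTQuery class, the corpus names, and the temperature/citation parameters are all illustrative assumptions, not a real API.

```python
# Hypothetical "SearchGPT" query interface -- illustrative only, not a real API.
from dataclasses import dataclass, field

@dataclass
class SearchGPTQuery:
    prompt: str
    # Restrict generation to selected corpora (names are made up for illustration).
    corpora: list[str] = field(default_factory=lambda: ["wikipedia", "scientific_journals"])
    # Lower temperature -> stricter, fact-based answers; higher -> more creative ones.
    temperature: float = 0.2
    # Ask the model to attach source citations to each claim.
    cite_sources: bool = True

# A strict, fact-based search restricted to an encyclopedia-style corpus.
query = SearchGPTQuery(prompt="What causes auroras?", corpora=["wikipedia"], temperature=0.1)
```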

Will ChatGPT disrupt SaaS offerings and applications?

Imagining ChatGPT as a function with restrictions around its input and output gives further insight into how it may be integrated into apps. Training data can be filtered in different ways to shape responses, and settings such as the model's temperature can be tuned to optimize for different use cases. It seems likely that simple rule-based systems could effectively wrap generative models, placing constraints or invariants that shape an app or service. For instance, with speech-to-text and text-to-speech layers on either side, it may be possible to bootstrap voice assistants and customer service. Many SaaS offerings and applications may be transformed by GPT tailored to their niche.
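
As a rough illustration, a thin rule-based wrapper around a generative model might look like the sketch below. The generate_reply function is a placeholder for whatever model API an app would actually call, and the specific rules are assumptions made for the example.

```python
# Minimal sketch of a rule-based wrapper around a generative model.

BANNED_TOPICS = {"medical diagnosis", "legal advice"}  # illustrative constraints
MAX_REPLY_CHARS = 800

def generate_reply(prompt: str, temperature: float) -> str:
    # Placeholder: in a real app this would call a hosted model API.
    return f"(model reply to {prompt!r} at temperature={temperature})"

def customer_service_bot(user_message: str) -> str:
    # Input constraint: refuse out-of-scope topics before calling the model.
    if any(topic in user_message.lower() for topic in BANNED_TOPICS):
        return "Sorry, I can't help with that. Let me connect you to a human agent."

    # A low temperature keeps replies consistent for a support use case.
    reply = generate_reply(user_message, temperature=0.2)

    # Output constraint: keep replies short and on-topic.
    return reply[:MAX_REPLY_CHARS]

print(customer_service_bot("Where is my order?"))
```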

Several GitHub repos are already reverse engineering the ChatGPT API to provide a usable interface for developers. Rumors also posit that GPT-4 will have trillions of parameters compared to GPT-3's billions. While we will have to wait and see how GPT improvement scales with increased training data and parameters, it's already clear that applications which send and receive text data may be disrupted by language models in the long term. Some clever users have even asked ChatGPT for code that produces a 3D STL file and for text that readily converts into melodies, effectively transforming ChatGPT's text output into several other domains.

What are the broader implications of generative AI?

Functional generative AI has another interesting corollary. If generated images, text, and audio are so realistic that they can fool human judges, it's possible to use generative models to create training data. Papers have already examined how augmenting datasets with generated data improves the power of the resulting models. An obvious cycle emerges (a toy sketch follows the list):
  1. Increase training data to improve generative output.
  2. Improved generative output leads to increased training data.
  3. Repeat ad infinitum.
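
A toy version of that cycle, where every function is a placeholder for a real training, sampling, or filtering step:

```python
# Toy sketch of the data-augmentation cycle; every function is a placeholder.

def train_generator(dataset):
    # Fit a generative model on the current dataset (placeholder).
    return {"trained_on": len(dataset)}

def generate_samples(model, n):
    # Sample n synthetic examples from the model (placeholder).
    return [f"synthetic_{model['trained_on']}_{i}" for i in range(n)]

def passes_quality_filter(sample) -> bool:
    # Human review, heuristics, or a judge model would go here.
    return True

dataset = ["real_example_1", "real_example_2"]
for _ in range(3):  # "repeat ad infinitum", truncated to three rounds
    model = train_generator(dataset)
    synthetic = generate_samples(model, n=10)
    dataset += [s for s in synthetic if passes_quality_filter(s)]

print(len(dataset))  # the dataset grows each round
```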

Another example of generative AI producing usable data is the use of Generative Adversarial Networks (GANs). A Super-Resolution GAN (SRGAN), for instance, can generate an image at a higher resolution than the original input. Data-producing and data-transforming pipelines become plainly apparent.
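
A rough sketch of such a data-transforming pipeline is below; load_srgan_generator and the image helpers are stand-ins rather than any specific library's API.

```python
# Sketch of a super-resolution pipeline; the SRGAN loader and image helpers
# are illustrative stand-ins, not a specific library's API.
from pathlib import Path

def load_srgan_generator():
    # Placeholder: load pretrained SRGAN generator weights here.
    def upscale(image):
        return image  # a real generator would return, e.g., a 4x-larger image
    return upscale

def load_image(path):
    return path.read_bytes()  # placeholder for real image decoding

def save_image(image, path):
    path.write_bytes(image)   # placeholder for real image encoding

upscale = load_srgan_generator()
out_dir = Path("high_res")
out_dir.mkdir(exist_ok=True)
for path in Path("low_res").glob("*.png"):
    save_image(upscale(load_image(path)), out_dir / path.name)
```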

With the effective use of generative AI to augment datasets, we have access to an abundance of data. Perhaps generative AI will resolve the bottleneck of immense training data requirements and shift focus to cleaning generated data.

As SaaS offerings and various applications scramble to integrate ChatGPT and other generative AI tools that users crave, we’ll learn more about implications for copyright, competitive advantage, and how companies reshape the moat around their offerings.
