Friday December 11, 2020 | Q&A

What are some uses of GPT-3?

GPT-3 debuted this past summer and made for some big headlines. Since its debut, various implementations have each garnered attention for a short period. There hasn't been a significant, purposeful implementation yet, and that's to be expected of a new technology that some are wary of making public.

For the most part, my knowledge of GPT-3 was limited to the articles and demonstrations that came across my radar. After applying for access, I didn't look much further into specific implementations and use cases. Over the past 24 hours, I spent some time catching up on the developments by listening to podcast episodes from Exponential View and Machine Learning Street Talk, and by reading articles on Twilio for specific implementations, as well as Towards Data Science and [Towards AI](https://medium.com/towards-artificial-intelligence/crazy-gpt-3-use-cases-232c22142044) for specific use cases.

In most regards, there's reason to be amazed by the capabilities. This is a significant step forward from GPT-2 and it can provide great results in many scenarios.

There are also obvious limitations, and that's to be expected. This is step 3 on what's designed to be an exponential growth curve. Based on the podcast discussion with Sam Altman, part of the purpose of releasing GPT-3 now is to see how it's used, find the limitations, and refine. It is wrong to classify GPT-3 as artificial intelligence of any sort. It merely takes a brief input, infers keywords and meaning, and spawns a human-readable answer from a vast data source. GPT-3 reminds me of talking to someone who is extremely knowledgeable about a vast number of topics, and extremely confident in the answers regardless of the extent of that knowledge. The specific answers may not be entirely correct, yet the general theses are relatively "correct" (given the extent of that knowledge).
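
For context, that brief-input-to-answer loop is what the API exposes: you send a short text prompt and get back a completion. A minimal sketch using OpenAI's Python client as it existed at launch; the prompt and sampling parameters here are illustrative, not a recommendation:

```python
import os
import openai

# The API key comes from the access application mentioned above.
openai.api_key = os.environ["OPENAI_API_KEY"]

# Send a brief prompt; GPT-3 infers intent and continues the text.
response = openai.Completion.create(
    engine="davinci",   # the largest GPT-3 engine at launch
    prompt="Explain what GPT-3 is in one sentence:",
    max_tokens=64,      # cap the length of the completion
    temperature=0.7,    # >0 allows varied, sometimes surprising output
)

print(response.choices[0].text.strip())
```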

Some of the most significant use cases involve taking a small amount of text and creating something that closely resembles the result of extensive human work. This includes written language, as well as computer code. In both cases, the answers tend to be somewhat fuzzy and are generally approximations of the desired end result. The results still require refinement to be entirely ready for the real world.
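
A sketch of that text-to-code pattern: give the model a short instruction plus one worked example, and treat whatever comes back as a draft to review, not finished work. The prompt format and examples below are my own invention:

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# One worked example steers the model toward the output format we want;
# the completion is an approximation that still needs human review.
prompt = (
    "Description: a button that says 'Subscribe'\n"
    "HTML: <button>Subscribe</button>\n"
    "\n"
    "Description: a red heading that says 'Welcome'\n"
    "HTML:"
)

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=60,
    temperature=0.2,   # lower temperature for more literal output
    stop=["\n\n"],     # stop at the end of the generated snippet
)

draft_html = response.choices[0].text.strip()
print(draft_html)  # a draft to refine, not a finished page
```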

Those limits make deploying GPT-3 without some level of control and filtering problematic. It'd be akin to trusting that Google always provides the best results on the first page, regardless of the input. Even after 20+ years, it's still possible to find bad results on Google, and it's much easier with GPT-3 today.
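
One way to picture that control-and-filtering layer: wrap the completion call so nothing reaches a user unchecked. A rough sketch; the blocklist and the review fallback are placeholders for whatever policy a real deployment would actually need:

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Placeholder policy: a real deployment would use something far more robust.
BLOCKLIST = {"medical advice", "guaranteed returns"}

def guarded_completion(prompt: str) -> str:
    """Return a GPT-3 completion only if it passes a basic filter."""
    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=100,
        temperature=0.7,
    )
    text = response.choices[0].text.strip()
    # Hold anything that trips the filter for a human to look at.
    if any(phrase in text.lower() for phrase in BLOCKLIST):
        return "[held for human review]"
    return text
```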

There seems to be a distinction between bold and useful. The most impressive achievements, e.g. coding an entire page from a brief input, have limitations because real-world output must be so specific. It's impressive, but not all that practical.

At this point, there's more potential in creating results that can be fuzzy and not entirely correct. GPT-3 excels at creating fictional narratives, finding unrealized intersections, and producing unexpected outputs. Those are the elements to harness. The baseline use seems to be in creating a filtered view via inputs: train it to see the world through a specific, focused view, and allow it to look at much more to produce ideas and results that humans would not naturally come across, but may find interesting as an idea.
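
That "filtered view" is essentially prompt design: prime the model with a narrow point of view and an example or two, so everything it produces passes through that lens. A sketch, with an invented persona and example that a real prompt would tune by trial and error:

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Prime the model with a focused lens; the persona and worked example
# below are invented purely for illustration.
PERSONA = (
    "You are a historian of transportation. You connect every topic\n"
    "back to how people and goods move.\n"
    "\n"
    "Topic: coffee\n"
    "Idea: Coffeehouses clustered around ports and coach stops, so the\n"
    "drink spread along trade routes before it spread through taste.\n"
    "\n"
)

def focused_idea(topic: str) -> str:
    """Ask the primed model for an idea seen through its assigned lens."""
    response = openai.Completion.create(
        engine="davinci",
        prompt=PERSONA + f"Topic: {topic}\nIdea:",
        max_tokens=80,
        temperature=0.9,   # higher temperature invites unexpected intersections
        stop=["\n\n"],
    )
    return response.choices[0].text.strip()

print(focused_idea("public libraries"))
```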
