Product Thinking: ChatGPT’s Model Selection Dropdown

Originally Posted: June 25, 2025

OpenAI’s predicament as “the accidental consumer tech company” makes it fertile ground for product critique. Its now-flagship product, ChatGPT, launched seven years after the company was founded, which means a lot was built (product and operations) before the company really knew what it was all about.

By most accounts, ChatGPT began as a proof of concept that caught lightning in a bottle, and it is now setting land-speed records for user adoption. That leaves various bits and pieces of the product that haven’t caught up with its position in the market.

One of my favorite examples is the Model Selection Dropdown or MSD (hopefully they have a better internal name). It’s perhaps the ChatGPT feature that best illustrates OpenAI’s split personality between research lab and consumer company.

As a thought experiment, what if OpenAI had only ever built models and public APIs? In this world, some other company was the first to figure out the chat interface and capture the word-of-mouth virality of ChatGPT. I think it’s fair to say that in this scenario, the company that cracked consumer adoption would have abstracted away the model(s) underneath. And of course, at launch, ChatGPT didn’t have model selection either, because there was only one model.

TANGENT

Incidentally, this is a realistic hypothetical. OpenAI made versions of the GPT model that powered the initial version of ChatGPT available through its API in March 2022, a full eight months before the launch of ChatGPT itself.

Just think of all the new AI stuff that ships in eight months these days, and consider that a breakthrough LLM was just sitting there waiting to be productized.

OpenAI did hold back some improvements for the ChatGPT launch (the product shipped with a fine-tuned version of GPT-3.5), but there was certainly enough available between March and November 2022 for someone else to have figured it out.

I can say this confidently because this was when my startup decided to go all in on AI-powered features. As an example, we built FeedbackAI, which helped managers write thoughtful employee feedback based on just a few adjectives. Others, like Jasper.ai, had more success and went from X to Y insanely fast. But no one figured out the generalized chatbot interface, or at least no one who found distribution. It’s fair to say that at this point in the company’s history, OpenAI didn’t have a huge leg up on distribution, so that doesn’t fully explain why everyone else missed it.

Let’s look more closely at the feature.

What is it?

The Model Selection Dropdown, or MSD, is a highly prominent feature in the ChatGPT desktop and webapp that lets the user select which of OpenAI’s proprietary models to use in a chat.

Notably, it doesn’t appear in the logged-out version of the webapp.

Product thinking

OpenAI has famously struggled with naming and product marketing in general. Sam Altman has said that at OpenAI “product decisions are downstream of research decisions,” so, at least historically, there haven’t been the usual feedback loops from the customer through product to the engineering org. I’ve heard Kevin Weil, the newish CPO at OpenAI, indicate in recent interviews that this is starting to change, but it seems like a new muscle, which isn’t surprising given OpenAI’s split personality between research lab and incidental consumer product company.

After the success of ChatGPT, the research function of OpenAI continued to crank out new model versions. Given the center of gravity of the company and the primordial state of other functions at the time, my assumption is that the MSD was viewed as a “container feature.” That is, it would let the company launch new models behind the dropdown as they came out, without disrupting the user experience as it existed, and without risking the product’s intense product-market fit with unintended consequences.

There’s an interesting clue in how o3 is positioned in the MSD. The label in the app says “uses advanced reasoning,” which is an engineering-centric view of the feature. Increasing test-time compute, i.e. “reasoning,” was certainly a huge technical innovation, but I’m not sure how an average user would know when to use “advanced reasoning” vs 4o, which is labeled “great for most tasks.”

The MSD also provides an easy way to handle feature gating for pricing purposes. Of course, not all feature gating happens behind the MSD, but users who want every new model iteration, and who appreciate the finer technical points such as “advanced reasoning,” can pay more to see these features appear in their interface.

Going back to our thought experiment: while a consumer-first company would likely have abstracted away the model(s) used behind the scenes, the MSD is a more tangible way to handle price discrimination than just saying “if you pay more, you get a smarter chatbot, trust us.”

TANGENT

My guess is there was (and perhaps still is) quite a bit of internal debate about how to package model improvements. I could see engineering caring most about breakthroughs that are technical in nature, while viewing smaller improvements that might be more “marketable” as trivial. I imagine this would put other functions with less organizational clout in a tough position, not wanting to be perceived as slowing things down, but also wanting to provide some structure in terms of packaging and pricing. 

Certainly the hire of Kevin Weil as CPO, and the even more recent hire of Fidji Simo as CEO of Applications, will help balance this out. I also wonder whether a yet-to-be-hired marketing-oriented executive is needed as well. I can only imagine the challenge that person would face in building the conviction (and internal credibility) needed to wrangle the hot potato of product packaging.

Opportunities and Ideas

  1. User-centric labeling of the models. The people who care about the technical details will consume that content elsewhere.

  2. Add a “let ChatGPT choose” option to the MSD. It’s a good way to experiment with abstracting away the selection. When users select it, do they ever switch back?

    • There could still be visual cues on the product that show which model is being used

    • Similar to “searching the web,” it could say “using o3 advanced reasoning”

  3. Make “let ChatGPT choose” the default in the free version with caps on power features. This would give free users an opportunity to experience the benefits of upgrading, while still maintaining price discrimination between tiers.

    • Have an option to turn premium features on or off so free users could ration their usage for more important queries. Ultimately, they won’t want to do this.

    • Ask mid-query whether the user wants to use some of their premium credits, i.e. “this question could benefit from deeper reasoning, use X credits?”
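To make ideas 2 and 3 concrete, here’s a minimal sketch of how automatic model routing with credit gating might work. Every name, cap, and heuristic below is hypothetical and purely illustrative; this is not OpenAI’s actual implementation.

```python
# Hypothetical sketch: "let ChatGPT choose" routing with premium-credit gating.
# The keyword heuristic stands in for a real query classifier.

DEFAULT_MODEL = "standard"
PREMIUM_MODEL = "advanced-reasoning"
FREE_TIER_PREMIUM_CREDITS = 5  # invented monthly cap for free users


def needs_deep_reasoning(prompt: str) -> bool:
    """Stand-in for a real classifier that decides when to route up."""
    keywords = ("prove", "step by step", "analyze", "debug")
    return any(k in prompt.lower() for k in keywords)


def route(prompt: str, credits_left: int, confirm_spend) -> tuple[str, int]:
    """Pick a model, asking mid-query before spending a premium credit."""
    if needs_deep_reasoning(prompt) and credits_left > 0:
        # UI would ask: "This question could benefit from deeper
        # reasoning, use 1 credit?"
        if confirm_spend(prompt):
            return PREMIUM_MODEL, credits_left - 1
    return DEFAULT_MODEL, credits_left


# Usage: a free user accepts the upsell prompt once.
model, remaining = route(
    "Prove this step by step",
    FREE_TIER_PREMIUM_CREDITS,
    confirm_spend=lambda p: True,
)
```

The point of the sketch is the shape of the decision, not the classifier: the user never picks a model, but the product can still surface which one was used and where the credits went.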

FINAL TANGENT

Incidentally, Cursor uses a similar paradigm to my recommendation.

Cursor is a highly intentional product company, so while I think a similar approach makes sense for OpenAI, Cursor certainly has additional considerations. A few that come to mind:

  1. Cursor’s highly technical users would demand flexibility. Engineers famously debate the tradeoffs of other technical decisions like Ruby vs Python or PostgreSQL vs NoSQL, so of course similar tribalism will emerge around models. Having a highly visible choice mechanism allows Cursor to satisfy all tribes.

  2. They need a mechanism for enterprise customers to control model usage and allow for the use of in-house private models.

  3. They want to play nice with all model companies. Cursor is better off in a world where there are lots of competitive models, and they get to be the gatekeeper to users.