In artificial intelligence, scaling refers to improving existing models by increasing the amount of training data, the computing power, or the size and complexity of the model itself. As the field advances, the cost of scaling has only gone up, which has fed a widespread assumption: artificial intelligence will plateau as those costs become prohibitive. On March 11th, I attended the event “AI Won’t Plateau – if We Give It Time To Think,” which focused on challenging that assumption.
The dominant approach of the last five years has been to scale models up, but this has become increasingly unsustainable because of the exponential rise in costs: “today's frontier models can cost hundreds of millions of dollars to train, and there are reasonable concerns among some that AI will soon plateau or hit a wall” (Brown, 2025, 0:36). Brown argued that while this concern sounds reasonable, a plateau is not inevitable, because the worry assumes that artificial intelligence can only improve by scaling the model further. Another path forward comes from the power of thinking. Drawing on chess, he noted that “deep Blue thought for a couple of minutes before making each move” (Brown, 2025, 6:53): an AI can improve not only by growing larger but also by spending more time analyzing and refining its output before answering. Brown pointed to similar results in poker and Go, which illustrate the potential of this approach as scaling grows ever more expensive. It offers a cheaper alternative, and it will only become more beneficial and viable in the future.
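To make this idea concrete, here is a small sketch I wrote; it is my own illustration rather than code from Brown’s talk, and the names propose and score are hypothetical stand-ins for a model’s quick guess and a verifier. The point it shows is that, with the model held fixed, simply allowing more candidates to be generated and checked before answering (“thinking longer”) tends to produce a better final answer.

```python
import random

TARGET = 0.0  # the "correct" answer for this toy problem


def propose() -> float:
    # A stand-in for one quick, noisy model guess.
    return random.uniform(-10, 10)


def score(candidate: float) -> float:
    # A stand-in for a verifier: higher is better (closer to TARGET).
    return -abs(candidate - TARGET)


def answer_with_thinking_budget(num_candidates: int) -> float:
    # "Thinking longer" here means generating and checking more candidates
    # before committing to a final answer.
    candidates = [propose() for _ in range(num_candidates)]
    return max(candidates, key=score)


if __name__ == "__main__":
    random.seed(0)
    for budget in (1, 10, 100, 1000):
        best = answer_with_thinking_budget(budget)
        print(f"budget={budget:4d}  answer={best:+.3f}  error={abs(best - TARGET):.3f}")
```

Running this, the error shrinks as the budget grows even though the underlying “model” never changes, which is the core intuition behind improving AI through inference-time thinking rather than further scaling.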
The drawback of letting AI think is latency: each response takes longer, so the user ends up waiting. This waiting reflects how such systems improve, since “machine learning algorithms adapt their algorithms with iteration” (Pooyandeh & Sohn, 2021, Section 3.1). Waiting on artificial intelligence can be frustrating, but giving it time to iterate yields better responses at a lower cost than training ever-larger models. Much as humans need time to think, deliberation allows AI to reason through a problem before committing to a decision. Efficiency still matters, of course: “The purpose of most technologies is to increase the accuracy, reduce the delay, improve the energy consumption, improve the cache hit rate, and improve the QoS/QoE” (Pooyandeh & Sohn, 2021, Section 5). Allowing AI to think does not replace scaling; it complements it. Models built around deliberation represent a shift in focus for artificial intelligence in the coming years, because the ability to think makes a model more efficient and effective. The potential of this kind of artificial intelligence is broad: from design to problem-solving, such systems can make better overall decisions whenever instant responses are not necessary.
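The latency trade-off can also be sketched directly. The toy “anytime” loop below is my own illustration, not taken from either source: refine is a hypothetical improvement step, and the only claim is that the longer the user is willing to wait, the more refinement steps fit inside the deadline and the better the final answer tends to be.

```python
import random
import time


def refine(current: float, target: float = 0.0) -> float:
    # A stand-in for one refinement step: keep a random tweak only if it helps.
    tweak = current + random.uniform(-1, 1)
    return tweak if abs(tweak - target) < abs(current - target) else current


def answer_within_deadline(seconds: float) -> tuple[float, int]:
    # Anytime loop: the answer keeps improving while time remains,
    # and we stop as soon as the user's latency budget is spent.
    deadline = time.monotonic() + seconds
    answer, steps = random.uniform(-10, 10), 0
    while time.monotonic() < deadline:
        answer = refine(answer)
        steps += 1
    return answer, steps


if __name__ == "__main__":
    random.seed(1)
    for budget in (0.01, 0.1, 1.0):
        ans, steps = answer_within_deadline(budget)
        print(f"latency budget {budget:5.2f}s  steps={steps:6d}  error={abs(ans):.4f}")
```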
In conclusion, the event challenged the belief that AI will plateau as the costs of scaling rise. Rather than relying only on ever-larger models, AI can also improve by taking time to think. While this approach delays responses, it enhances accuracy and decision-making, making AI more sustainable and more capable even as training costs continue to climb. Intelligence gained through thinking offers a far more cost-effective path than traditional scaling alone.
Resources
Brown, N. (2025, February 15). AI Won’t Plateau — if We Give It Time To Think [Video]. TED. https://www.ted.com/talks/noam_brown_ai_won_t_plateau_if_we_give_it_time_to_think
Pooyandeh, M., & Sohn, I. (2021). Edge network optimization based on AI techniques: A survey. Electronics, 10(22), Article 2830. https://doi.org/10.3390/electronics10222830