One Year After the Launch of GPT-5 and Groq-4: Analyzing the Impact on NVIDIA’s Revenue and TSMC’s Market Capitalization

How GPT-5 & Groq-4 Will Boost AI Semiconductor Sales and Shift Market Capitalization

Explosive Demand for AI Semiconductors

Since the GPT-4 phenomenon, demand for data center AI chips has exploded. NVIDIA’s data center revenue surged 112% year-over-year to $30.77 billion in a single quarter. Data center sales now account for over half of the company’s total revenue, driven by hyperscale cloud providers investing in ultra-large models, and have posted triple-digit growth for two consecutive quarters.

• TSMC’s Growth: With AI chip manufacturing booming, TSMC’s HPC (High-Performance Computing) platform revenue increased 58% year-over-year, contributing 51% of annual revenue in 2024. The TSMC CEO has stated that “AI accelerator-related revenue tripled in 2024 and will more than double in 2025.”

• GPT-5’s Influence: The anticipated increase in training and inference demand from GPT-5 will further drive revenue growth. It is reported that OpenAI deployed approximately 25,000 NVIDIA A100 GPUs for training GPT-5, meaning the model’s success will likely translate into massive GPU purchases and expanded data center investments.

• Hyper-Scale Expansion: This trend is reinforced by a reported 82% surge in global hyperscaler data center CapEx in Q3 2024, where AI infrastructure investments played a key role.
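As a quick sanity check on the growth figures above, the year-ago quarter implied by the reported 112% growth can be back-computed. This is a sketch using only the numbers quoted in this article, not official filings:

```python
# Back-of-envelope check of the data center revenue figures quoted above.
# Both inputs are the article's reported numbers, not official filings.
current_q_rev_b = 30.77      # NVIDIA data center revenue, $B per quarter
yoy_growth = 1.12            # +112% year-over-year

prior_year_q_rev_b = current_q_rev_b / (1 + yoy_growth)
print(f"Implied year-ago quarter: ${prior_year_q_rev_b:.1f}B")  # ≈ $14.5B
```

Doubling-plus from a ~$14.5B base in a single year is what makes this cycle unusual: the growth is compounding on an already very large revenue base.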

Skyrocketing Market Capitalization

Following the introduction of GPT-4, as AI semiconductor demand exploded, the market values of NVIDIA and TSMC soared:

• NVIDIA’s Valuation: NVIDIA’s market capitalization jumped from $1.2 trillion at the end of 2023 to $3.28 trillion by the end of 2024—a gain of more than $2 trillion in just one year. At one point in October 2024, NVIDIA even surpassed Apple as the world’s most valuable company, reflecting the fervor among investors.

• TSMC’s Performance: Riding the AI boom, TSMC’s market capitalization climbed to approximately $833 billion in October 2024, with its stock price surging over 90% year-to-date. TSMC’s position as an exclusive manufacturing partner for cutting-edge AI chips from NVIDIA, AMD, and others has directly translated the surge in AI demand into impressive earnings and market value.

• Future Projections: One year after the launch of GPT-5 and Groq-4, data center sales are expected to contribute an even larger share of NVIDIA’s revenue (currently over 50%), and TSMC’s share of HPC/AI chip revenue is projected to keep growing—strengthening both companies’ financial position. Notably, TSMC now derives 74% of its wafer revenue from advanced processes (7nm and below), directly reaping the benefits of the AI era.

Server Expansion and Infrastructure Scaling

The advent of ultra-large models such as GPT-5 is accelerating the race among global cloud companies to expand their data centers.

• In 2024, big tech companies are estimated to have made capital expenditures of approximately $236 billion—an increase of more than 50% year-over-year. Companies like AWS and Google plan to invest tens of billions of dollars in AI infrastructure over the coming years.

• CapEx Insights: In Q3 2024, 40% of global data center CapEx was allocated to AI infrastructure (accelerated servers), fundamentally altering the server market. NVIDIA GPU-based AI servers now account for roughly 40% of quarterly OEM server sales, with demand for AI training servers offsetting the slowdown in the general server market and driving double-digit growth.

• Looking Ahead: One year after GPT-5’s launch, major cloud providers are expected to build data centers equipped with more H100/H200 GPU pods and Groq accelerators, while enterprises expand their AI adoption. This will likely sustain an annual growth rate of 30–40% in AI semiconductor demand. In summary, in the era of GPT-5/Groq-4, both NVIDIA and TSMC are poised to enjoy unprecedented AI market success, exceeding investor expectations in both revenue growth and market capitalization.

One Year After the Launch of GPT-6 and Groq-5: Next-Generation Performance and Market Growth Prospects

Enhanced Computational Scale and Efficiency in GPT-6

• GPT-6 Advancements: GPT-6 is expected to bring significantly improved computational capability and intelligence over previous generations. While GPT-5’s training is estimated at roughly 1.7×10^26 FLOPs (floating-point operations), GPT-6 could demand an order of magnitude more, or instead evolve through optimized model structures toward greater efficiency. OpenAI has remarked that “as models become smarter, high-quality data and inference optimization become more important than sheer data volume,” suggesting that GPT-6 will likely focus on efficiency improvements.

• Increased Hardware Requirements: Despite efficiency gains, any increase in parameter count or multimodal processing will demand far greater computational power than GPT-5. NVIDIA’s next-generation GPU architecture—following the Blackwell series—is expected to improve computational speed and energy efficiency. Meanwhile, the Groq-5 accelerator will build on Groq’s LPU (Language Processing Unit) architecture, optimizing for ultra-low latency, high-efficiency inference.

• Benchmark Success: Groq’s LPU has already demonstrated impressive results by generating 241 tokens per second on the Meta Llama2-70B benchmark—more than twice the speed of comparable GPUs—and achieving ten times higher energy efficiency. Groq-5 will further reinforce these advantages by accelerating GPT-6 inference even more efficiently on the same power budget.

Comparative Analysis of Computational Performance and Energy Efficiency

• GPU vs. LPU: When comparing next-generation NVIDIA GPUs with Groq-5, GPUs continue to offer versatility, whereas specialized chips like LPUs are expected to lead in energy efficiency. Groq has claimed that its LPU is “at least 10 times more energy efficient than GPUs,” with the GroqCard consuming only 1–3J per token compared to 10–30J for NVIDIA GPUs.

• Energy Efficiency Metrics: In the GPT-6 era, energy per token (joules per token) will be a critical metric due to data center power constraints. While NVIDIA aims to boost performance-per-watt through TSMC’s 3nm/2nm processes and HBM3e memory, the inherent overhead of a general-purpose architecture may leave room for Groq-5’s streamlined, inference-specialized design.

• Service Deployment: In practical applications, a mixed approach is expected: GPUs will dominate training tasks while LPUs (like Groq-5) will be preferred for large-scale real-time inference, combining explosive computational power with unmatched efficiency.
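The per-token energy figures above can be turned into a rough operating-cost comparison. This sketch uses the article’s claimed 1–3 J/token for Groq’s LPU and 10–30 J/token for GPUs (midpoints taken here as assumptions), plus an assumed electricity price:

```python
# Rough energy cost per million generated tokens, using the J/token ranges
# quoted above (midpoints assumed) and an assumed electricity price.
J_PER_KWH = 3.6e6  # joules in one kilowatt-hour

def kwh_per_million_tokens(joules_per_token: float) -> float:
    """Energy needed to generate one million tokens, in kWh."""
    return joules_per_token * 1e6 / J_PER_KWH

lpu_kwh = kwh_per_million_tokens(2.0)    # LPU midpoint: 2 J/token (assumption)
gpu_kwh = kwh_per_million_tokens(20.0)   # GPU midpoint: 20 J/token (assumption)
price_per_kwh = 0.10                     # assumed $/kWh

print(f"LPU: {lpu_kwh:.2f} kWh (${lpu_kwh * price_per_kwh:.3f}) per 1M tokens")
print(f"GPU: {gpu_kwh:.2f} kWh (${gpu_kwh * price_per_kwh:.3f}) per 1M tokens")
```

At these assumed midpoints the 10× efficiency claim translates directly into a 10× difference in electricity cost per token—small per request, but decisive at data-center scale.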

Future Outlook for Next-Generation AI Accelerators

• Market Expansion: The emergence of GPT-6 and Groq-5 is predicted to drive a substantial expansion of the AI accelerator market. Forecasts suggest the global market for AI chips and accelerators will grow from about $11 billion in 2024 to over $130 billion annually by 2030—more than a tenfold increase—with NVIDIA maintaining a dominant market share of roughly 74%.

• Revenue Projections: For instance, Mizuho Securities projects NVIDIA’s AI-related revenue could reach $259 billion by 2027—over four times its current level—with widespread adoption of GPT-6 playing a significant role.

• Infrastructure Investments: As enterprises across various sectors begin to adopt generative AI, cloud service providers and large corporations alike will build their own data centers with next-generation GPUs and AI accelerators. However, the competitive landscape might shift as hyperscalers accelerate their development of proprietary chips—similar to Google’s TPU or AWS’s Inferentia—to cut costs by more than 30%. Despite this, TSMC is expected to benefit from most of these custom chip orders due to its robust foundry capabilities.

• NVIDIA-TSMC Alliance: The close collaboration between NVIDIA and TSMC continues to yield attractive performance metrics, with NVIDIA’s market capitalization nearing $3.6 trillion by the end of 2024 (the world’s second-highest) and TSMC setting new quarterly records for HPC demand.
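The market forecast above implies a steep compound growth rate. A quick check, treating the article’s ~$11B (2024) and ~$130B (2030) figures as the endpoints:

```python
# Implied compound annual growth rate (CAGR) for the AI accelerator market,
# using the article's 2024 and 2030 endpoints as assumptions.
start_b, end_b, years = 11.0, 130.0, 6  # $B in 2024, $B in 2030, elapsed years

cagr = (end_b / start_b) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ≈ 51% per year
```

A sustained ~51% annual growth rate is far above typical semiconductor-market cycles, which is why these forecasts should be read as optimistic scenarios rather than baselines.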

AGI Development: GPU Demand and Data Center Scale Analysis

Comparing AGI and Existing Model Computational Requirements

• AGI’s Immense Computation Needs: Artificial General Intelligence (AGI) is anticipated to perform human-level tasks across diverse domains, demanding far more computation than any current AI model. While models like GPT-4 or GPT-5 are already estimated to require on the order of 10^25–10^26 FLOPs to train, AGI could require several times to tens of times more, necessitating massive scaling of computational resources.

• Industry Estimates: For example, some investors estimate that “enough GPUs to train 1,000 GPT-5-level models will be sold by 2026,” based on an assumption of 1.7×10^26 FLOPs per individual model. AGI, being more complex, might demand over ten times the computation, with training costs reaching several billion dollars. Morgan Stanley, for instance, estimates that training GPT-5 required around $225 million worth of NVIDIA hardware (approximately 25,000 A100 GPUs), and AGI could require several times that scale.

• Extreme Projections: Some analysts predict that achieving AGI might require a $1 trillion investment and consume 20% of the US’s electrical power, emphasizing the need for several generations of hardware innovation beyond today’s H100/H200 GPUs.
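The figures in these estimates can be combined into a back-of-envelope training-time calculation. This sketch takes the quoted 1.7×10^26 FLOPs and 25,000 A100 GPUs, and adds two assumptions not stated in the text: A100 peak dense BF16 throughput of ~312 TFLOPS and 40% sustained utilization:

```python
# Back-of-envelope training time for the cluster size quoted above.
# Peak throughput and utilization are assumptions, not reported figures.
total_flops = 1.7e26          # training compute quoted above
n_gpus = 25_000               # A100 count quoted above
peak_flops_per_gpu = 312e12   # A100 BF16 peak, ~312 TFLOPS (assumption)
utilization = 0.40            # assumed sustained utilization

effective_flops = n_gpus * peak_flops_per_gpu * utilization  # FLOP/s
days = total_flops / effective_flops / 86_400
print(f"Estimated training time: {days:.0f} days")
```

Under these assumptions the run would take well over a year—suggesting that either the compute estimate, the GPU count, or the assumed utilization in such back-of-envelope industry figures should be treated with caution.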

GPU Quantity and Infrastructure Requirements for AGI

• Massive GPU Clusters: Implementing AGI will require unprecedented GPU clusters and data center infrastructures. Today’s largest AI training clusters, comprising tens of thousands of GPUs (e.g., Microsoft and OpenAI’s supercomputer with over 10,000 A100 GPUs), would need to scale up to hundreds of thousands or even millions of high-performance GPUs in parallel for AGI training.

• Enormous Capital Expenditure: The initial investments alone could amount to trillions of dollars, not to mention the significant costs associated with constructing data centers, cooling systems, and power supply infrastructures.

• Industry Voices: Mustafa Suleyman, head of AI at Microsoft, has noted that “with current-generation hardware (e.g., NVIDIA Blackwell GB200 GPUs), AGI is unattainable, and two to five more generations of hardware improvements are needed.” This implies that AGI will depend on the evolution of NVIDIA’s GPU technology over the next 5–10 years.

• Current Trends: As of 2024, major US cloud providers are already pouring unprecedented amounts into AI infrastructure—Q3 spending among US hyperscalers surged by 82% year-over-year, with accelerator servers now dominating overall server investments. In an AGI scenario, these investments could increase even further, potentially requiring the simultaneous construction and operation of multiple exascale computing centers for a single project.

Power Consumption and Physical Infrastructure Demands

• Massive Energy Needs: Running AGI-level computations continuously will inevitably consume enormous amounts of power. Even GPT-4 models require significant electricity for inference services; AGI is expected to demand supercomputer-level power consumption. One report even suggested that by 2030, AI and superintelligence might consume up to 20% of the US’s total electricity—a scale comparable to the energy usage of tens of millions of households.

• Infrastructure Challenges: AGI data centers will require more than just high GPU counts; they will also demand advanced power grid reinforcements and ultra-efficient designs. Future semiconductor processes (progressing from 3nm to 2nm to 1nm) will be critical for increasing energy efficiency, alongside advanced cooling systems such as immersion cooling and modular data center designs. NVIDIA is already introducing liquid-cooled GPU racks and proposing solutions to improve data center power efficiency (PUE).

• Facility Scale: The physical scale of an AGI data center could be enormous—accommodating hundreds of thousands of GPUs might require server farms spanning multiple soccer fields, complete with power substations on a comparable scale. For example, Tesla’s upcoming Dojo supercomputer, targeting exaflop performance, is expected to demand huge power and cooling resources, and AGI would require facilities even larger in scope.

• Summary: Realizing AGI will demand at least ten times the hardware and infrastructure investment of today’s GPT-4/5 models, posing new challenges for GPU suppliers like NVIDIA and manufacturers like TSMC. This transformation will also open up substantial investment opportunities across the semiconductor, power, and construction industries.
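To make the power figures above concrete, here is a sketch of the electrical footprint of a hypothetical AGI-scale cluster. The GPU count, per-GPU power draw, and PUE are all illustrative assumptions, not figures from the text:

```python
# Illustrative power footprint of a hypothetical AGI-scale GPU cluster.
# Every parameter here is an assumption chosen for illustration.
n_gpus = 1_000_000        # hypothetical cluster size
watts_per_gpu = 700       # ~H100 SXM TDP (assumption)
pue = 1.3                 # assumed power usage effectiveness (cooling overhead)

it_power_mw = n_gpus * watts_per_gpu / 1e6       # IT load in megawatts
facility_power_mw = it_power_mw * pue            # total facility draw
annual_twh = facility_power_mw * 1e6 * 8760 / 1e12  # watts x hours -> TWh/year

print(f"Facility power: {facility_power_mw:.0f} MW")
print(f"Annual energy:  {annual_twh:.1f} TWh")
```

Roughly 900 MW of continuous draw is on the order of a large power plant dedicated to a single cluster, which is why grid reinforcement and siting near generation capacity dominate AGI data center planning.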

Broadcom, SK Hynix, and Samsung Electronics: AI Semiconductor and Data Center Business Outlook

Broadcom: Growth in AI Accelerators and Networking

• Industry Leadership: Broadcom has long been a powerhouse in custom chips for data centers and is now emerging as a key player in the AI boom. Google’s TPU is a prime example—initially developed in collaboration with Broadcom in 2016, the continued support for the TPU series has driven Google-related revenue for Broadcom from $50 million in 2015 to $750 million in 2020.

• Diversified AI Solutions: Broadcom supplies customized AI ASICs to big tech companies like AWS and Microsoft, while also selling its networking chips (e.g., Ethernet switches, routers, NICs) for AI data centers.

• Robust Financials: In Q1 2024, Broadcom’s semiconductor revenue reached $7.39 billion, with the networking segment alone contributing $3.3 billion (a 46% YoY increase). AI ASIC and AI networking revenues hit $2.3 billion, accounting for 31% of semiconductor revenue—an explosive fourfold year-over-year growth.

• Segment Breakdown: Approximately 70% of Broadcom’s AI revenue comes from custom AI ASIC/SoC solutions (including DPUs), 20% from switch/router chips, and 10% from optical and interconnect chips. Each segment is forecasted to grow by over 133% in 2024.

• Outlook: Positioned as the hidden backbone of the AI era, Broadcom’s integrated AI chip and networking solutions are set to drive solid growth in both revenue and operating profit margins. Additionally, Broadcom’s Tomahawk series is expected to lead the next wave of data center switching market upgrades (800G to 1.6T).

SK Hynix: Leadership in HBM and Foundry Business

• Market Leadership: SK Hynix is one of the top beneficiaries of the AI boom, leading the high-bandwidth memory (HBM) market and posting record-breaking performance. In 2024, SK Hynix’s revenue reached 66.2 trillion won (approximately $46.3 billion), a 102% increase year-over-year, with an operating profit of 23.5 trillion won—equating to an operating margin of 35%.

• HBM Dominance: Fueled by strong AI DRAM demand, SK Hynix flipped from a loss in 2023 to a significant profit in 2024, with HBM revenue accounting for over 40% of its total DRAM sales. With HBM3 products powering NVIDIA’s H100 and a market share of 50%, demand is so intense that production capacity for 2024–2025 is already fully booked. The management expects HBM revenue to more than double as AI demand continues to rise.

• R&D Focus: SK Hynix is advancing its HBM roadmap from HBM3 to HBM3E and HBM4, while also strengthening packaging solutions in collaboration with TSMC. Although the company is also exploring its foundry business, its short-term strategy remains centered on solidifying HBM technology leadership.

• Strategic Outlook: With a leading roadmap that is reportedly 6–12 months ahead of competitors like Samsung, SK Hynix is set to ride the AI memory wave and re-establish its growth trajectory. A potential future expansion into ASIC and logic components could further elevate its status as a comprehensive semiconductor leader.

Samsung Electronics: Evaluating AI Semiconductor and Sub-2nm Process Technologies

• Dual Strategy: Samsung is preparing for the AI era on both memory and foundry fronts. In the memory segment, Samsung is accelerating the production of HBM3/3E for major customers like NVIDIA and has successfully developed a 12-stack HBM3E, targeting next-generation GPUs. Although its market share is slightly trailing SK Hynix (with an expected HBM market share of 42% in 2024), aggressive R&D into 16-stack HBM4 is aimed at regaining leadership.

• Foundry Ambitions: In the foundry space, Samsung introduced the world’s first 3nm GAA process to challenge TSMC; despite initial yield issues, recent process stabilization has led Samsung to announce a 2nm process roadmap aligned with TSMC’s timeline for second-half 2025 production. Samsung also plans to begin 2nm production at its Texas facility in 2026, expanding its global manufacturing footprint.

• Technology Comparisons: Samsung’s 2nm process is designed to maximize the advantages of GAA technology and aims to match TSMC’s performance and power efficiency. There is speculation that Qualcomm’s next-generation mobile APs may be exclusively produced on Samsung’s 2nm process, which would signal growing industry confidence in Samsung’s technological capabilities.

• Future Competitiveness: While Samsung is acknowledged to trail TSMC by roughly one to two years in process technology, its continuous process innovation, massive investments, and major customer wins could narrow the divide.

TSMC vs. Samsung Electronics: AGI Semiconductor Manufacturing Process Outlook (1nm vs. 2nm)

Optimal Processes for AGI Implementation

• Cutting-Edge Requirements: To build processors capable of powering AGI, semiconductor processes must achieve maximum integration and power efficiency. The roadmap indicates that the 2nm process (with gate-all-around transistors) is slated for mass production around 2025–2026, followed by 1.x nm processes (with sub-10Å gate lengths) emerging around 2027–2028.

• 1nm Process Concept: The “1nm process” is less a literal 1nm dimension and more a family of improved nodes (such as 1.4nm or 1.2nm) derived from the 2nm generation. For example, TSMC is preparing a 1.6nm-class node (branded A16) targeted for 2026, while Samsung is aiming for mass production of a 1.4nm node by 2027.

• Performance and Efficiency Gains: In chip design for AGI, maximizing computational throughput and minimizing power consumption are paramount. Transitioning from a 2nm to a 1nm-class process could significantly increase the number of transistors per unit area, enabling more parallel computation while reducing power consumption by 20–30% for the same performance. TSMC targets a 10–15% speed improvement and 25–30% power reduction when moving from 3nm to 2nm, and Samsung’s 2nm is expected to achieve similar gains. The 1nm nodes could offer an additional 20–30% efficiency boost, crucial for mitigating AGI’s massive power demands.

• Key Enabler for AGI: It is often said that “AGI realization = mature 1nm process,” underscoring that 1nm-class processes will be the key enabler for economically and efficiently powering AGI. However, due to high costs and yield challenges, early AGI systems might use 1nm processes regardless of cost, while commercial systems may favor the proven 2nm process. For instance, NVIDIA’s roadmap might evolve from Blackwell (TSMC 4nm) in 2024–2025 to its next architecture (TSMC 3nm or Samsung 3nm) by 2025–2026, and then progress to subsequent architectures (2nm or 1.x nm) by 2027, with AGI inference chips potentially designed on 1nm processes around 2030.
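The generation-over-generation power reductions quoted above compound multiplicatively. A quick sketch using the midpoints of the quoted ranges (27.5% for 3nm→2nm and 25% for 2nm→1nm-class, both taken as assumptions):

```python
# Compounded power reduction across two node transitions, using midpoints
# of the ranges quoted above (assumptions, not vendor commitments).
step_3nm_to_2nm = 0.275   # 25-30% power reduction, midpoint
step_2nm_to_1nm = 0.25    # 20-30% power reduction, midpoint

remaining = (1 - step_3nm_to_2nm) * (1 - step_2nm_to_1nm)
print(f"Total power reduction, 3nm -> 1nm-class: {1 - remaining:.1%}")
```

Under these midpoint assumptions, two node transitions cut power by roughly 46% for the same performance—meaningful, but far short of the order-of-magnitude gains AGI-scale workloads would need, which is why architecture and packaging innovations must carry the rest.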

1nm Process Development Timeline and Feasibility

• Ongoing Research: Research on sub-1nm nodes is already underway, with major foundries targeting mass production in the late 2020s. TSMC has recently indicated plans to produce 1nm-class chips with a transistor count of one trillion by 2030. Additionally, roadmaps from institutions like Belgium’s IMEC suggest research topics extending to 0.7nm or even 0.5nm by the mid-2030s.

• Technological Breakthroughs: Despite extreme technical challenges, breakthroughs using new materials (such as nanosheets or 2D materials) and novel process technologies (like high-NA EUV lithography) are expected, similar to the leap from 5nm to 3nm. Samsung and TSMC are both expected to extend their nanosheet GAA transistors (Samsung’s MBCFET among them) into the 1nm regime, or evolve to even more advanced structures such as stacked CFETs, to overcome scaling limits.

• Investment and Cost: Developing 1nm processes is projected to require investments in the hundreds of billions of dollars, with wafer costs in the tens of thousands of dollars and manufacturing costs running to thousands of dollars per chip. Nevertheless, both TSMC and Samsung have entered the 1nm race, with the Taiwanese government announcing that TSMC is preparing a 1nm fab in the Longtan Science Park near Hsinchu (targeting operation in 2027–2028).

• Additional Innovations: In the 1nm era, additional packaging innovations such as chiplet-based 3D packaging may also be employed to overcome process limitations, enabling AGI processors to be implemented as interconnected modules.

Can Samsung Catch Up to TSMC?

• Historical Gap: Historically, the advanced-process gap between Samsung and TSMC has been around one to two years. Samsung’s early jump to a 3nm GAA process has somewhat narrowed the gap, though TSMC still appears to hold a slight edge.

• Recent Shifts: During 2022–2023, due to yield issues in Samsung’s 4nm/3nm processes, major orders from Qualcomm and NVIDIA largely went to TSMC, which reliably supplied 3nm chips for the iPhone 16 Pro.

• 2nm Prospects: Looking ahead to the 2nm generation, conditions may change. With Qualcomm likely to produce its 2nm Snapdragon chips on Samsung’s foundry between 2025 and 2026—exploiting capacity constraints and high pricing in TSMC’s 2nm production—Samsung could secure both revenue and technical know-how that might give momentum to its 1.4nm node.

• Government Support and Geography: Additionally, policies promoting domestic foundry capabilities in the US and EU create a favorable environment for Samsung. When comparing Samsung’s Texas fab to TSMC’s delayed Arizona fab, Samsung gains advantages in geographic diversification and government support.

• Competitive Dynamics: Technically, TSMC’s 2nm represents its first transition from FinFET to nanosheet GAA, while Samsung’s 2nm builds on its established GAA experience. Some analyses suggest TSMC’s 2nm may offer a 30% power improvement compared to a 25% improvement from Samsung’s 2nm, but these are initial targets and the final performance could be very competitive. In the 1nm generation, both companies will be entering uncharted territory, offering Samsung a potential opportunity to close the gap further.

• Market Implications: If Samsung successfully secures one or two major customers on its 2nm process and later expands its ecosystem with partners like Intel or Japanese companies at its 1.4nm node, TSMC’s near-monopoly could be weakened. Although TSMC maintains an excellent reputation for quality and timeliness, making it difficult to displace its market dominance, Samsung’s potential to narrow the technology gap could boost its foundry profitability and market capitalization while simultaneously challenging TSMC’s leadership.

• Long-Term Outlook: Ultimately, both companies are expected to successfully transition to the 1nm era for AGI, ensuring a dual-supplier system that benefits the overall stability of advanced AI chip production. While TSMC currently holds a slight advantage, Samsung’s capacity to catch up remains significant, and the performance and customer acquisition dynamics of the 2nm node over the next 2–3 years will be decisive.

SEO Keywords

GPT-5, GPT-6, Groq-4, Groq-5, NVIDIA, TSMC, AI semiconductors, market capitalization, data centers, HBM, 2nm, 1nm, AGI

Key Takeaways at a Glance

• Explosive AI Boom: With the GPT-5/6 era, NVIDIA and TSMC have seen explosive growth in data center AI chip revenue and market capitalization.

• NVIDIA’s Surge: NVIDIA’s market capitalization skyrocketed from roughly $1.2 trillion to over $3 trillion within a year on the back of high AI chip demand.

• TSMC’s Breakthrough: TSMC achieved a 58% increase in HPC revenue and a 90% stock price jump, leveraging its exclusive position in advanced AI chip manufacturing.

• AGI Challenges: Achieving AGI will require over ten times the current GPU capacity and ultra-large data centers with robust power infrastructure.

• Industry Players: Broadcom, SK Hynix, and Samsung Electronics are emerging as key players, with significant revenue and profit growth driven by AI demand.

• Advanced Process Competition: The race between TSMC and Samsung in 2nm and 1nm processes will be crucial, with market implications extending to 2030 and beyond.

Overall, the AI semiconductor market is poised for sustained annual growth of 30–40% or more, and investors should pay close attention to the companies supplying AI infrastructure as they continue to lead market performance and innovation.
