The bad news is that the memory and storage industry has entered a down cycle: inventories have soared, prices have fallen sharply, and Korean semiconductor makers have taken heavy losses as a result, posting deficits in their recent quarterly earnings reports.
The good news is that even in the down cycle, demand driven by NVIDIA’s AI accelerator cards has made HBM the only memory product to achieve significant counter-cyclical growth, and South Korean vendors hold about 90% of that market, which has to some extent offset the losses in their traditional memory business.
To support the development of HBM, the Korean government recently designated HBM a national strategic technology, a status that will bring tax incentives to HBM suppliers such as Samsung Electronics and SK Hynix. The decision is part of the draft enforcement decree amending Korea’s tax law, which grants higher tax deductions for national strategic technologies than for general R&D activities: small and medium-sized enterprises can receive deductions of 40% to 50%, and large enterprises 30% to 40%.
HBM remains hot in 2024. NVIDIA’s H100 and H200 are still the most sought-after GPUs on the market, and their demand for HBM keeps climbing. For Samsung and SK Hynix, with policy support on top of a booming market, continued expansion of HBM production looks like a done deal.
Expansion, then more expansion
First, the HBM leader: SK Hynix.
SK Hynix’s third-quarter results last year were the brightest among memory vendors, and on January 25 it released its latest results for Q4 2023 and the full year 2023:
Q4 revenue rose 47% year-on-year to KRW 11.3055 trillion, above analysts’ expectations of KRW 10.4 trillion. Gross profit was KRW 2.23 trillion, a year-on-year jump of 9,404%, for a gross margin of 20% and a third consecutive quarterly rebound. Operating profit was KRW 346 billion (roughly RMB 1.854 billion), beating analysts’ expectations of a KRW 169.91 billion loss, for an operating margin of 3%. The net loss was KRW 1.3795 trillion (about RMB 7.394 billion), wider than the KRW 0.41 trillion loss analysts had expected but much narrower than the previous quarter’s, for a net loss rate of 12%. EBITDA (earnings before interest, taxes, depreciation, and amortization) was KRW 3.58 trillion, a year-on-year increase of 99%.
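For readers who want to check the ratios, here is a minimal sanity-check sketch in Python, using the figures quoted above (in trillions of KRW; the whole-percentage rounding is the report’s):

```python
# Back-of-the-envelope check of SK Hynix's Q4 2023 margins as quoted above.
revenue = 11.3055          # trillions of KRW
gross_profit = 2.23
operating_profit = 0.346
net_loss = 1.3795

print(f"gross margin:     {gross_profit / revenue:.1%}")      # ~20%
print(f"operating margin: {operating_profit / revenue:.1%}")  # ~3%
print(f"net loss rate:    {net_loss / revenue:.1%}")          # ~12%
```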
In its earnings report, SK Hynix noted that this performance was driven mainly by strong fourth-quarter sales of flagship products such as its HBM3 AI memory and high-capacity mobile DRAM, whose revenues grew more than fourfold and more than fivefold year-on-year respectively, while rising demand from AI servers and mobile applications improved overall memory market conditions in the final quarter of 2023.
HBM, the brightest spot in the report, was also singled out in a statement: SK Hynix plans to increase capital spending in 2024 and focus production on high-end memory such as HBM, more than doubling HBM capacity compared with last year. The company had earlier projected HBM shipments of 100 million units per year by 2030, and it has decided to set aside about KRW 10 trillion (about US$7.6 billion) for facility capital expenditures in 2024, a 43% to 67% increase over the KRW 6-7 trillion projected for facility investment in 2023.
The focus of the expansion is new and enlarged plants. Last June, a Korean media report said SK Hynix was preparing to invest in back-end process equipment and would expand the Icheon plant that packages HBM3, with the plant’s back-end process equipment expected to nearly double in scale by the end of the year.
In addition, SK Hynix will build a state-of-the-art manufacturing plant in Indiana in the United States. According to two sources interviewed by the Financial Times, the South Korean chipmaker will produce HBM stacks there for use in NVIDIA GPUs made by TSMC; the chairman of SK Group put the expected cost at US$22 billion.
By contrast, Samsung has been somewhat passive on HBM, entering a transition period after it expanded supply of fourth-generation HBM (HBM3) starting in the fourth quarter of last year.
On January 31, on its fourth-quarter and annual earnings call, Samsung Electronics said it expects the memory business to return to normal in the first quarter of this year. “We plan to actively respond to demand for HBM servers and SSDs related to generative AI, with a focus on improving profitability,” said Kim Jae-joon, vice president of Samsung Electronics’ memory business unit. “The memory business is expected to return to profitability in the first quarter of this year.”
The key to returning the memory business to profitability lies in high-value products such as HBM and server memory. Notably, Samsung’s HBM sales in the fourth quarter of last year increased 3.5 times year-on-year, and Samsung Electronics plans to focus the capabilities of its entire semiconductor division, including its foundry and system LSI business units, to provide customized HBMs to meet customer demand.
A Samsung representative commented, “HBM bit sales are breaking records every quarter. In the fourth quarter of last year, sales were up more than 40 percent sequentially and more than 3.5 times year-over-year. In Q4 especially, we won major GPU manufacturers as customers.” The representative further predicted, “We have already provided customers with samples of the next-generation HBM3E in 8-layer stacks and plan to start mass production in the first half of this year. By the second half of the year, HBM3E’s share of our HBM sales is expected to reach about 90 percent.”
For his part, Han Jin-man, executive vice president in charge of Samsung’s U.S. semiconductor business, said in January that the company has high hopes for high-capacity memory chips, including the HBM series, to lead the fast-growing AI chip segment. “Our HBM chip production this year will be 2.5 times that of last year, and it will double again next year,” he told reporters at a press conference at CES 2024.
Samsung officials also revealed that the company plans to contest the HBM market in 2024 by raising its maximum HBM output to 150,000-170,000 units per month by the fourth quarter of this year. Samsung Electronics earlier spent KRW 10.5 billion to acquire Samsung Display’s plant and equipment in Cheonan, South Korea, to expand HBM capacity, and it also plans to invest KRW 700 billion to 1 trillion in new packaging lines.
HBM4 scramble
In addition to expanding production, the two are also gearing up for the next-generation HBM standard.
HBM3E samples were delivered at the end of last year, with validation and mass production expected to be completed in the first quarter of this year. More attention, though, is focused on HBM4: its stack will grow from the existing 12 layers to 16 layers, and it may adopt a 2048-bit memory interface. The HBM4 standard has not yet been finalized, however, and the two South Korean vendors have each proposed their own route to it.
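To put the doubled interface in perspective, here is a minimal sketch of the per-stack bandwidth arithmetic; the 9 GT/s pin rate is purely an illustrative assumption, since HBM4 speeds had not been fixed at the time of writing:

```python
# Peak per-stack bandwidth = interface width (bits) * per-pin rate (GT/s) / 8.
def stack_bandwidth_gb_s(bus_width_bits: int, pin_rate_gtps: float) -> float:
    """Peak bandwidth of one HBM stack in GB/s."""
    return bus_width_bits * pin_rate_gtps / 8

# Assumed 9 GT/s pin rate for illustration only (HBM4 was not finalized).
print(stack_bandwidth_gb_s(1024, 9.0))  # 1024-bit HBM3-class stack: 1152 GB/s
print(stack_bandwidth_gb_s(2048, 9.0))  # 2048-bit HBM4-class stack: 2304 GB/s
```

Holding the pin rate constant, doubling the interface width doubles per-stack bandwidth, which is why the move to 2048 bits is the headline change.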
According to Business Korea, SK Hynix is preparing a “2.5D fan-out” package for its next-generation HBM technology, a move aimed at improving performance and reducing packaging costs. The technique has not previously been used in the memory industry but is common in advanced semiconductor manufacturing, and it is seen as having the potential to “revolutionize the semiconductor and foundry industry.” SK Hynix plans to announce research results using this packaging method as early as next year.
Specifically, the 2.5D fan-out packaging approach places two DRAM dies side by side horizontally and assembles them into a structure similar to that of an ordinary chip. With no substrate underneath, the resulting package is much thinner when installed in IT equipment, and because it bypasses the through-silicon via (TSV) process, it offers more input/output (I/O) options and reduces costs.
Whereas current HBM stacks are placed next to the GPU and connected to it, SK Hynix’s new goal is to eliminate the interposer altogether and place HBM4 directly on GPUs from the likes of NVIDIA and AMD, with TSMC as the preferred foundry.
According to the plan, SK Hynix will mass-produce sixth-generation HBM (HBM4) as early as 2026. In addition, Hynix is actively researching “hybrid bonding” technology, which is likely to be applied to HBM4 products.
Samsung, for its part, is taking the opposite route from Hynix: it has been researching the use of photonics in the HBM interposer layer, with the aim of addressing challenges related to heat and transistor density.
The lead engineer of Samsung’s advanced packaging team shared his insights at the OCP Global Summit in October 2023. He said the industry has made significant progress in integrating photonics with HBM through two main approaches. The first places a photonic interposer between the bottom packaging layer and the top packaging layer containing the GPU and HBM, acting as a communication layer; however, this approach is costly, as it requires an interposer plus photonic I/O for both the logic die and the HBM.
The second approach separates the HBM memory module from the package and connects it directly to the processor using photonic links. Rather than wrestling with the complexity of packaging, decoupling the HBM module from the logic IC simplifies manufacturing and packaging costs for both, eliminates the need for internal digital-to-optical conversion circuitry, and leaves heat dissipation as the main remaining concern.
In a blog post, a Samsung executive said the company aims to launch sixth-generation HBM (HBM4) in 2025, featuring non-conductive film (NCF) assembly technology optimized for high-temperature thermal characteristics and hybrid bonding (HCB), in a bid for dominance in the fast-growing and fiercely contested AI chip space.
As is clear, the two Korean vendors are locked in a fierce contest over the next-generation HBM standard.
Micron, sneak attack?
Micron is in a markedly weaker position than the two Korean vendors above: it expects its HBM market share for 2023 to be about 5%, placing it third.
To close the gap, Micron is betting big on its next-generation product, HBM3E. Micron CEO Sanjay Mehrotra said, “We are in the final stages of validating HBM3E for NVIDIA’s next-generation AI accelerators.” The company plans to begin shipping HBM3E memory in high volume in early 2024, and it emphasized that the new product has drawn strong interest from across the industry, suggesting NVIDIA may not be the only customer that ends up using Micron’s HBM3E.
In terms of specifications, Micron’s 24 GB HBM3E module is based on eight stacked 24 Gbit memory dies manufactured on the company’s 1-beta (1β) process. It delivers a data rate of up to 9.2 GT/s and peak bandwidth of 1.2 TB/s per stack, a 44 percent increase over the fastest existing HBM3 modules.
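A quick sketch of where those figures come from (the 6.4 GT/s HBM3 baseline is our assumption for the 44% comparison, not a number Micron states):

```python
# Capacity: eight 24 Gbit dies per stack, 8 bits per byte.
capacity_gb = 8 * 24 / 8        # = 24 GB

# Bandwidth: 1024-bit HBM interface at 9.2 GT/s.
bw_hbm3e = 1024 * 9.2 / 8       # = 1177.6 GB/s, i.e. ~1.2 TB/s

# Assumed baseline: the fastest existing HBM3 at 6.4 GT/s.
bw_hbm3 = 1024 * 6.4 / 8        # = 819.2 GB/s
print(capacity_gb, bw_hbm3e, f"{bw_hbm3e / bw_hbm3 - 1:.0%}")  # 24.0 1177.6 44%
```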
As for its future roadmap, Micron has disclosed a next-generation HBM memory tentatively called HBM Next, which it expects to offer 36 GB and 64 GB capacities in multiple configurations, such as a 12-Hi stack of 24 Gb dies (36 GB) or a 16-Hi stack of 32 Gb dies (64 GB), with per-stack bandwidth of 1.5 TB/s to 2+ TB/s, implying a total data rate of more than 11.5 GT/s per pin.
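The same arithmetic checks those configurations, assuming the interface stays 1024 bits wide (our assumption; the text does not state the width):

```python
# Stack capacity = layers * die density (Gbit) / 8 bits per byte.
for layers, die_gbit in [(12, 24), (16, 32)]:
    print(f"{layers}-Hi x {die_gbit} Gb -> {layers * die_gbit // 8} GB")
# 12-Hi x 24 Gb -> 36 GB; 16-Hi x 32 Gb -> 64 GB

# Per-pin rate implied by a bandwidth target over an assumed 1024-bit bus.
for tb_s in (1.5, 2.0):
    print(f"{tb_s} TB/s -> {tb_s * 8000 / 1024:.1f} GT/s per pin")
# 1.5 TB/s -> 11.7 GT/s (consistent with '>11.5 GT/s'); 2.0 TB/s -> 15.6 GT/s
```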
Unlike Samsung and SK Hynix, Micron does not plan to integrate HBM and logic into a single chip, so the Korean and American memory vendors have clearly parted ways on next-generation HBM development. Micron may be telling AMD, Intel, and NVIDIA that while combined HBM-GPU chips can deliver faster memory access, relying on a single chip also means greater risk.
As machine-learning training models grow larger and training runs longer, the pressure to cut runtimes by speeding up memory access and increasing memory capacity per GPU will only intensify. According to U.S. media commentary, giving up the competitive supply advantage of standardized DRAM in exchange for a locked-in HBM-GPU combo chip design, albeit one with better speed and capacity, may not be the right way forward.
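A rough illustration of why per-GPU capacity matters here (the model sizes, 2-byte weights, and 80 GB of HBM per GPU are illustrative assumptions, and optimizer state and activations are ignored):

```python
import math

def min_gpus_for_weights(params_billions: float,
                         bytes_per_param: int = 2,   # fp16/bf16 weights
                         hbm_gb_per_gpu: int = 80) -> int:
    """Lower bound on GPUs needed just to hold the model weights in HBM."""
    weights_gb = params_billions * bytes_per_param   # 1e9 params * bytes = GB
    return math.ceil(weights_gb / hbm_gb_per_gpu)

print(min_gpus_for_weights(70))    # 140 GB of weights -> at least 2 GPUs
print(min_gpus_for_weights(175))   # 350 GB of weights -> at least 5 GPUs
```

Every extra gigabyte of HBM per package shrinks that lower bound, which is part of the pull behind both taller stacks and tighter HBM-GPU integration.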
Micron appears to be trying to steal a march on the yet-to-be-finalized HBM4 standard.
In conclusion
There is no doubt that HBM is an opportunity for every memory maker. As long as the AI boom does not fade and NVIDIA’s GPUs keep selling, the lucrative HBM business will keep selling too, and good financial results will follow.
The two Korean manufacturers are not only competing head-to-head in the market, each expanding production furiously, but have also opened a contest over technology routes to win the right to define the next-generation standard, so we may well see more HBM moves from Samsung and SK Hynix this year.
Micron, after its earlier failed bet (it backed the rival Hybrid Memory Cube before pivoting to HBM), is once again investing heavily in HBM. Compared with the Korean vendors, Micron has the advantage of its closeness to NVIDIA, and given its accumulated technology, it may yet become their biggest competitor.
For now, though, more than 90% of HBM capacity is already in the pockets of SK Hynix and Samsung, and a Korean civil war has become unavoidable.