FPGA makers reported to be actively developing 3D IC architectures
Xilinx declined to comment, but a half-dozen independent industry sources familiar with its efforts have confirmed the 3D development is well under way. Rich Kapusta, Actel’s vice president of marketing, applications and business development confirmed his company has been approached by SoC makers to use the company’s non-volatile flash-based FPGA as a layer in their 3D SoCs. He declined to comment further.
Getting this kind of work done in 3D chips is anything but guaranteed. It’s complicated and there are lots of pitfalls, such as accessing RAM or logic across multiple die. Nevertheless, the implications of these developments are enormous. Because of the very regular and controlled structure of an FPGA, it is extremely well suited to defining where components can be placed on a chip. That makes it much easier to predict hot spots caused by putting two or more chips together, a problem that becomes particularly thorny when chip layers are developed by multiple vendors, none of which knows the thermal characteristics and layout of the other components.
3D stacking makes it far easier to bump up performance at advanced nodes: shorter wires have less capacitance and resistance to drive, so the same performance can be achieved at lower power. But getting this accomplished with SoCs has been particularly difficult. As a result, sources say the need for FPGA prototypes may change FPGAs into the end game rather than an in-between step.
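The wire-length argument can be made concrete with a back-of-the-envelope dynamic-power estimate. All of the numbers below (capacitance per mm, supply voltage, clock rate, activity factor, route lengths) are illustrative assumptions for the sketch, not vendor figures:

```python
# Back-of-envelope: the dynamic power to drive a wire scales with its
# capacitance, and wire capacitance scales roughly with wire length.
# P_dyn = alpha * C * V^2 * f   (alpha = switching activity factor)

C_PER_MM = 0.2e-12   # ~0.2 pF/mm of on-chip wire (illustrative assumption)
V = 0.9              # supply voltage in volts (assumed)
F = 1e9              # 1 GHz clock (assumed)
ALPHA = 0.15         # switching activity factor (assumed)

def wire_power(length_mm):
    """Dynamic power (watts) needed to drive a wire of the given length."""
    c = C_PER_MM * length_mm
    return ALPHA * c * V**2 * F

p_2d = wire_power(10.0)  # a long cross-die route on a large 2D SoC
p_3d = wire_power(1.0)   # the same connection folded vertically through a stack

# Shorter wire -> proportionally less capacitance -> proportionally less power.
print(f"2D route: {p_2d*1e3:.3f} mW, 3D route: {p_3d*1e3:.3f} mW")
```

The model ignores repeaters, drivers and via parasitics; the point is only the first-order proportionality between route length and switching power that makes stacking attractive.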
Moreover, both moves are expected to open huge markets, finally, for advanced EDA tools that can work on complex FPGA designs, as well as for third-party IP, processor cores from companies like ARM, MIPS and Virage Logic, and interconnect fabrics such as networks on chip. They also could open up 3D to mainstream development. While companies such as IBM, Freescale, Qualcomm and Texas Instruments have been working on 3D chips for years—IBM started its R&D in this area almost a decade ago—most of that work has been closely held because it is considered a competitive advantage for performance and power. FPGAs could quickly turn 3D into a less expensive option, one with more overhead than bottom-to-top 3D ASIC designs but far less than 2D ASICs.
Issues in 3D
FPGAs can solve one of the biggest problems in 3D stacking: the lack of standards for placement of components. Without those standardized approaches there will likely be some ugly finger-pointing when two chips are put together.
“One of the problems that we see coming is who’s going to pay for a bad part,” said Andrew Yang, chairman and CEO of Apache Design Systems. “Testing may show that memory and logic are all good and that the die works, but when you put it together with another chip it may turn into a bad part. So you can say it’s good, and all your testing and verification may show that it is, but when it doesn’t work who pays?”
Yang said there is a need for far more analysis of the stacked die, measuring everything from heat and power to electrostatic discharge and signal integrity.
“We also need to understand what are the killer applications and what applications are not good for 3D,” he said. “The compelling value of 3D is shorter distance, which is the TSV promise. The challenge is in coupling chips together. In 2D you could shield high-speed signal transmissions. You get a cross-coupling effect with a TSV, so there is promise but there are also challenges.”
One of the big draws for 3D in general is the ability to re-use IP, which may come in the form of entire chips. That doesn’t work too well, however, when those chips were created for the best utilization of real estate on a 2D structure, where heat dissipation is relatively simple. In 3D, putting chips together can sandwich heat between die with no way to get it out of the chip.
“When you stack die you concentrate the heat,” said Carey Robertson, product marketing director for Calibre Design Solutions at Mentor Graphics. “That affects chip reliability, either short-term or long-term because they’re operating at temperatures they’re not expected to operate at. Circuits perform differently at 100C or 125C or 130C. At 130C it may affect the core, the timing, the signal integrity.”
While the overall heat of a chip hasn’t changed much, the more tightly everything is packed together the more difficult it is to cool. “When you stack them, you concentrate that heat even more,” Robertson said. “Potentially, when you move the wires closer together you can reduce resistance and IR drop. There would be a decrease in power and heat, but we have not seen enough of that yet to draw that conclusion.”
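Robertson’s point about circuits behaving differently at 100C versus 130C can be illustrated with a first-order model: carrier mobility degrades with absolute temperature, so drive current falls and gate delay grows. The T^1.5 exponent and reference temperature below are textbook approximations rather than foundry data, and modern low-voltage nodes can even show the opposite trend (temperature inversion), so this is purely illustrative:

```python
# First-order sketch of why timing shifts with die temperature:
# carrier mobility degrades roughly as T^-1.5 (T in kelvin), so gate
# delay -- inversely proportional to drive current -- grows as T^1.5.
# Exponent and baseline are textbook approximations, not foundry data.

def delay_scaling(temp_c, ref_c=25.0, exponent=1.5):
    """Relative gate delay at temp_c versus a reference temperature."""
    t = temp_c + 273.15
    t_ref = ref_c + 273.15
    return (t / t_ref) ** exponent

for temp in (100, 125, 130):
    print(f"{temp}C: delay x{delay_scaling(temp):.2f} vs 25C")
```

Even this crude model shows tens of percent of timing margin eroding between room temperature and a hot spot trapped inside a stack, which is why the heat concentrated between stacked die matters for more than just reliability.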
Under the covers, there are two technical ways to make this all possible, according to an ARM insider. “The first is for TSVs at similar pitch to solder bumps (about 50µm). This expands the capability of FPGAs and creates what amounts to multi-FPGA chips, as well as allowing for better-integrated flash, DRAM, and high-performance logic. The limited inter-chip bandwidth and power delivery, along with thermal issues, keep this as more of a cost dynamic – an extension to existing SiP approaches,” said the source. “The second answer is for high-density future TSVs, at a pitch of less than 5µm. These increase inter-chip bandwidth by a factor of 100 over the first solution and allow for some game-changing capability, including wide word high-speed off-chip memory access, combined FPGA/logic solutions, multi-die FPGA (greatly increased gate count) and so on. The reconfigurable aspect of FPGAs may also help solve the test and fault tolerance issues that are a very significant impediment to making tight pitch TSVs viable. Neither of these eliminates the crossover argument on power and performance, but they both have the potential to move it.”
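The “factor of 100” bandwidth claim follows from the two pitch figures alone: vertical connections per unit area scale as the inverse square of the pitch, so a 10x finer pitch yields 100x the connections and, at a fixed per-connection data rate, roughly 100x the raw inter-chip bandwidth. A quick arithmetic check, using the quoted pitch values purely as a ratio:

```python
# TSV count over a fixed footprint scales as 1/pitch^2: a grid with
# half the pitch fits twice as many connections per row AND per column.
# So the bandwidth gain between the two pitch classes is (coarse/fine)^2.

coarse_pitch = 50.0  # bump-pitch-class TSVs (value from the quote)
fine_pitch = 5.0     # high-density future TSVs (value from the quote)

bandwidth_gain = (coarse_pitch / fine_pitch) ** 2
print(bandwidth_gain)  # 100.0
```

Note this counts connections only; it assumes each fine-pitch TSV can be driven at the same data rate as a coarse one, which is the optimistic end of the claim.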
Programming the future
Whether this effort ultimately succeeds is anyone’s guess. What is known is that a lot of resources are being marshaled into 3D stacking, and a lot of hopes are being pinned on efforts such as those from Xilinx and Actel’s partners.
Tom Quan, deputy director of design methodology at TSMC, said the great advantage of FPGAs is that they are very regular. “You can predict the thermal profile much better than with a mixed-signal SoC. Analog can be all over the map. But while the base array may be regular, in another corner of the chip you might have a USB so the outside of the chip might be hotter than the inside.”
Still, there was a lot of hype behind multi-chip modules in the 1990s, and so far that approach has failed to catch on as a popular solution, largely because of cost. That could change as double patterning becomes the norm at 22/20nm and standard production costs rise, but visibility remains limited at that node.
At the very least, the moves by FPGA players are worth tracking, and a lot of companies are predicting major changes if these scenarios work. There are reasons FPGAs may hold more promise than multi-vendor or multi-generational SoCs. But there are still a lot of challenges to resolve before the total cost of development is known.