By Sebastian Anthony on July 10, 2014 at 7:41 am
IBM has announced that it's plowing $3 billion into two R&D programs that will hopefully make it the authority on 7-nanometer-and-beyond chip technologies. One R&D project will look at pushing conventional silicon chips as far as they will go (around 7nm), and the other will be tasked with finding new materials and techniques that can take us even further (quantum computing, carbon nanotubes, graphene, III-V semiconductors). IBM also took the opportunity to remind everyone that it's already the biggest player in 7nm-and-beyond technology, with over 500 applicable patents (more than double its nearest competitor).

While most major chip makers (Intel, TSMC, GloFo, IBM) seem confident that they can take standard silicon CMOS chips down to 10nm, they are a little nervous about the prospect of 7nm and beyond. At around 7nm, the current building blocks of silicon transistors just won't behave in the same way; when the gate is just a few atoms across, classical physics goes out the window and quantum physics (which behaves rather differently) takes over. While different transistor designs (such as 3D FinFETs) allow us to take silicon a little further, the laws of physics will eventually catch up.
To reach 7nm and beyond, IBM Research is taking a pincer approach. First, it will take serious effort to actually get silicon down to 7nm — and more importantly, to develop processes that can make 14nm, 10nm, and 7nm chips economically. As we covered last month, making silicon transistors smaller isn’t inherently all that difficult — but doing it without breaking the bank on expensive equipment is. IBM wants to develop new tools and techniques that will help silicon scale down to 7nm, and potentially beyond.
The second approach is potentially more exciting, at least as far as ExtremeTech is concerned. Rather than pushing silicon indefinitely, this second research project will look at other materials and techniques that might more easily take us to 7nm and beyond. Materials such as III-V semiconductors (notably gallium arsenide, GaAs) have around 10 times the electron mobility of silicon, allowing for smaller transistors with much higher performance and lower power consumption. Likewise, IBM is looking into graphene and carbon nanotubes, both of which have incredibly high electron mobility and can (theoretically) be fashioned into very small structures. IBM has already created a carbon nanotube transistor with a 10nm channel that showed no sign of performance degradation due to its diminutive size (a CNT is just a single sheet of carbon/graphene rolled up into a tube, and is thus very, very small).
But beyond better materials and the ever-shrinking transistor, IBM is also looking into entirely different methods of computation, such as neuromorphic computing (brain-like chips), quantum computing, and silicon photonics/optoelectronics (optical tech built into electronic chips). These methods won't necessarily provide more gigahertz or consume less power, but they could offer wildly more capable computers that can process much more data than a conventional computer in the same amount of time.

Finally, it's important to note that IBM is already working on all of these technologies, and in many cases has been for years. The announcement of an additional $3 billion in expenditure is exciting, but it's pennies in the grand scheme of computing R&D. IBM already spends roughly $6 billion per year on R&D, and companies like Intel, Samsung, HP, Microsoft, and Google are all around that mark. I would be surprised if IBM wasn't already spending billions per year on the materials and techniques outlined in this story. Ultimately, this announcement feels like an exercise in marketing to reassure both customers and shareholders that IBM still very much wants to be a leader in bleeding-edge computing.