Controlling Variability And Cost At 3nm And Beyond. (via Qpute.com)


Richard Gottscho, executive vice president and CTO of Lam Research, sat down with Semiconductor Engineering to talk about how to utilize more data from sensors in manufacturing equipment, the migration to new process nodes, and advancements in ALE (atomic layer etching) and materials that could have a big impact on controlling costs. What follows are excerpts of that conversation.

SE: As more sensors are added into semiconductor manufacturing equipment, what can be done with the data generated by those sensors?

Gottscho: You can imagine doing a lot of compensation in different process control schemes, particularly to get around the variability problem. The industry is just at the beginning stages of that. If you look at the finFET, the three-dimensional nature of that device has challenged us and the industry to come up with robust solutions for high-volume manufacturing. You have to worry about residues in little corners, the selectivity of etching one material versus another, the conformality of the depositions. Everything has become more complicated. And when you start the next generation of gate-all-around at 3nm and below, that’s another order of magnitude in complexity. At first, it looks like a modification of a finFET. But the requirements are getting tightened, and the complexity of that gate-all-around architecture is significantly greater than the finFET. It’s a more complex device than we’ve ever seen, and we keep saying that node after node. Yet we as an industry keep moving forward. Along with that are so many sources of variability, and all of them will matter.

SE: Can you discern variation between different pieces of equipment and within different chambers in the same equipment?

Gottscho: Maybe. The reason the outputs are not the same may be because the integrations are not the same, and if you want to make them the same then the customers have to run the same processes. As soon as they deviate in their choice of materials, the thickness of a film, the critical dimension, or the sequence of operations, you’ll end up with different cost equations. That’s why some customers are more cost-competitive than others.

SE: Does data at this level allow you to say that you don’t need to move the critical dimension forward as fast as if you were doing it a different way?

Gottscho: We’re not there yet, but that’s an ambition. We don’t have that level of maturity in terms of the data mining or the applications.

SE: Some of this is additive, too. One thing may not be a problem by itself, but in conjunction with other sources of variability it may get worse. How does data help with that?

Gottscho: It’s always been a collaborative process between Lam and our customers, and between our suppliers and Lam. We can’t create equipment and solutions in a vacuum. We really need to understand what the customer is trying to do, so they need to open up to us. And they need to understand the tool’s capabilities, so we need to open up to them. This is where data hoarding becomes a problem sometimes. If you look at chamber matching, that has always been a big challenge because the tolerances keep shrinking. So chamber matching solutions from the past don’t work in the future. The complexity has always been there, but because the tolerances were greater you weren’t as sensitive to the interactions between one process and another. There’s an upstream lithography process that has variability in it. There’s an upstream dep process that has variability in it. All of that impacts what comes out of one etch chamber versus another etch chamber. And then, what about your metrology? Every one of these steps that is necessary to define a result that you want to match to a fleet, or another chamber in that fleet, is actually a composite of all the variation upstream, including the measurement. So now, what does it mean to match the etch result? How do you do that without taking into account everything upstream? And in order to do that, we need to collaborate with upstream suppliers and with our customer. That’s all about data from disparate sources in different formats. Putting that together into an algorithm that allows you to break this into its constituent pieces and feed back to each piece appropriately is a very challenging problem.
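The composite-variability point can be put into a back-of-the-envelope sketch. All of the sigmas below are invented for illustration, not Lam or customer data: for independent sources, variances add, so the spread observed at etch is the root-sum-square of every upstream contribution plus metrology noise.

```python
import math

# Toy illustration of composite variability (all sigmas are assumed,
# illustrative values in nm -- not real process data). For independent
# sources, variances add, so the spread you observe after etch mixes
# every upstream step with the measurement itself.
litho_sigma = 0.30   # upstream lithography variation
dep_sigma   = 0.20   # upstream deposition variation
etch_sigma  = 0.15   # the etch chamber's own contribution
metro_sigma = 0.10   # metrology noise

composite = math.sqrt(litho_sigma**2 + dep_sigma**2
                      + etch_sigma**2 + metro_sigma**2)

# The composite spread is dominated by upstream steps: matching chambers
# on this number alone would misattribute upstream variation to etch.
print(f"composite sigma: {composite:.3f} nm vs etch-only {etch_sigma:.3f} nm")
```

On these assumed numbers, the composite spread is more than twice the etch chamber's own contribution, which is why matching "the etch result" without decomposing it is so misleading.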

SE: In addition to all of that, the whole industry has been focused on multiple sources for everything. Does that make it harder with variation and tighter tolerances?

Gottscho: That’s always been an issue. Nobody expects you can run the same process flow through Foundry A and Foundry B and get the same result. There’s going to be a mismatch. The same is true with the tools. What’s been happening for a long time is our customers will commit to running certain applications on our tools, and they won’t mix our tools with someone else’s. They may split layers, so if you think of the backend of the line, maybe they give Lam applications for the first four levels of metallization, and somebody else applications for the next four levels. But they typically won’t mix suppliers on the same levels. You can’t perfectly match a result for even nominally identical chambers. And now, if you take a process chamber from one company and compare it to a process chamber from another company and then expect them to match at the atomic level, that’s hoping against hope.

SE: We’re heading in that direction with the foundry business, right?

Gottscho: Not at the leading edge.

SE: But there are fewer companies moving down to the leading edge.

Gottscho: At first, yes. But there will be others. In 10 years or 15 years, they may be doing gate-all-around. The trailing edge will satisfy industrial requirements for a long, long time. But the other guys eventually will follow along. It’s ironic that we talk about 28nm as trailing-edge nodes. Those were really hard devices to make. It’s just about timing. Things will mature, the costs will come down, and yields will go up. And then perhaps others will get into that business.

SE: As we head into single-digit nanometer processes, the dielectrics are thinner, the tolerances are tighter, so when you deposit films they have to be that much more exact. What does that mean for Lam?

Gottscho: First of all, you need atomic-level control. Within die, tolerances are within fractions of a nanometer. That tolerance has to be held within the die, despite different geometries and dimensions within that die. You may have lines that are closer together and other lines that are farther apart. That’s always been a challenge. It’s why deposition and etching occur at different rates with different profiles when lines are close together instead of far apart. And then you have to scale that all the way across the wafer. The edge always has been a problem for fundamental reasons. Where the wafer stops, there is a discontinuity electrically and chemically that creates a non-uniformity. And the non-uniformities are different for every species in the process chamber, so it’s difficult to compensate out those things. But you’re trying to hold 7nm dimensional control all the way out to less than 2mm from that wafer edge, and those edge effects extend a centimeter or more in from the edge of the wafer. All of that has gotten more challenging.

SE: Is the distance to the edge becoming more problematic as process geometries shrink?

Gottscho: No, the scaling is pretty much the same. It depends on the exact process conditions and process chamber configuration, but they’re on the order of centimeters as opposed to millimeters. Some effects are in the millimeter range, but generally, you’ll see non-uniformities start to kick in 10mm or 20mm from the edge of the wafer. That’s been pretty much a constant story. What’s changing is the tolerance that’s allowed over that last 20mm. It’s sub-nanometer now. That’s inherent in any chemical process chamber where you have this discontinuity created by finite wafer size. Nobody is willing to process 12-inch wafers on a 14-inch substrate. So what it means for us is that we had to be innovative and change our hardware in order to meet those tolerances within die. That’s the origin of atomic-layer etching and atomic-layer deposition. By definition, you’re removing one layer at a time, depositing one layer at a time, and you do it under conditions where it is insensitive to the fact that the lines are closer together or farther apart. If you take one step back from that, it’s all enabled by time-dependent processing, including pulsed power, pulsed gases, cyclical processes. That enables us to solve those problems and meet the customer requirements. The downside is the cost, because when you do a cyclical process—and people have played around with space versus time—there are challenges with that approach. In essence, you’re losing throughput because you’re cycling things back and forth. There are strategies to get that throughput back, including smaller process chamber volumes, and more precise control so that you can switch faster and faster with shorter and shorter cycles.
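The throughput cost of cyclical processing, and how faster switching claws it back, can be sketched numerically. All removal rates, times, and chamber parameters below are invented for illustration, not actual Lam process figures:

```python
# Illustrative sketch (assumed numbers, not real process data): how
# per-cycle switching overhead in a time-multiplexed process such as
# ALE erodes throughput, and how faster gas/power switching -- e.g.
# via smaller chamber volumes -- recovers it.

def wafers_per_hour(etch_per_cycle_nm, target_nm,
                    etch_time_s, overhead_s, handling_s=60.0):
    """Throughput of a cyclical etch: each cycle removes a fixed
    increment and pays a switching overhead (purge, stabilize)."""
    cycles = target_nm / etch_per_cycle_nm
    process_s = cycles * (etch_time_s + overhead_s)
    return 3600.0 / (process_s + handling_s)

# Continuous etch baseline: effectively one "cycle", no switching.
baseline = wafers_per_hour(30.0, 30.0, 120.0, 0.0)
# ALE with slow switching: 0.5 nm/cycle, 2 s etch, 3 s overhead/cycle.
slow_ale = wafers_per_hour(0.5, 30.0, 2.0, 3.0)
# Same chemistry, but faster switching cuts overhead to 0.5 s/cycle.
fast_ale = wafers_per_hour(0.5, 30.0, 2.0, 0.5)

print(f"baseline {baseline:.1f} wph, slow ALE {slow_ale:.1f} wph, "
      f"fast ALE {fast_ale:.1f} wph")
```

On these assumed numbers, cutting the per-cycle overhead from 3 s to 0.5 s moves the cyclical process from half the continuous throughput to within about 15% of it, which mirrors the gap-closing strategy described above.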

SE: The big problem with ALD and ALE always has been time spent using these technologies, right?

Gottscho: ALD is widely deployed today and it’s fast enough to be economical. But there’s a tradeoff here. If it can’t yield, it doesn’t matter how fast you run the wafers through the tool. Atomic layer etching processes are in production, as well, but they’re not as widely deployed yet as atomic layer deposition. Throughput is one of the reasons. And as long as you can find some other way to solve these problems without atomic layer etching, assuming it has a throughput impact, then you’re going to do that. As you get to gate-all-around, you have to do a lot more atomic layer etching, both anisotropically and isotropically. We’re going to have to find solutions to the throughput problems. I see that gap maybe not being closed completely, but it’s not going to be a factor of 10, as it is today. It’s going to be more like 2, and I see us closing that gap in the next couple years.

SE: Some of the chips being built today look a lot different than even a couple years ago, with different kinds of processors and memories. Does any of that change because of new processes and structures, or is it the same from a manufacturing side?

Gottscho: That’s not a first-order effect on our business. The memory chips always have had a much more regular structure. That makes it easier to fabricate them in some ways, although there is always the array area and the cell area that you’re trying to fabricate at the same time. That adds a lot of complexity because of that disparity of geometry. Logic has always been very challenging because of irregular patterns and many different kinds of dimensions within the die.

SE: How far down do you see processes going? We’ll hit 3nm, but will we hit 2nm and 1nm?

Gottscho: People have been predicting the end of scaling since 1 micron because lithography couldn’t print below that. That turned out to be nonsense. On the other hand, those new dimensions are really small. It’s not so much the physics. It’s the cost, and the variability problem is a big part of that. There is no doubt in my mind that 3nm will happen. Whether 1nm or 1.5nm will happen, I don’t know. But what’s happening in between is you’re seeing a change in compute architecture from von Neumann to neuromorphic, and you’re going to see more in-memory computing. We already have near-memory computing. Memory is going to become increasingly important. And so the nature of the devices and the kinds of solutions are going to change. Just as 3D NAND isn’t limited by lithography, future logic won’t be driven by scaling as it is today.

SE: There are a lot of options such as advanced packaging, and the future may involve considerations about data movement and storage and cost.

Gottscho: Yes, it’s about the system, not the device. It’s not even about the chip.

SE: And systems of systems interacting. So tolerances are now defined by the whole system rather than the chip, right?

Gottscho: Yes, but one of the attributes of neuromorphic computing is that it’s more fault-tolerant. You don’t need precise answers. The variability requirements may get relaxed in time. Designs could be more forgiving with respect to variability. There will be more redundancy, just like your brain. So how precise does your answer have to be? For a lot of applications, you don’t need precision. What you need is data throughput and bandwidth to crunch all the data.

SE: This is a fundamental shift. Instead of racing to the fastest processor with the fastest memory, it comes down to what are we trying to accomplish with this application.

Gottscho: That’s why you’re also seeing people doing their own version of an AI chip these days. You don’t have a general-purpose CPU. NVIDIA kicked this off with its GPU technology. Google has its own design. Apple is designing its own chips. That’s a boon for our business.

SE: They’re also bringing in younger engineers who don’t have preconceived notions about how to get this done or which tools to use.

Gottscho: Yes, and they’re also not inhibited by conventional wisdom saying that something is impossible.

SE: Do the improvements to equipment and data make devices more reliable than in the past? That’s what the automotive industry, for example, is looking for.

Gottscho: The promise is there. The reality has not yet caught up to that promise. But for sure, the potential is there. That’s perhaps more in the domain of our customers, who have to link together all of the processes when they’re making a chip. They are looking at sources of variability, sources of unreliability in a chip, and where do they come from not just in unit processes, but how unit processes are linked together and what signatures can they discern from all of the data. We typically don’t have access to reliability data unless there’s a specific problem we’re contributing to and helping to solve. In our world, it would be more about the reliability of our equipment and the reliance our customers can have on a reproducible result—how a given chamber and every chamber within a certain narrow distribution looks like every other chamber. And every wafer looks like every other wafer. Big data mining, AI, ML—all those techniques will disrupt that part of our business. As an industry, we’re not there yet, but we will get there.

SE: In the past, equipment makers focused most of their attention on the next node. In the future, does it go backward as well as forward?

Gottscho: We absolutely have that opportunity as an industry. Our customer service business group specializes in providing productivity and technology upgrades more focused on the trailing edge. Our product groups are focused on the leading edge and emerging applications such as new memory technologies. But a lot of those upgrades came out of leading-edge developments, and they have become available on the older equipment and add higher yield, higher reliability, and lower cost of ownership. The headwinds in that activity come from the capital cost equation. The 200mm fabs are fully depreciated. Now, when they start investing in capital equipment upgrades—and that’s just upgrading existing equipment, although they could do more by putting in leading-edge equipment—their whole cost equation gets blown out of the water. That’s going to change the pace at which it happens. But it will happen. And with 200mm capacity under strain, it’s possible that more 200mm fabs will be built. It’s also possible that 300mm fabs will be built to satisfy the trailing-edge demand that would have been satisfied at 200mm, which will put more pressure on the older fabs to do upgrades or swap-outs just to stay competitive. But that’s not going to be a sudden thing. It’s going to be a slow evolution.

SE: Where do you see quantum computing? Is that still a science project?

Gottscho: It’s not a science project, but it’s something we’re trying to better understand. It’s certainly not a big business today, but it will become more important. One of the more exciting opportunities we see for quantum computing is in materials design. Materials fundamentally are a quantum-mechanical output. Quantum computing is well matched to that computation.

SE: How about manufacturing of quantum technology?

Gottscho: Exactly what the mainstream quantum devices will be isn’t entirely clear. Today we can make existing quantum devices using existing equipment. But the volume isn’t going to be as large as AI chips or memory for many years.

SE: And this ties back to isolation of signals with thinner insulation, so you have to go deeper into the silicon, right?

Gottscho: Yes, and that includes high-aspect-ratio etching and filling.

SE: Do man-made materials become an increasingly competitive market for Lam?

Gottscho: This is less about creating a new material and more about putting down a new material, whether it’s a plasma-deposited material, a sputtered material, or a plated material. You can put copper down in different ways and it’s basically a different material. The equipment and the process you use are every bit as important, if not more important, than the actual material. Cobalt is another example. There are many different ways to put cobalt down. You can electroplate it, electroless-plate it, or put it down through ALD or CVD. They’re all going to have different properties. It’s the same for silicon nitride. There is low-pressure CVD nitride or plasma nitride. Those are dramatically different based on the amount of hydrogen in the film. So it is the combination of the material and the deposition method, the resulting film, that drives competitive advantage. Even with traditional materials, there are opportunities. If you look at carbon hard masks, for example, carbon comes in many different forms. There are many ways to put down a carbon patterning film with very different properties. That’s still a pretty wide open area for development. A lot of our materials work is focused on patterning films. We have several new materials in development. Back-end-of-line metallization is a big problem for the industry. The traditional barrier/liner/seed combinations, put down primarily by PVD, have reached their limits. They can be expensive, they’re not meeting customers’ low-resistivity requirements, and you see new ways to put down metals. We’re replacing PVD with ALD and electroplating. There are extensions to electroplating that result in different filling properties.

SE: All of this is being done on a very small scale. How big of a problem is metrology these days?

Gottscho: Metrology has been a problem for a long time. Our peers in the metrology business keep advancing the state of the art, but like everything else, it’s becoming more and more expensive because it’s harder to detect defects and contamination. A viable option involves big data, where you deduce how much is on the wafer from less-direct measurement. Virtual metrology is being realized. That can replace real metrology for a fraction of the cost. So every time you run a wafer, you can mine that processing data and connect it to the physical direct measurement. Now you have a proxy for that measurement, and you can afford to make more measurements effectively. You have a lot more data, and you just have to establish the correlation with yield. You have that correlation challenge even with traditional metrology, because your in-line metrology isn’t measuring an electrical result. It’s measuring some physical parameter that is connected to ultimate yield and performance.
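Virtual metrology as described here, predicting a physical measurement from per-wafer tool data, can be sketched minimally. The sensor feature, the numbers, and the linear form below are all assumptions for illustration:

```python
# Minimal virtual-metrology sketch (assumed data, not a production
# model): fit a proxy model that predicts a physically measured result
# (e.g. an etched CD) from tool sensor data logged on every wafer,
# then apply it to wafers that never visit the metrology tool.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b (one sensor feature)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Calibration wafers: a logged sensor signal (say, mean RF bias in W)
# paired with a direct CD measurement in nm from in-line metrology.
sensor = [148.0, 150.0, 152.0, 154.0]
measured_cd = [25.4, 25.0, 24.6, 24.2]

a, b = fit_linear(sensor, measured_cd)

# Production wafer with no direct measurement: predict its CD from
# the sensor trace alone -- the "virtual" measurement.
predicted = a * 151.0 + b
print(f"virtual CD: {predicted:.2f} nm")
```

In practice the model would use many sensor channels and be recalibrated as real measurements trickle in, but the core idea is the same: the sensor trace becomes a cheap proxy for the expensive direct measurement.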

SE: In AI systems, you’re looking at this from a distribution standpoint. You don’t know if they will fall within the proper distribution at the time of manufacturing because these systems adapt.

Gottscho: Yes, and that needs to be fed back. Very much related to that is the combination of very precise metrology like e-beam inspection, which is really good at finding things but prohibitively expensive to scan across the whole wafer. What you do is use that to effectively calibrate a model. That model can be purely physics-based, but more likely it’s a hybrid model that has physics in it along with a machine-learning algorithm. You put those two things together and now you have the ability to understand your whole defect map and you can take corrective action accordingly. That’s happening today. It’s combining expensive metrology with relatively inexpensive data-based algorithms, and connecting those two things together so you can use one to calibrate the other. That has to be more prominent in the future.
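A hybrid of that kind can be sketched in a few lines; the physics model, the calibration sites, and the residual form below are all invented for illustration. A cheap physics prior is evaluated everywhere, and a residual model fitted at a few expensive e-beam sites corrects it across the wafer.

```python
# Hedged sketch of a hybrid physics + ML model (all functions and data
# are illustrative assumptions): a simple physics-based prior predicts
# a defect-related quantity across the wafer, and a residual model,
# calibrated at a few expensive e-beam sites, corrects it everywhere.

def physics_prior(r):
    """Assumed physics model: the effect grows toward the wafer edge (r in mm)."""
    return 0.1 + 0.002 * r

# Sparse, expensive e-beam calibration sites: radius (mm) -> measured value.
calib = {0.0: 0.12, 75.0: 0.27, 145.0: 0.45}

# Fit the residual (measurement minus physics prior) with least squares in r.
rs = list(calib)
resid = [calib[r] - physics_prior(r) for r in rs]
n = len(rs)
mr, me = sum(rs) / n, sum(resid) / n
slope = (sum((r - mr) * (e - me) for r, e in zip(rs, resid))
         / sum((r - mr) ** 2 for r in rs))
intercept = me - slope * mr

def hybrid(r):
    """Physics prior plus the learned residual correction."""
    return physics_prior(r) + slope * r + intercept

# Query anywhere on the wafer without another e-beam scan.
print(f"hybrid estimate at r=100mm: {hybrid(100.0):.3f}")
```

The design point is the one made above: the expensive measurement is used only to calibrate, and the calibrated model then covers the whole wafer map at the cost of a cheap evaluation.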
