To Fix the Supply Chain Mess, Take on Wall Street


Last February, President Joe Biden issued an executive order commanding agencies throughout the government to report on ways to fix America’s supply chain mess. He ordered the secretary of the Department of Health and Human Services to tell him what to do about the country’s near-total dependence on China for the key ingredients needed to produce vital pharmaceuticals. He tasked the secretaries of the Energy and Defense Departments with finding solutions to our growing dependence on foreign corporations for the materials needed to make everything from electric vehicle batteries to computer-guided munitions.

And he ordered the secretary of the Department of Commerce to come up with solutions to what is perhaps the most urgent supply chain bottleneck of them all: the acute shortage of semiconductors that is driving up prices and limiting the availability of a broad range of consumer goods, from cars and TVs to laptops, phones, and even household appliances like washing machines and toasters.

When the reports came back 100 days later, the agencies listed a variety of different factors at work, but all agreed on one root cause. As the White House politely summarized it, the big problem was “misaligned incentives and short-termism in private markets.” Noting that America’s major corporations had spent the past decade distributing nearly all of their net income to shareholders in the form of stock buybacks and dividends, the White House concluded that “a focus on maximizing short-term capital returns had led to the private sector’s underinvestment in long-term resilience.” In other words, the ultimate explanation is Wall Street greed.

Yet strangely, when it came to proposing solutions, the White House had nothing to say about reining in the power of financiers over American business. Instead, the administration called for more government spending on science and technology, plus a wide range of new direct and indirect corporate subsidies for computer chip makers. Since then, with White House encouragement, a bipartisan coalition has passed a bill in the Senate, the U.S. Innovation and Competition Act, that takes the same approach. It offers more than $50 billion in subsidies to domestic semiconductor manufacturers, for example, but without taking any measures to ensure that the companies don’t continue to offshore production or use the funds to increase CEO pay or buy back their own stock. On November 15, Senate Majority Leader Chuck Schumer announced plans to push the bill through the House by attaching it to a must-pass defense policy bill.

There’s nothing wrong per se with government using corporate subsidies to achieve public purposes. But this legislation doesn’t address the core problem the administration itself identified. Over the past 40 years, financial deregulation, lax antitrust enforcement, and poorly conceived trade policies have shifted power away from people who know how to invent, manufacture, and deliver products and toward people who know how to make money through financial manipulation. Until we take on that problem, our country’s ability to ensure uninterrupted access to the microprocessors and other vital components and raw materials on which our security and prosperity depend will only grow more vulnerable to disruption and even collapse.

To see how this dynamic works, consider the fate of a company that not so long ago was a symbol of America’s absolute technological dominance of the digital age, from that era’s inception in the early 1950s well into the first decade of the 21st century. The decline of Intel nicely illustrates what happens when a tech corporation pays more attention to raising its stock price than to coming up with better products.

Many of the first chip technologies were pioneered by Intel. They included the 8088 microprocessor, released in 1979, which long served as the foundation of personal computers. They also included the first commercially available dynamic random access memory (DRAM) chip, released in 1970, the basis for a computer’s main working memory.

The personal computer revolution of the 1980s and ’90s created a boom in demand for Intel’s products, and for years the corporation prospered. But as time went by, Intel increasingly used its resources to focus on protecting its monopoly profits in the microprocessor market for PCs and on buying back its own stock to boost the price.

To protect its monopoly, Intel used illegal tactics such as loss leading and subsidizing the advertising costs of PC makers that used Intel chips. Through such maneuvers, Intel so weakened its one remaining U.S. rival, Advanced Micro Devices, that AMD was forced to sell off its own manufacturing facilities to the semiconductor firm GlobalFoundries, which was controlled by a state-owned investment firm in the United Arab Emirates. With AMD knocked down, Intel could deliver even more of the short-term profits Wall Street demanded.

To further boost its stock price, Intel purchased $48.3 billion of its own stock from other investors between 2001 and 2010 rather than put the money to work making more advanced chips for the mobile revolution, or building new foundries. Intel even spent $10.6 billion on stock buybacks in 2005, the same year it successfully lobbied the government for increased subsidies for its nanotechnology research. In all, 113 percent of Intel’s net income during that decade went to stock buybacks and dividend payouts.

Then the process actually accelerated. Between 2011 and 2015, Intel spent $36 billion in buybacks and $22 billion in dividends to shareholders. From 2016 to 2020, these numbers increased to $45 billion and $27 billion respectively.

Meanwhile, Asian chip makers such as TSMC and Samsung focused mainly on reinvesting their profits in capital expenditures, eclipsing Intel in such spending starting in 2015. TSMC and Samsung also focused on emerging technologies. While Intel was monopolizing the market for PC and server microprocessors, other semiconductor companies were breaking into the emerging smartphone and tablet market and using TSMC’s and Samsung’s foundry services to make more advanced chips. The manufacturing expertise of these two companies benefited immensely from making the smaller, more efficient chips needed for tablets and phones. TSMC now has a market share of more than 80 percent in these smaller chips, which are also being used to power the highest-end PCs, while Intel is still a year away from developing them.

TSMC and Samsung are not likely to give up their lead. TSMC is moving fast to protect its technological prowess, with plans to spend $28 billion on capacity to keep up with demand for chips made on 200mm wafers, the mature chips used in appliances and automobiles, and to expand production of more advanced process technologies, which so far only TSMC and Samsung have mastered. To compete with TSMC, Samsung has announced plans to spend $151 billion just on logic (non-memory) chips, the main chips used in computers and electronics and the largest segment of the semiconductor market.

This disparity in investment is producing a growing innovation gap. TSMC’s 2021 capital expenditures are expected to be 133 percent greater than Intel’s. With that investment, TSMC is planning to produce highly advanced 3-nanometer chips by next year while also starting construction on a facility to make even smaller, more advanced chips. Even Apple, which long used Intel chips in its computers, is now contracting with TSMC for the production of the super-advanced M1 chips that drive its latest generation of Macs. As Bloomberg recently summarized the situation, “As chip rivals struggle, TSMC moves in for the kill.”

Intel’s decline was foreshadowed by the sad fate of Motorola. For years, the company was a pioneer of technological innovation, and a foundation of U.S. economic strength. Motorola created the first pager and the first handheld mobile phone, and its semiconductors helped power the first Macintosh computers. By the 1990s, Motorola led the market in the sale of cellular products.

Motorola became such an innovative company because for most of its history it was led by engineers who interacted with one another through highly decentralized systems of R&D and manufacturing. Indeed, it was long known as a “loose confederation of warring tribes,” with the engineers in each division having control over major decisions. This approach to corporate governance largely worked, until Wall Street began to ratchet up the pressure in the 1990s. Motorola was still profitable, but investors began to push the company to pump up share prices. The result was a plan to restructure Motorola’s priorities around marketing rather than manufacturing. When Motorola’s automobile chip business then came under fierce price competition from TSMC, which was able to loss-lead its products thanks to heavy subsidies by the Taiwanese government, Wall Street demanded that Motorola executives break their company into parts.

In 1998, Motorola sold off part of its semiconductor division to a consortium of private equity firms. Then, in 2001, Motorola began to shut down additional plants, lay off workers, reduce capital investment, and contract with outside manufacturers. Finally, in 2003, with pressure from Wall Street still mounting, Motorola spun off the remainder of its semiconductor division. The spin-off included the sale of its fabrication plant in Tianjin, China, to a Chinese corporation.

Other American semiconductor makers experienced much the same fate, and for the same reasons. LSI Logic, for example, was once a major maker of chips for data storage and consumer electronics. Its day of reckoning came in 2005 when its CEO, Abhi Talwalkar, decided to shutter its chip-fabrication plants—called “fabs”—and outsource production. Talwalkar emphasized the importance of cutting costs to please Wall Street, saying, “The adoption of a fabless model … is the right manufacturing strategy … to enhance value for [LSI’s] shareholders.”

Much the same story even played out at IBM, the company that invented modern computing. In 2014, under the leadership of CEO Ginni Rometty, the corporation announced a “strategic realignment” that meant retreating from any lines of business that might require increased funding to compete, like semiconductor fabrication. IBM, once the world’s third-largest maker of semiconductors, paid GlobalFoundries $1.5 billion to take over its semiconductor plants, calling the division unprofitable. That same year, the corporation engaged in billions in stock buybacks.

Justifying her actions later, Rometty said, “When it comes to managing for the long term, we sold our semiconductor manufacturing operations last year. We did it in order to move to higher value.”

Why did America’s most innovative tech companies become primarily focused on pleasing shareholders rather than pursuing innovation and long-term growth? A series of policy mistakes, some familiar and others obscure, provides the answer.

One lesser-known yet consequential policy change came in 1982, when the Securities and Exchange Commission (SEC) quietly issued a new rule, known as Rule 10b-18, that established so-called safe harbor protections shielding companies and their executives from charges of stock price manipulation when they buy back their own shares. This policy change led to a radical shift in how CEOs are compensated—with traditional salaries being largely replaced by stock options. This change meant that CEOs now personally profited when they took measures to jack up short-term stock prices. “If you keep earnings up and pump up stock prices, you’re a success,” says Bill Reinsch, former undersecretary of export administration at the Department of Commerce.

Another key policy change came in 1992, when the SEC issued proxy rules that for the first time allowed shareholders to communicate freely with one another and to make public statements without first filing with the commission. What this meant in practice was that powerful financiers and large institutional investors could form cartels that concentrated both financial and political pressure on executive teams to cut costs and raise stock prices.

Then, in 1996, came the National Securities Markets Improvement Act. The bill allowed hedge funds to pool unlimited funds from institutional investors, ushering in the era of private equity. As financiers became able to take bigger stakes in corporations, stock “raiders” like Carl Icahn and Daniel Loeb found that they could exert even more direct pressure on managers and board members.

Meanwhile, trade agreements made by presidents of both parties weakened the federal government’s ability to pursue a coherent industrial policy by surrendering sovereignty to entities like the World Trade Organization. Because of such agreements, when China, Taiwan, and South Korea began using direct state subsidies to ramp up their own semiconductor industries in the 1990s and target American firms, the U.S. government was largely powerless to respond.

These measures, in combination with a relaxation of antitrust enforcement that began in the early 1980s, led to what is now often called the “financialization” of the U.S. economy. During this period, many “thought leaders” defended giving Wall Street more power over other sectors of the economy as a way to more efficiently allocate capital, labor, and other resources. What we got instead was an industrial system controlled by a few highly predatory Wall Street bosses who demanded short-term profit maximization. The result was the destruction of many of our most important industrial capacities and arts, and a fundamental undermining of our economic and national security, as demonstrated over the past two years by a long series of industrial production failures and supply chain collapses.

What’s to be done? It may be too much to ask Congress and the Biden administration to undo 40 years of bad trade and competition policy with a single stroke. But it’s not too much to ask that they at least not throw unconditional subsidies at the same people who engineered this mess. Under the current rules of the game, there is a good chance that many, if not most, of the subsidies we offer to tech firms will just flow to self-dealing CEOs and the Wall Street bosses they serve.

At a minimum, the semiconductor bill the Senate is about to send to the House needs to be redrafted to impose firm conditions on the corporations that receive these public monies. For example, the bill could mandate that they not buy back stock, send jobs offshore, or pay executives more than 50 times what their median worker earns. If we are going to pay corporations to build factories in the U.S., we might even, as Vermont Senator Bernie Sanders has called for, demand in return that the public get some equity in those companies.

More broadly, why shouldn’t we push for legislation that would require any company selling essential finished goods in the United States—whether prescription drugs, personal protective equipment, cars, or components needed for national defense—to document that it has multiple, geographically diverse suppliers? We don’t need all supply chains to be domestically sourced, but we do need to make sure, as a matter of public health, national security, and, ultimately, sustainability, that our supply chains do not become too concentrated in a single nation or region or under the control of a single corporation.

If there’s good news, it’s that key players in Washington finally appear to be waking up to this threat. Earlier this year, the Pentagon’s Office of Industrial Policy published its annual report on the industrial base and critical technologies, which strongly highlighted the dangers of the shareholder-activist philosophy and the threat the financial industry poses to sectors critical to our national security.

“Together, a US business climate that has favored short-term shareholder earnings (versus long-term capital investment), deindustrialization, and an abstract, radical vision of ‘free trade,’ without fair trade enforcement, have severely damaged America’s ability to arm itself today and in the future,” the Pentagon warned. “Our national responses—off-shoring and out-sourcing—have been inadequate and ultimately self-defeating.”

Or, as the economist Bill Lazonick puts it, “Americans ought to know that what [Wall Street] is being allowed to do is the real enemy of America.”
