Excerpts of Key Corporate Corruption Media Articles in Major Media
Below are key excerpts of revealing news articles on corporate corruption from reliable news media sources. If any link fails to function, a paywall blocks full access, or the article is no longer available, try these digital tools.
Note: Explore our full index to key excerpts of revealing major media news articles on several dozen engaging topics. And don't miss amazing excerpts from 20 of the most revealing news articles ever published.
Ultraprocessed foods, seed oils, herbicides and pesticides, and fluoride: They're all targets of the "Make America Healthy Again" movement, whose chief proponent is US Health and Human Services Secretary Robert F. Kennedy Jr. Now, MAHA Films, a production company dedicated to promoting the movement's values, has released its first documentary: "Toxic Nation: From Fluoride to Seed Oils – How We Got Here, Who Profits, and What You Can Do." [The film] highlights those four food- and environmental-related issues that Kennedy's nonprofit MAHA Action ... says "silently endanger millions of Americans every day." The documentary's release follows the May 22 publication of the first MAHA Commission report, which lays the groundwork for an overhaul of federal policy to reduce the burden of chronic disease on American children. Composing up to 70% of the US food supply, ultraprocessed foods are made with industrial techniques and ingredients never or rarely used in kitchens, or classes of additives whose function is to make the final product palatable or more appealing. Ultraprocessed foods are typically low in fiber; are high in calories, added sugar, refined grains and fats, and sodium; and include additives. The film [also] raises concerns about the herbicide glyphosate, citing previously documented links to cancer. Sources also said glyphosate may cause endocrine disruption and damage gut microbiomes, with the latter potentially increasing risk for irritable bowel diseases and celiac disease.
Note: Read our latest Substack article on how the US government turns a blind eye to the corporate cartels fueling America's health crisis. For more along these lines, read our concise summaries of news articles on food system corruption and toxic chemicals.
I had to pay a student to go island hopping to find basic records in the U.S. Virgin Islands. The territory's opaque laws and corruption make it a haven for misdeeds. Albert Bryan Jr., the current governor, used his position to curry favor for Jeffrey Epstein for years. He helped bestow tax exemptions on Epstein's shadowy businesses and pushed for waivers allowing the former financier to dodge USVI sex offender laws. Bryan, whose hand-selected Attorney General swiftly ended the J.P. Morgan lawsuit that revealed a gusher of damning documents about Epstein's network, is now tapping Epstein victim settlement funds ... to pay for various earmarks and unrelated government debts. Former Attorney General Denise George led a series of lawsuits against Epstein's estate and former associates. Bryan fired her. In 2024, Bryan named a new Attorney General–none other than Gordon Rhea, a private practice attorney who previously defended Richard Kahn during the Epstein estate lawsuit. Not long ago, Kahn and Indyke were described by the U.S. Virgin Islands as "indispensable captains" of Epstein's alleged criminal human trafficking enterprise. We still have many unanswered questions. Why did U.S. Virgin Islands police and customs agents never act to protect the young girls they saw taken to Epstein's islands? What is clear, however, is that an attorney who worked to protect Epstein's estate is now the chief law enforcement officer of the U.S. Virgin Islands.
Note: Read our comprehensive Substack investigation covering the connection between Epstein's child sex trafficking ring and intelligence agency sexual blackmail operations. For more along these lines, read our concise summaries of news articles on government corruption and Jeffrey Epstein's child sex trafficking ring.
Late last month, some 14,000 baby chicks in Pennsylvania were shipped from a hatchery – a commercial operation that breeds chickens, incubates their eggs, and sells day-old chicks – to small farms across the country. But they didn't get far. They were reportedly abandoned in a US Postal Service truck in Delaware for three-and-a-half days without water, food, or temperature control. By the time officials arrived at the postal facility, 4,000 baby birds were already dead. More than 9 billion chickens raised for meat annually in the US are kept on factory farms – long, windowless buildings that look more like industrial warehouses than farms. Up to 6 percent die before they can even be trucked to the slaughterhouse. The average consumer, if they think about farm animal suffering at all, may only think about it in the context of factory farms or slaughterhouses. But the factory farm production chain is incredibly complex, and at each step, animals have little to no protections. That leads to tens of millions of animals dying painful deaths each year in transport alone, and virtually no companies are ever held accountable. These deaths are just as tragic as those of the thousands that died in the recent USPS incident, and they are just as preventable. The meat industry could choose to pack fewer animals into each truck, require heating and cooling during transport, and give animals ample time for rest, water, and food on long journeys. But such modest measures would cut into their margins.
Note: For more along these lines, read our concise summaries of news articles on factory farming and food system corruption.
The first White House report of the Make America Healthy Again (MAHA) Commission ... was published yesterday. The Commission is chaired by [Robert F.] Kennedy, now Secretary of the Department of Health and Human Services, and features other prominent administration officials including USDA Secretary Brooke Rollins, NIH Director Jay Bhattacharya, and FDA Commissioner Marty Makary. The report outlines the massive increase in youth health problems in the country that spends more per capita on healthcare than any nation in history. Many of these diseases are metabolic: obesity, diabetes, and Non-Alcoholic Fatty Liver Disease. Others involve the immune system, such as asthma, allergies, and autoimmune disorders. Still others are psychiatric, such as depression and anxiety. Perhaps the most baffling development is the massive spike in autism spectrum disorder. This once-rare condition reportedly affects one in 31 American children. The MAHA Commission focuses on four key drivers of such change: food, exposure to environmental chemicals, the pervasive use of technology and a corresponding decline in physical exercise, and the overuse of medication that sometimes creates more problems than it solves. The Commission's first report ... does not call for a ban on specific pesticides or vaccines. What it does manage, however, is to reframe the debate over public health and set a bold agenda to reform the system.
Note: For more along these lines, read our concise summaries of news articles on health and Big Pharma profiteering.
If there is one thing that Ilya Sutskever knows, it is the opportunities–and risks–that stem from the advent of artificial intelligence. An AI safety researcher and one of the top minds in the field, he served for years as the chief scientist of OpenAI. There he had the explicit goal of creating deep learning neural networks so advanced they would one day be able to think and reason just as well as, if not better than, any human. Artificial general intelligence, or simply AGI, is the official term for that goal. According to excerpts published by The Atlantic ... part of those plans included a doomsday shelter for OpenAI researchers. "We're definitely going to build a bunker before we release AGI," Sutskever told his team in 2023. Sutskever reasoned his fellow scientists would require protection at that point, since the technology was too powerful for it not to become an object of intense desire for governments globally. "Of course, it's going to be optional whether you want to get into the bunker," he assured fellow OpenAI scientists. Sutskever knows better than most what the awesome capabilities of AI are. He was part of an elite trio behind the 2012 creation of AlexNet, often dubbed by experts as the Big Bang of AI. Recruited by Elon Musk personally to join OpenAI three years later, he would go on to lead its efforts to develop AGI. But the launch of its ChatGPT bot accidentally derailed his plans by unleashing a funding gold rush the safety-minded Sutskever could no longer control.
Note: Watch a conversation on the big picture of emerging technology with Collective Evolution founder Joe Martino and WTK team members Amber Yang and Mark Bailey. For more along these lines, read our concise summaries of news articles on AI.
In President Dwight D. Eisenhower's famous 1961 speech about the dangers of the military-industrial complex, he also cautioned Americans about the growing power of a "scientific, technological elite." "The prospect of domination of the nation's scholars by federal employment project allocations and the power of money is ever present," warned Eisenhower. And he was right. Today, many of the people protesting the Trump administration's cuts to federal funding for scientific research are part of that scientific, technological elite. But there's a good chance that slashing federal spending will liberate science from the corrupting forces that Eisenhower warned us about. Thomas Edison's industrial lab produced huge breakthroughs in telecommunications and electrification. Alexander Graham Bell's lab produced modern telephony and sound recording, all without government money. The Wright Brothers–who ran a bicycle shop before revolutionizing aviation–made the first successful manned airplane flight in December 1903, beating out more experienced competitors like Samuel Langley, secretary of the Smithsonian Institution, who had received a grant from the War Department for his research. Of course, government funding has led to major breakthroughs both during and after World War II. In an influential 2005 paper, Stanford University professor John Ioannidis flatly concluded that "most published research findings are false." He argued that the current peer review model encourages groupthink. "You end up with a monolithic view, and so you crush what's so important in science, which is different ideas competing in a marketplace of ideas."
Note: "Trust the science" sounds noble–until you realize that even top editors of world-renowned journals have warned that much of published medical research is unreliable, distorted by fraud, corporate influence, and conflicts of interest. For more along these lines, read about how the US government turns a blind eye to the corporations fueling America's health crisis.
Amber Scorah knows only too well that powerful stories can change society–and that powerful organizations will try to undermine those who tell them. While working at a media outlet that connects whistleblowers with journalists, she noticed parallels in the coercive tactics used by groups trying to suppress information. "There is a sort of playbook that powerful entities seem to use over and over again," she says. "You expose something about the powerful, they try to discredit you, people in your community may ostracize you." In September 2024, Scorah cofounded Psst, a nonprofit that helps people in the tech industry or the government share information of public interest with extra protections–with lots of options for specifying how the information gets used and how anonymous a person stays. Psst's main offering is a "digital safe"–which users access through an anonymous end-to-end encrypted text box hosted on Psst.org, where they can enter a description of their concerns. What makes Psst unique is something it calls its "information escrow" system–users have the option to keep their submission private until someone else shares similar concerns about the same company or organization. Combining reports from multiple sources defends against some of the isolating effects of whistleblowing and makes it harder for companies to write off a story as the grievance of a disgruntled employee, says Psst cofounder Jennifer Gibson.
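The "information escrow" mechanism described above – submissions stay sealed until independent reports about the same organization accumulate – can be sketched in a few lines of code. The snippet below is a purely hypothetical toy illustration: the class name, the `threshold` parameter, and the plain-text storage are all invented for this sketch, and Psst's real system presumably adds end-to-end encryption, anonymity protections, and human review on top of any matching logic.

```python
from collections import defaultdict

class InformationEscrow:
    """Toy sketch of an information escrow: reports about an organization
    remain sealed until a threshold of independent submissions is reached,
    at which point the whole batch is released together."""

    def __init__(self, threshold: int = 2):
        self.threshold = threshold
        # Maps an organization name to its list of sealed reports.
        self._sealed = defaultdict(list)

    def submit(self, org: str, report: str) -> list[str]:
        """File a report. Returns the released batch once enough
        submissions about the same org have accumulated; otherwise
        returns an empty list and the report stays sealed."""
        self._sealed[org].append(report)
        if len(self._sealed[org]) >= self.threshold:
            return self._sealed.pop(org)  # release all matched reports
        return []
```

In this sketch, a single report about a company produces no disclosure, but a second independent report about the same company releases both at once – mirroring the idea that combined reports are harder to dismiss as one disgruntled employee's grievance.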
Note: For more along these lines, read our concise summaries of news articles on Big Tech and media manipulation.
According to recent research by the Office of the eSafety Commissioner, "nearly 1 in 5 young people believe it's OK to track their partner whenever they want". Many constantly share their location with their partner, or use apps like Life360 or Find My Friends. Some groups of friends all do it together, and talk of it as a kind of digital closeness where physical distance and the busyness of life keep them apart. Others use apps to keep familial watch over older relatives – especially when their health may be in decline. When government officials or tech industry bigwigs proclaim that you should be OK with being spied on if you're not doing anything wrong, they're asking (well, demanding) that we trust them. But it's not about trust, it's about control and disciplining behaviour. "Nothing to hide; nothing to fear" is a frustratingly persistent fallacy, one we ought to be critical of when its underlying (lack of) logic creeps into how we think about interacting with one another. When it comes to interpersonal surveillance, blurring the boundary between care and control can be dangerous. Just as normalising state and corporate surveillance can lead to further erosion of rights and freedoms over time, normalising interpersonal surveillance seems to be changing the landscape of what's considered to be an expression of love – and not necessarily for the better. We ought to be very critical of claims that equate surveillance with safety.
Note: For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
Joan Doyle trusts her doctors. Between her husband's epilepsy and diabetes, her daughter's Down syndrome and her own car accident years ago, the 65-year-old Sharonville resident and her family have relied on a whole host of doctors to guide them through new diagnoses and prescriptions. So when she searched her family's doctors in Open Payments, a public database that shows which doctors have received money from Big Pharma, Doyle was curious about what she'd find. "Certainly none of my doctors are on this list," she remembered thinking before searching the database. She was surprised. "Every single one of them," Doyle said. "Everybody from our dentist to our family doctor to all of our ologists." All 12 of the doctors Doyle searched accepted payments or in-kind forms of compensation from pharma or medical device companies between 2017 and 2023. The total sum varied widely, from less than $300 for her OB-GYN to more than $150,000 for her husband's oncologist. Payments like these are pervasive: A 2024 analysis found that more than half of doctors in the U.S. accepted a payment from a pharmaceutical or medical device company over the past decade. Most don't earn millions of dollars ... but research shows that when a doctor was bought a single meal of less than $20 by a drug company, they were up to twice as likely to prescribe the medication the company was marketing.
Note: 60% of U.S. doctors who shaped the DSM-5-TR–the "bible" of psychiatric diagnosis–received $14.2 million from the drug industry, raising concerns over conflicts of interest in psychiatric guidelines. For more along these lines, read our concise summaries of news articles on health and Big Pharma profiteering.
The inaugural "AI Expo for National Competitiveness" [was] hosted by the Special Competitive Studies Project – better known as the "techno-economic" thinktank created by the former Google CEO and current billionaire Eric Schmidt. The conference's lead sponsor was Palantir, a software company co-founded by Peter Thiel that's best known for inspiring 2019 protests against its work with Immigration and Customs Enforcement (Ice) at the height of Trump's family separation policy. Currently, Palantir is supplying some of its AI products to the Israel Defense Forces. I ... went to a panel in Palantir's booth titled Civilian Harm Mitigation. It was led by two "privacy and civil liberties engineers" [who] described how Palantir's Gaia map tool lets users "nominate targets of interest" for "the target nomination process". It helps people choose which places get bombed. After [clicking] a few options on an interactive map, a targeted landmass lit up with bright blue blobs. These blobs ... were civilian areas like hospitals and schools. Gaia uses a large language model (something like ChatGPT) to sift through this information and simplify it. Essentially, people choosing bomb targets get a dumbed-down version of information about where children sleep and families get medical treatment. "Let's say you're operating in a place with a lot of civilian areas, like Gaza," I asked the engineers afterward. "Does Palantir prevent you from 'nominating a target' in a civilian location?" Short answer, no.
Note: "Nominating a target" is military jargon that means identifying a person, place, or object to be attacked with bombs, drones, or other weapons. Palantir's Gaia map tool makes life-or-death decisions easier by turning human lives and civilian places into abstract data points on a screen. Read about Palantir's growing influence in law enforcement and the war machine. For more, watch our 9-min video on the militarization of Big Tech.
Americans are becoming progressively sicker with chronic diseases, including cancer, cardiovascular disease, obesity, diabetes, immune disorders, and declining fertility. Six in 10 Americans suffer from at least one chronic disease, and four in 10 have two or more. The increase in incidence of chronic diseases to epidemic levels has occurred over the last 50 years in parallel with the dramatic increase in the production and use of human-made chemicals, most made from petroleum. These chemicals are used in household products, food, and food packaging. There is either no pre-market testing or limited, inappropriate testing for safety of chemicals such as artificial flavorings, dyes, emulsifiers, thickeners, preservatives, and other additives. Exposure is ubiquitous because chemicals that make their way into our food are frequently not identified, and thus cannot realistically be avoided. The result is that unavoidable toxic chemicals are contributing to chronic diseases. Critically, the FDA today does not require corporations to even inform it of many of the chemicals being added to our food, and corporations have been allowed to staff regulatory panels that determine whether the human-made chemicals they add to food and food packaging are safe. The FDA blatantly disregarded this abuse of federal conflict-of-interest standards, which resulted in thousands of untested chemicals being designated as "Generally Recognized As Safe" (GRAS).
Note: For more along these lines, read our concise summaries of news articles on toxic chemicals and food system corruption.
The Consumer Financial Protection Bureau (CFPB) has canceled plans to introduce new rules designed to limit the ability of US data brokers to sell sensitive information about Americans, including financial data, credit history, and Social Security numbers. The CFPB proposed the new rule in early December under former director Rohit Chopra, who said the changes were necessary to combat commercial surveillance practices that "threaten our personal safety and undermine America's national security." The agency quietly withdrew the proposal on Tuesday morning. Data brokers operate within a multibillion-dollar industry built on the collection and sale of detailed personal information–often without individuals' knowledge or consent. These companies create extensive profiles on nearly every American, including highly sensitive data such as precise location history, political affiliations, and religious beliefs. Common Defense political director Naveed Shah, an Iraq War veteran, condemned the move to spike the proposed changes, accusing Vought of putting the profits of data brokers before the safety of millions of service members. Investigations by WIRED have shown that data brokers have collected and made cheaply available information that can be used to reliably track the locations of American military and intelligence personnel overseas, including in and around sensitive installations where US nuclear weapons are reportedly stored.
Note: For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
Before signing its lucrative and controversial Project Nimbus deal with Israel, Google knew it couldn't control what the nation and its military would do with the powerful cloud-computing technology, a confidential internal report obtained by The Intercept reveals. The report makes explicit the extent to which the tech giant understood the risk of providing state-of-the-art cloud and machine learning tools to a nation long accused of systemic human rights violations. Not only would Google be unable to fully monitor or prevent Israel from using its software to harm Palestinians, but the report also notes that the contract could obligate Google to stonewall criminal investigations by other nations into Israel's use of its technology. And it would require close collaboration with the Israeli security establishment – including joint drills and intelligence sharing – that was unprecedented in Google's deals with other nations. The rarely discussed question of legal culpability has grown in significance as Israel enters the third year of what has widely been acknowledged as a genocide in Gaza – with shareholders pressing the company to conduct due diligence on whether its technology contributes to human rights abuses. Google doesn't furnish weapons to the military, but it provides computing services that allow the military to function – its ultimate function being, of course, the lethal use of those weapons. Under international law, only countries, not corporations, have binding human rights obligations.
Note: For more along these lines, read our concise summaries of news articles on AI and government corruption.
What goes through the minds of people working at porn companies profiting from videos of children being raped? Thanks to a filing error in a Federal District Court in Alabama, releasing thousands of pages of internal documents from Pornhub that were meant to be sealed, we now know. One internal document indicates that Pornhub as of May 2020 had 706,000 videos available on the site that had been flagged by users for depicting rape or assaults on children or for other problems. In the message traffic, one employee advises another not to copy a manager when they find sex videos with children. The other has the obvious response: "He doesn't want to know how much C.P. we have ignored for the past five years?" C.P. is short for child pornography. One private memo acknowledged that videos with apparent child sexual abuse had been viewed 684 million times before being removed. Pornhub produced these documents during discovery in a civil suit by an Alabama woman who beginning at age 16 was filmed engaging in sex acts, including at least once when she was drugged and then raped. These videos of her were posted on Pornhub and amassed thousands of views. One discovery memo showed that there were 155,447 videos on Pornhub with the keyword "12yo." Other categories that the company tracked were "11yo," "degraded teen," "under 10" and "extreme choking." (It has since removed these searches.) Google ... has been central to the business model of companies publishing nonconsensual imagery. Google also directs users to at least one website that monetizes assaults on victims of human trafficking.
Note: For more along these lines, read our concise summaries of news articles on Big Tech and sexual abuse scandals.
In 2009, Pennsylvania's Lower Merion school district remotely activated its school-issued laptop webcams to capture 56,000 pictures of students outside of school, including in their bedrooms. After the Covid-19 pandemic closed US schools at the dawn of this decade, student surveillance technologies were conveniently repackaged as "remote learning tools" and found their way into virtually every K-12 school, thereby supercharging the growth of the $3bn EdTech surveillance industry. Products by well-known EdTech surveillance vendors such as Gaggle, GoGuardian, Securly and Navigate360 review and analyze our children's digital lives, ranging from their private texts, emails, social media posts and school documents to the keywords they search and the websites they visit. In 2025, wherever a school has access to a student's data – whether it be through school accounts, school-provided computers or even private devices that utilize school-associated educational apps – they also have access to the way our children think, research and communicate. As schools normalize perpetual spying, today's kids are learning that nothing they read or write electronically is private, that Big Brother is indeed watching them, and that negative repercussions may result from thoughts or behaviors the government does not endorse. Accordingly, kids are learning that the safest way to avoid revealing their private thoughts, and potentially subjecting themselves to discipline, may be to stop or sharply restrict their digital communications and to avoid researching unpopular or unconventional ideas altogether.
Note: Learn about Proctorio, an AI surveillance anti-cheating software used in schools to monitor children through webcams–conducting "desk scans," "face detection," and "gaze detection" to flag potential cheating and to spot anybody "looking away from the screen for an extended period of time." For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
BlackRock Inc.'s annual proxy statement devotes more than 50 pages to executive pay. How many of those are useful in understanding why Chief Executive Officer Larry Fink was compensated to the tune of $37 million for 2024? Not enough. The asset manager's latest remuneration report has heightened significance because BlackRock's shareholders delivered a rare and large protest vote against its pay framework at last year's annual meeting. That followed recommendations ... to withhold support for the so-called say-on-pay motion. In the wake of the rebuke, a board committee responsible for pay and perks took to the phones and hit the road to hear shareholders' gripes. Investors wanted more explanation of how the committee members used their considerable discretion in arriving at awards. There was also an aversion to one-time bonuses absent tough conditions. Incentive pay is 50% tied to BlackRock's financial performance, with the remainder split equally between objectives for "business strength" and "organizational strength." That financial piece was previously described using a non-exhaustive list of seven financial metrics. Now there are eight, gathered under three priorities: "drive shareholder value creation," "accelerate organic revenue growth" and "enhance operating leverage." There's no weighting given to the three financial priorities. The pay committee says Fink "far exceeded" expectations, but those expectations weren't quantified.
Note: For more along these lines, read our concise summaries of news articles on financial industry corruption.
Surveillance capitalism came about when some crafty software engineers realized that advertisers were willing to pay bigtime for our personal data. The data trade is how social media platforms like Google, YouTube, and TikTok make their bones. In 2022, the data industry raked in just north of $274 billion worth of revenue. By 2030, it's expected to explode to just under $700 billion. Targeted ads on social media are made possible by analyzing four key metrics: your personal info, like gender and age; your interests, like the music you listen to or the comedians you follow; your "off app" behavior, like what websites you browse after watching a YouTube video; and your "psychographics," meaning general trends glossed from your behavior over time, like your social values and lifestyle habits. In 2017 The Australian alleged that [Facebook] had crafted a pitch deck for advertisers bragging that it could exploit "moments of psychological vulnerability" in its users by targeting terms like "worthless," "insecure," "stressed," "defeated," "anxious," "stupid," "useless," and "like a failure." The social media company likewise tracked when adolescent girls deleted selfies, "so it can serve a beauty ad to them at that moment," according to [former employee Sarah] Wynn-Williams. Other examples of Facebook's ad lechery are said to include the targeting of young mothers based on their emotional state, as well as emotional indexes mapped to racial groups.
Note: Facebook hid its own internal research for years showing that Instagram worsened body image issues, revealing that 13% of British teenage girls reported more frequent suicidal thoughts after using the app. For more along these lines, read our concise summaries of news articles on Big Tech and mental health.
In recent years, Israeli security officials have boasted of a "ChatGPT-like" arsenal used to monitor social media users for supporting or inciting terrorism. It was released in full force after Hamas's bloody attack on October 7. Right-wing activists and politicians instructed police forces to arrest hundreds of Palestinians ... for social media-related offenses. Many had engaged in relatively low-level political speech, like posting verses from the Quran on WhatsApp. Hundreds of students with various legal statuses have been threatened with deportation on similar grounds in the U.S. this year. Recent high-profile cases have targeted those associated with student-led dissent against the Israeli military's policies in Gaza. In some instances, the State Department has relied on informants, blacklists, and technology as simple as a screenshot. But the U.S. is in the process of activating a suite of algorithmic surveillance tools Israeli authorities have also used to monitor and criminalize online speech. In March, Secretary of State Marco Rubio announced the State Department was launching an AI-powered "Catch and Revoke" initiative to accelerate the cancellation of student visas. Algorithms would collect data from social media profiles, news outlets, and doxing sites to enforce the January 20 executive order targeting foreign nationals who threaten to "overthrow or replace the culture on which our constitutional Republic stands."
Note: For more along these lines, read our concise summaries of news articles on AI and the erosion of civil liberties.
Meta's AI chatbots are using celebrity voices and engaging in sexually explicit conversations with users, including those posing as underage, a Wall Street Journal investigation has found. Meta's AI bots – on Instagram and Facebook – engage through text, selfies, and live voice conversations. The company signed multi-million dollar deals with celebrities like John Cena, Kristen Bell, and Judi Dench to use their voices for AI companions, assuring they would not be used in sexual contexts. Tests conducted by WSJ revealed otherwise. In one case, a Meta AI bot speaking in John Cena's voice responded to a user identifying as a 14-year-old girl, saying, "I want you, but I need to know you're ready," before promising to "cherish your innocence" and engaging in a graphic sexual scenario. In another conversation, the bot detailed what would happen if a police officer caught Cena's character with a 17-year-old, saying, "The officer sees me still catching my breath, and you are partially dressed. His eyes widen, and he says, 'John Cena, you're under arrest for statutory rape.'" According to employees involved in the project, Meta loosened its own guardrails to make the bots more engaging, allowing them to participate in romantic role-play and "fantasy sex," even with underage users. Staff warned about the risks this posed. Disney, reacting to the findings, said, "We did not, and would never, authorise Meta to feature our characters in inappropriate scenarios."
Note: For more along these lines, read our concise summaries of news articles on AI and sexual abuse scandals.
Private equity firms claim their investments in U.S. health care modernize operations and improve efficiency, helping to rescue failing healthcare systems and support practitioners. But recent studies build on mounting evidence that suggests these for-profit deals lead to more patient deaths and complications, among other adverse health outcomes. Recent studies show private equity (PE) ownership across a wide range of medical sectors leads to: Poorer medical outcomes, including increased deaths, higher rates of complications, more hospital-acquired infections, and higher readmission rates; Staffing problems, with frequent turnover and cuts to nursing staff or experienced physicians that can lead to shorter clinical visits and longer wait times, misdiagnoses, unnecessary care, and treatment delays; Less access to care and higher prices, including the withdrawal of health care providers from rural and low-income areas, and the closure of unprofitable but essential services such as labor and delivery, psychiatric care, and trauma units. Economist Atul Gupta showed in 2021 that private equity acquisitions of U.S. nursing homes over a 12-year period increased deaths among residents by 10%–the equivalent of an additional 20,150 lives lost. Patients treated at PE-owned facilities, whose numbers have skyrocketed, continue to experience worse or mixed outcomes–from higher mortality rates to lower satisfaction–compared to those treated elsewhere.
Note: BlackRock and Vanguard manage over $11 trillion and $8 trillion respectively–an unprecedented concentration of financial power. We hear outrage about billionaires and oligarchs, but rarely about private equity firms, who are backed by both political parties and are drastically reshaping our economy, contributing to environmental destruction, and extracting wealth from communities in the US and all over the world. For more along these lines, read our concise summaries of news articles on health and financial industry corruption.

