News Stories: Excerpts of Key News Stories in Major Media
Note: This comprehensive list of news stories is usually updated once a week. Explore our full index to revealing excerpts of key major media news stories on several dozen engaging topics. And don't miss amazing excerpts from 20 of the most revealing news articles ever published.
The bedrock of Google's empire sustained a major blow on Monday after a judge found its search and ad businesses violated antitrust law. The ruling, made by the District of Columbia's Judge Amit Mehta, sided with the US Justice Department and a group of states in a set of cases alleging the tech giant abused its dominance in online search. "Google is a monopolist, and it has acted as one to maintain its monopoly," Mehta wrote in his ruling. The findings, if upheld, could outlaw contracts that for years all but assured Google's dominance. Judge Mehta ruled that Google violated antitrust law in the markets for "general search" and "general search text" ads, which are the ads that appear at the top of the search results page. Apple, Amazon, and Meta are defending themselves against a series of other federal- and state-led antitrust suits, some of which make similar claims. Google's disputed behavior revolved around contracts it entered into with manufacturers of computer devices and mobile devices, as well as with browser services, browser developers, and wireless carriers. These contracts, the government claimed, violated antitrust laws because they made Google the mandatory default search provider. Companies that entered into those exclusive contracts have included Apple, LG, Samsung, AT&T, T-Mobile, Verizon, and Mozilla. Those deals are why smartphones ... come preloaded with Google's various apps.
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech from reliable major media sources.
Liquid capital, growing market dominance, slick ads, and fawning media made it easy for giants like Google, Microsoft, Apple, and Amazon to expand their footprint and grow their bottom lines. Yet ... these companies got lazy, entitled, and demanding. They started to care less about the foundations of their business – like having happy customers and stable products – and more about making themselves feel better by reinforcing their monopolies. Big Tech has decided the way to keep customers isn't to compete or provide them with a better service but instead to make it hard to leave, trick customers into buying things, or eradicate competition so that it can make things as profitable as possible, even if the experience is worse. After two decades of consistent internal innovation, Big Tech got addicted to acquisitions in the 2010s: Apple bought Siri; Meta bought WhatsApp, Instagram, and Oculus; Amazon bought Twitch; Google bought Nest and Motorola's entire mobility division. Over time, the acquisitions made it impossible for these companies to focus on delivering the features we needed. Google, Meta, Amazon, and Apple are simply no longer forces for innovation. Generative AI is the biggest, dumbest attempt that tech has ever made to escape the fallout of building companies by acquiring other companies, taking their eyes off actually inventing things, and ignoring the most important part of their world: the customer.
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech from reliable major media sources.
My insurance broker left a frantic voicemail telling me that my homeowner's insurance had lapsed. When I finally reached him, he told me the reason Travelers revoked my policy: AI-powered drone surveillance. My finances were imperiled, it seemed, by a bad piece of code. As my broker revealed, the ominous threat that canceled my insurance was nothing more than moss. Travelers not only uses aerial photography and AI to monitor its customers' roofs, but also holds patents on the technology – nearly 50 of them. And it may not be the only insurer spying from the skies. No one can use AI to know the future; you're training the technology to make guesses based on changes in roof color and grainy aerial images. But even the best AI models will get a lot of predictions wrong, especially at scale and particularly where you're trying to make guesses about the future of radically different roof designs across countless buildings in various environments. For the insurance companies designing the algorithms, that means a lot of questions about when to put a thumb on the scale in favor of, or against, the homeowner. And insurance companies will have huge incentives to choose against the homeowner every time. When Travelers flew a drone over my house, I never knew. When it decided I was too much of a risk, I had no way of knowing why or how. As more and more companies use more and more opaque forms of AI to decide the course of our lives, we're all at risk.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and the disappearance of privacy from reliable major media sources.
This week, images were beamed back to Earth of China's flag unfurled on the Moon. It's the country's fourth landing there. In the past 12 months, India and Japan have also set down spacecraft on the lunar surface. In February, US firm Intuitive Machines became the first private company to put a lander on the Moon. Meanwhile, Nasa wants to send humans back to the Moon, with its Artemis astronauts aiming for a 2026 landing. China says it will send humans to the Moon by 2030. And instead of fleeting visits, the plan is to build permanent bases. A UN agreement from 1967 says no nation can own the Moon. Instead, the fantastically named Outer Space Treaty says it belongs to everyone, and that any exploration has to be carried out for the benefit of all humankind and in the interests of all nations. While the lunar terrain looks rather barren, it contains minerals, including rare earths, metals like iron and titanium - and helium too, which is used in everything from superconductors to medical equipment. Estimates for the value of all this vary wildly, from billions to quadrillions. So it's easy to see why some see the Moon as a place to make lots of money. In 1979, an international treaty declared that no state or organisation could claim to own the resources there. Only 17 countries are party to it, and this does not include any country that has been to the Moon. The US passed a law in 2015 allowing its citizens and industries to extract, use and sell any space material.
Note: Along with a rush to mine minerals from the moon, a new arms race in space is starting, led by private companies like SpaceX.
Biomimicry is grounded in the concept that nature, with its 3.8 billion years of evolution, has already solved many of the problems we grapple with today. Animals, plants, and microbes are the consummate engineers. Rather than designing new technology from scratch, scientists and engineers often look to nature for inspiration. In its approach, biomimicry involves three essential aspects: Emulating natural forms, mimicking natural processes, and imitating ecosystems. Nature operates under specific principles: it runs on sunlight, uses only the energy it needs, fits form to function, recycles everything, rewards cooperation, banks on diversity, demands local expertise, curbs excesses from within, and taps the power of limits. By aligning our technologies and practices with these principles, we can create not only innovative but also sustainable solutions to our challenges. Architects and engineers are designing buildings that mimic termite mounds, which maintain constant temperature despite external fluctuations. This biomimetic approach reduces energy consumption for heating and cooling. The self-cleaning properties of lotus leaves have inspired a whole range of products. The microscopic structure of a lotus leaf repels water and dirt particles. Today, we have self-cleaning paints and fabrics based on this principle. The bumps on the fins of humpback whales, called tubercles, increase their efficiency in water. Applying this concept to the design of wind turbines has led to blades that produce power more efficiently. Corals can heal themselves by producing an organic mineral in response to physical damage. Scientists have mimicked this process to create a type of concrete that can "heal" its own cracks.
Note: Explore more positive stories like this on healing the Earth and technology for good.
The average American today spends nearly 90 percent of their time indoors. Yet research indicates that children benefit greatly from time spent in nature; that not only does it improve their cognition, mood, self-esteem and social skills, but it can also make them physically healthier and less anxious. "Outdoor time for children is beneficial not just for physical health but also mental health for a multitude of reasons," says Janine Domingues, a senior psychologist in the Anxiety Disorders Center at the Child Mind Institute. "It fosters curiosity and independence. It helps kids get creative about what they can do … and then just moving around and expending energy has a lot of physical health benefits." [A] 2022 systematic review found that time outdoors can improve prosocial behaviors, including sharing, cooperating and comforting others. Research has found that nature can be particularly helpful for those who've had adverse childhood experiences. Such experiences can include growing up with poverty, abuse or violence. One 2023 study published in the Journal of Environmental Psychology looked at how making art in nature affected about 100 children in a low-income neighborhood in England. Their confidence, self-esteem and agency all improved. For all these reasons, it's important for even very young children to have access to nature where they already are, says Nilda Cosco, a research professor.
Note: Explore more positive stories like this about reimagining education.
It sounds like a scene from a Spielberg film: an injured worker undergoes an emergency amputation, performed by one of her colleagues, allowing her to live another day. But this is not a human story – it is behaviour seen in ants. While it is not the first time wound care has been seen in ants, scientists say their discovery is the first example of a non-human animal carrying out life-saving amputations. Surprisingly, the insects appear to tailor the treatment they give to the location of injury. "The ants are able to diagnose, to some extent, the wounds and treat them accordingly to maximise the survival of the injured," said Dr Erik Frank, from the University of Lausanne. Writing in the journal Current Biology, Frank and colleagues report how they cut Florida carpenter ants (Camponotus floridanus) on their right hind limb, then observed the responses of their nest mates for a week. "Nest mates would begin licking the wound before moving up the injured limb with their mouthparts until they reached the trochanter. The nest mates then proceeded to repeatedly bite the injured leg until it was cut off," the team wrote. By contrast, no amputations were observed for the nine ants with injuries on their tibia, or lower leg. Instead, these ants received only wound care from their nest mates in the form of licking. "It is another example of an adaptation in the lives of social insect workers in which workers help each other to work for their colony and to help their colony," [Prof Francis Ratnieks at the University of Sussex] said. "Such as when a worker honeybee makes a waggle dance to direct a nest mate to food, or when a worker sacrifices its life in defence of the colony, or here where workers amputate the limbs of an injured or infected worker."
Note: Explore more positive stories like this about animal wonders.
The National Science Foundation spent millions of taxpayer dollars developing censorship tools powered by artificial intelligence that Big Tech could use "to counter misinformation online" and "advance state-of-the-art misinformation research." House investigators on the Judiciary Committee and Select Committee on the Weaponization of Government said the NSF awarded nearly $40 million ... to develop AI tools that could censor information far faster and at a much greater scale than human beings. The University of Michigan, for instance, was awarded $750,000 from NSF to develop its WiseDex artificial intelligence tool to help Big Tech outsource the "responsibility of censorship" on social media. The release of [an] interim report follows new revelations that the Biden White House pressured Amazon to censor books about the COVID-19 vaccine and comes months after court documents revealed White House officials leaned on Twitter, Facebook, YouTube and other sites to remove posts and ban users whose content they opposed, even threatening the social media platforms with federal action. House investigators say the NSF project is potentially more dangerous because of the scale and speed of censorship that artificial intelligence could enable. "AI-driven tools can monitor online speech at a scale that would far outmatch even the largest team of 'disinformation' bureaucrats and researchers," House investigators wrote in the interim report.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and censorship from reliable sources.
Once upon a time ... Google was truly great. A couple of lads at Stanford University in California had the idea to build a search engine that would crawl the world wide web, create an index of all the sites on it and rank them by the number of inbound links each had from other sites. The arrival of ChatGPT and its ilk ... disrupts search behaviour. Google's mission – "to organise the world's information and make it universally accessible" – looks like a much more formidable task in a world in which AI can generate infinite amounts of humanlike content. Vincent Schmalbach, a respected search engine optimisation (SEO) expert, thinks that Google has decided that it can no longer aspire to index all the world's information. That mission has been abandoned. "Google is no longer trying to index the entire web," writes Schmalbach. "In fact, it's become extremely selective, refusing to index most content. This isn't about content creators failing to meet some arbitrary standard of quality. Rather, it's a fundamental change in how Google approaches its role as a search engine." The default setting from now on will be not to index content unless it is genuinely unique, authoritative and has "brand recognition". "They might index content they perceive as truly unique," says Schmalbach. "But if you write about a topic that Google considers even remotely addressed elsewhere, they likely won't index it. This can happen even if you're a well-respected writer with a substantial readership."
Note: WantToKnow.info and other independent media websites are disappearing from Google search results because of this. For more along these lines, see concise summaries of deeply revealing news articles on AI and censorship from reliable sources.
Google and a few other search engines are the portal through which several billion people navigate the internet. Many of the world's most powerful tech companies, including Google, Microsoft, and OpenAI, have recently spotted an opportunity to remake that gateway with generative AI, and they are racing to seize it. Nearly two years after the arrival of ChatGPT, and with users growing aware that many generative-AI products have effectively been built on stolen information, tech companies are trying to play nice with the media outlets that supply the content these machines need. The start-up Perplexity ... announced revenue-sharing deals with Time, Fortune, and several other publishers. These publishers will be compensated when Perplexity earns ad revenue from AI-generated answers that cite partner content. The site does not currently run ads, but will begin doing so in the form of sponsored "related follow-up questions." OpenAI has been building its own roster of media partners, including News Corp, Vox Media, and The Atlantic. Google has purchased the rights to use Reddit content to train future AI models, and ... appears to be the only major search engine that Reddit is permitting to surface its content. The default was once that you would directly consume work by another person; now an AI may chew and regurgitate it first, then determine what you see based on its opaque underlying algorithm. Many of the human readers whom media outlets currently show ads and sell subscriptions to will have less reason to ever visit publishers' websites. Whether OpenAI, Perplexity, Google, or someone else wins the AI search war might not depend entirely on their software: Media partners are an important part of the equation. AI search will send less traffic to media websites than traditional search engines. The growing number of AI-media deals, then, is a shakedown. AI is scraping publishers' content whether they want it to or not: Media companies can be chumps or get paid.
Note: The AI search war has nothing to do with journalists and content creators getting paid and acknowledged for their work. It's all about big companies doing deals with each other to control our information environment and capture more consumer spending. For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Tech from reliable sources.
Amazon has been accused of using "intrusive algorithms" as part of a sweeping surveillance program to monitor and deter union organizing activities. Workers at a warehouse run by the technology giant on the outskirts of St Louis, Missouri, are today filing an unfair labor practice charge with the National Labor Relations Board (NLRB). A copy of the charge ... alleges that Amazon has "maintained intrusive algorithms and other workplace controls and surveillance which interfere with Section 7 rights of employees to engage in protected concerted activity". There have been several reports of Amazon surveilling workers over union organizing and activism, including human resources monitoring employee message boards, software to track union threats and job listings for intelligence analysts to monitor "labor organizing threats". Artificial intelligence can be used by warehouse employers like Amazon "to essentially have 24/7 unregulated and algorithmically processed and recorded video, and often audio data of what their workers are doing all the time", said Seema N Patel ... at Stanford Law School. "It enables employers to control, record, monitor and use that data to discipline hundreds of thousands of workers in a way that no human manager or group of managers could even do." The National Labor Relations Board issued a memo in 2022 announcing its intent to protect workers from AI-enabled monitoring of labor organizing activities.
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.
Meredith Whittaker practises what she preaches. As the president of the Signal Foundation, she's a strident voice backing privacy for all. In 2018, she burst into public view as one of the organisers of the Google walkouts, mobilising 20,000 employees of the search giant in a twin protest over the company's support for state surveillance and failings over sexual misconduct. The Signal Foundation ... exists to "protect free expression and enable secure global communication through open source privacy technology". The criticisms of encrypted communications are as old as the technology: allowing anyone to speak without the state being able to tap into their conversations is a godsend for criminals, terrorists and paedophiles around the world. But, Whittaker argues, few of Signal's loudest critics seem to be consistent in what they care about. "If we really cared about helping children, why are the UK's schools crumbling? Why was social services funded at only 7% of the amount that was suggested to fully resource the agencies that are on the frontlines of stopping abuse? Signal either works for everyone or it works for no one. Every military in the world uses Signal, every politician I'm aware of uses Signal. Every CEO I know uses Signal because anyone who has anything truly confidential to communicate recognises that storing that on a Meta database or in the clear on some Google server is not good practice."