Privacy News Articles
Excerpts of key news articles on privacy and mass surveillance
Below are key excerpts of revealing news articles on privacy and mass surveillance issues from reliable news media sources. If any link fails to function, a paywall blocks full access, or the article is no longer available, try these digital tools.
Note: Explore our full index to revealing excerpts of key major media news articles on dozens of engaging topics. And read excerpts from 20 of the most revealing news articles ever published.
Human rights activists, journalists and lawyers across the world have been targeted by authoritarian governments using hacking software sold by the Israeli surveillance company NSO Group, according to an investigation into a massive data leak. The investigation by the Guardian and 16 other media organisations suggests widespread and continuing abuse of NSO's hacking spyware, Pegasus. Pegasus is malware that infects iPhones and Android devices to enable operators of the tool to extract messages, photos and emails, record calls and secretly activate microphones. The leak contains a list of more than 50,000 phone numbers that, it is believed, have been identified as those of people of interest by clients of NSO since 2016. The numbers of more than 180 journalists are listed in the data, including reporters, editors and executives at the Financial Times, CNN, the New York Times, France 24, the Economist, Associated Press and Reuters. The phone number of a freelance Mexican reporter, Cecilio Pineda Birto, was found in the list, apparently of interest to a Mexican client in the weeks leading up to his murder, when his killers were able to locate him at a carwash. He was among at least 25 Mexican journalists apparently selected as candidates for surveillance. The broad array of numbers in the list belonging to people who seemingly have no connection to criminality suggests some NSO clients are breaching their contracts with the company, spying on pro-democracy activists and journalists.
Note: Read more about how NSO Group spyware was used against journalists and activists by the Mexican government. For more along these lines, see concise summaries of deeply revealing news articles on intelligence agency corruption and the disappearance of privacy from reliable major media sources.
The Covid-19 pandemic is now giving Russian authorities an opportunity to test new powers and technology, and the country's privacy and free-speech advocates worry the government is building sweeping new surveillance capabilities. Perhaps the most well-publicized tech tool in Russia's arsenal for fighting coronavirus is Moscow's massive facial-recognition system. Rolled out earlier this year, the surveillance system had originally prompted an unusual public backlash, with privacy advocates filing lawsuits over unlawful surveillance. Coronavirus, however, has given an unexpected public-relations boost to the system. Last week, Moscow police claimed to have used facial recognition and a 170,000-camera system to catch and fine 200 people who violated quarantine and self-isolation. Some of the alleged violators who were fined had been outside for less than half a minute before they were picked up by a camera. And then there's the use of geolocation to track coronavirus carriers. Prime Minister Mikhail Mishustin earlier this week ordered Russia's Ministry of Communications to roll out a tracking system based on "the geolocation data from the mobile providers for a specific person" by the end of this week. According to a description in the government decree, information gathered under the tracking system will be used to send texts to those who have come into contact with a coronavirus carrier, and to notify regional authorities so they can put individuals into quarantine.
Note: For more along these lines, see concise summaries of deeply revealing news articles on the coronavirus pandemic and the disappearance of privacy from reliable major media sources.
Like the 9/11 terrorist attacks in the U.S., the coronavirus pandemic is a crisis of such magnitude that it threatens to change the world in which we live, with ramifications for how leaders govern. Governments are locking down cities with the help of the army, mapping population flows via smartphones and jailing or sequestering quarantine breakers using banks of CCTV and facial recognition cameras backed by artificial intelligence. The restrictions are unprecedented in peacetime and made possible only by rapid advances in technology. And while citizens across the globe may be willing to sacrifice civil liberties temporarily, history shows that emergency powers can be hard to relinquish. “A primary concern is that if the public gives governments new surveillance powers to contain Covid-19, then governments will keep these powers after the public health crisis ends,” said Adam Schwartz ... at the non-profit Electronic Frontier Foundation. “Nearly two decades after the 9/11 attacks, the U.S. government still uses many of the surveillance technologies it developed in the immediate wake.” In part, the Chinese Communist Party’s containment measures at the virus epicenter in Wuhan set the tone, with what initially seemed shocking steps to isolate the infected being subsequently adopted in countries with no comparable history of China’s state controls. For Gu Su ... at Nanjing University, China’s political culture “made its people more amenable to the draconian measures.”
Note: For more along these lines, see concise summaries of deeply revealing news articles on the coronavirus and the disappearance of privacy from reliable major media sources.
Mr. Ton-That — an Australian techie and onetime model — did something momentous: He invented a tool that could end your ability to walk down the street anonymously. His tiny company, Clearview AI, devised a groundbreaking facial recognition app. You take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared. The system — whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites — goes far beyond anything ever constructed by the United States government or Silicon Valley giants. Without public scrutiny, more than 600 law enforcement agencies have started using Clearview in the past year. The computer code underlying its app ... includes programming language to pair it with augmented-reality glasses; users would potentially be able to identify every person they saw. The tool could identify activists at a protest or an attractive stranger on the subway, revealing not just their names but where they lived, what they did and whom they knew. And it’s not just law enforcement: Clearview has also licensed the app to at least a handful of companies for security purposes. Because the police upload photos of people they’re trying to identify, Clearview possesses a growing database of individuals who have attracted attention from law enforcement. The company also has the ability to manipulate the results that the police see.
Note: For lots more on this disturbing new technology, read one writer's personal experience with it. For more along these lines, see concise summaries of deeply revealing news articles on the disappearance of privacy from reliable major media sources.
A new generation of technology such as the Beware software being used in Fresno has given local law enforcement officers unprecedented power to peer into the lives of citizens. But the powerful systems also have become flash points for civil libertarians and activists, who say they represent a troubling intrusion on privacy, have been deployed with little public oversight and have potential for abuse or error. “This is something that’s been building since September 11,” said Jennifer Lynch ... at the Electronic Frontier Foundation. “First funding went to the military to develop this technology, and now it has come back to domestic law enforcement. It’s the perfect storm of cheaper and easier-to-use technologies and money from state and federal governments to purchase it.” Perhaps the most controversial and revealing technology is the threat-scoring software Beware. Fresno is one of the first departments in the nation to test the program. As officers respond to calls, Beware automatically runs the address. The searches return the names of residents and scans them against a range of publicly available data to generate a color-coded threat level for each person or address: green, yellow or red. Exactly how Beware calculates threat scores is something that its maker, Intrado, considers a trade secret, so it is unclear how much weight is given to a misdemeanor, felony or threatening comment on Facebook. The fact that only Intrado — not the police or the public — knows how Beware tallies its scores is disconcerting.
Note: Learn more in this informative article. For more along these lines, see concise summaries of deeply revealing news articles on police corruption and the disappearance of privacy from reliable major media sources.
During New York Gov. Andrew Cuomo’s daily coronavirus briefing on Wednesday, the somber grimace that has filled our screens for weeks was briefly replaced by something resembling a smile. The inspiration ... was a video visit from former Google CEO Eric Schmidt, who joined the governor’s briefing to announce that he will be heading up a blue-ribbon commission to reimagine New York state’s post-Covid reality, with an emphasis on permanently integrating technology into every aspect of civic life. Just one day earlier, Cuomo had announced a similar partnership with the Bill and Melinda Gates Foundation to develop “a smarter education system.” It has taken some time to gel, but something resembling a coherent Pandemic Shock Doctrine is beginning to emerge. Call it the “Screen New Deal.” Far more high-tech than anything we have seen during previous disasters, the future that is being rushed into being as the bodies still pile up treats our past weeks of physical isolation not as a painful necessity to save lives, but as a living laboratory for a permanent — and highly profitable — no-touch future. This is a future in which, for the privileged, almost everything is home delivered, either virtually via streaming and cloud technology, or physically via driverless vehicle or drone, then screen “shared” on a mediated platform. It’s a future in which our every move, our every word, our every relationship is trackable, traceable, and data-mineable by unprecedented collaborations between government and tech giants.
Note: For more along these lines, see concise summaries of deeply revealing news articles on the coronavirus pandemic and the disappearance of privacy from reliable major media sources.
On Sept. 7, 2017, the world heard an alarming announcement from credit reporting giant Equifax: In a brazen cyberattack, somebody had stolen sensitive personal information from more than 140 million people, nearly half the population of the U.S. The information included Social Security numbers, driver's license numbers, information from credit disputes and other personal details. Then, something unusual happened. The data disappeared. Completely. CNBC talked to eight experts. All of them agreed that a breach happened, and personal information from 143 million people was stolen. But none of them knows where the data is now. Security experts haven't seen the data used in any of the ways they'd expect in a theft like this — not for impersonating victims, not for accessing other websites, nothing. Most experts familiar with the case now believe that the thieves were working for a foreign government and are using the information not for financial gain, but to try to identify and recruit spies. One former senior intelligence official ... summarized the prevailing expert opinion on how the foreign intelligence agency is using the data. First, he said, the foreign government is probably combining this information with other stolen data, then analyzing it using artificial intelligence or machine learning to figure out who's likely to be — or to become — a spy for the U.S. government. Second, credit reporting data provides compromising information that can be used to turn valuable people into agents of a foreign government.
Note: For more along these lines, see concise summaries of deeply revealing news articles on intelligence agency corruption and the disappearance of privacy.
Even before he became director of the FBI, [J. Edgar] Hoover was conducting secret intelligence operations against U.S. citizens he suspected were anarchists, radical leftists or communists. After a series of anarchist bombings went off across the United States in 1919, Hoover sent five agents to infiltrate the newly formed Communist Party. "From that day forward, he planned a nationwide dragnet of mass arrests to round up subversives, round up communists, round up Russian aliens," [author Tim] Weiner says. On Jan. 1, 1920, Hoover sent out the arrest orders, and at least 6,000 people were arrested and detained throughout the country. "When the dust cleared, maybe 1 in 10 was found guilty of a deportable offense," says Weiner. Hoover, Attorney General A. Mitchell Palmer and Assistant Secretary of the Navy Franklin Delano Roosevelt all came under attack for their role in the raids. Hoover started amassing secret intelligence on "enemies of the United States" – a list that included terrorists, communists, spies – or anyone Hoover or the FBI had deemed subversive. Later on, anti-war protesters and civil rights leaders were added to Hoover's list. "Hoover saw the civil rights movement from the 1950s onward and the anti-war movement from the 1960s onward, as presenting the greatest threats to the stability of the American government since the Civil War," [Weiner] says. "These people were enemies of the state, and in particular Martin Luther King [Jr.] was an enemy of the state."
Note: Read more about the FBI's COINTELPRO program. For more along these lines, see concise summaries of deeply revealing news articles on intelligence agency corruption and the erosion of civil liberties from reliable major media sources.
China's ambition to collect a staggering amount of personal data from everyday citizens is more expansive than previously known. Phone-tracking devices are now everywhere. The police are creating some of the largest DNA databases in the world. And the authorities are building upon facial recognition technology to collect voice prints from the general public. The Times' Visual Investigations team and reporters in Asia spent over a year analyzing more than a hundred thousand government bidding documents. The Chinese government's goal is clear: designing a system to maximize what the state can find out about a person's identity, activities and social connections. In a number of the bidding documents, the police said that they wanted to place cameras where people go to fulfill their common needs – like eating, traveling, shopping and entertainment. The police also wanted to install facial recognition cameras inside private spaces, like residential buildings, karaoke lounges and hotels. Authorities are using phone trackers to link people's digital lives to their physical movements. Devices known as WiFi sniffers and IMSI catchers can glean information from phones in their vicinity. DNA, iris scan samples and voice prints are being collected indiscriminately from people with no connection to crime. The government wants to connect all of these data points to build comprehensive profiles for citizens – which are accessible throughout the government.
Note: For more on this disturbing topic, see the New York Times article "How China is Policing the Future." For more along these lines, see concise summaries of deeply revealing news articles on government corruption and the disappearance of privacy from reliable major media sources.
Even if you have never set foot in China, Hikvision's cameras have likely seen you. By 2017, Hikvision had captured 12 percent of the North American market. Its cameras watched over apartment buildings in New York City, public recreation centers in Philadelphia, and hotels in Los Angeles. Police departments used them to monitor streets in Memphis, Tennessee, and in Lawrence, Massachusetts. London and more than half of Britain's 20 next-largest cities have deployed them. A recent search for the company's cameras, using Shodan, a tool that locates internet-connected devices, yielded nearly 5 million results, including more than 750,000 devices in the United States. Among the policies that Hikvision's products have supported is China's wide-ranging crackdown against the predominantly Muslim Uyghurs and other minority groups in the western province of Xinjiang. Far from being appalled by Hikvision's role in China's atrocities, however, plenty of foreign leaders are intrigued. They see an opportunity to acquire tools that could reduce crime and spur growth. Of course, the authoritarian-leaning among them also see a chance to monitor their domestic challengers and cement their control. The use of military language ... heightens the sense that these tools can easily become weapons. Cameras can be set to "patrol." "Intrusion detection" sounds like a method for defending a bank or a military base. Hikvision's cameras do not check identities. They "capture" faces.
Note: For more, see this Bloomberg article titled "Blacklisted Chinese Tech Found Inside Top Secret UK Lab." For more along these lines, see concise summaries of deeply revealing news articles on government corruption and the disappearance of privacy from reliable major media sources.
A video showing a mobile device snapping infrared images of an iPhone user is circulating around the internet. In the TikTok video shared by user Brie Thomason, a digital camera using an infrared lens is seen filming an iPhone user observing their home screen. As the iPhone user stares at the device, Thomason's digital camera captures the iPhone snapping multiple infrared images every 5-10 seconds. While this discovery may cause some users to panic, Apple claims this is actually just an aspect of the iPhone that allows users to control their Face ID and Animoji (the animated emoji function). According to Apple, this feature first debuted as the iPhone X's most groundbreaking function, even though it is not discernible at first glance; it literally stares you in the face. The company calls this feature the TrueDepth IR camera. This camera, housed in the black notch at the top of the display, includes a number of high-tech components such as a "flood illuminator," infrared (IR) camera, and an infrared emitter. Officials say as an iPhone is used, the infrared emitter projects 30,000 infrared dots in a known pattern when a face is detected, enabling the iPhone X to generate a 3D map of a user's face. According to the team, the TrueDepth IR camera can also do this fast enough to support the creation of 3D motion data as well. So, yes, your iPhone is essentially taking "invisible" photos of you, but not for the reasons you would think.
Note: For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and the disappearance of privacy from reliable major media sources.
Netflix’s brilliant new 90-minute docu-drama, The Social Dilemma ... might be the most important watch of recent years. The film, which debuted at Sundance Film Festival in January, takes a premise that’s unlikely to set the world alight ... i.e. that Facebook, Twitter, Instagram et al aren’t exactly creating a utopia. Its masterstroke is in recruiting the very Silicon Valley insiders that built these platforms to explain their terrifying pitfalls – which they’ve realised belatedly. You don’t get a much clearer statement of social media’s dangers than an ex-Facebook executive’s claim that: “In the shortest time horizon I’m most worried about civil war.” The commonly held belief that social media companies sell users’ data is quickly cast aside – the data is actually used to create a sophisticated psychological profile of you. What they’re selling is their ability to manipulate you, or as one interviewee puts it: “It’s the gradual, slight, imperceptible change in your own behaviour and perception. It’s the only thing for them to make money from: changing what you do, how you think, who you are.” Despite it being public knowledge that Vote Leave and Trump’s 2016 election campaign harvested voters’ Facebook data on a gigantic scale, The Social Dilemma still manages to find fresh and vital tales of how these platforms destabilise modern politics. Russia’s Facebook hack to influence the 2016 US election? “The Russians didn’t hack Facebook. They used the tools that Facebook made for legitimate advertisers,” laments one of the company’s ex-investors.
Note: For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and media manipulation from reliable sources.
A secret British spy unit created to mount cyber attacks on Britain’s enemies has waged war on the hacktivists of Anonymous and LulzSec, according to documents taken from the National Security Agency by Edward Snowden and obtained by NBC News. The blunt instrument the spy unit used to target hackers, however, also interrupted the web communications of political dissidents who did not engage in any illegal hacking. It may also have shut down websites with no connection to Anonymous. A division of Government Communications Headquarters (GCHQ), the British counterpart of the NSA, shut down communications among Anonymous hacktivists by launching a distributed denial of service (DDoS) attack – the same technique hackers use to take down bank, retail and government websites – making the British government the first Western government known to have conducted such an attack. The documents ... show that the unit known as the Joint Threat Research Intelligence Group, or JTRIG, boasted of using the DDoS attack – which it dubbed Rolling Thunder – and other techniques to scare away 80 percent of the users of Anonymous internet chat rooms. Among the methods listed in the document were jamming phones, computers and email accounts and masquerading as an enemy in a "false flag" operation. A British hacktivist known as T-Flow, who was prosecuted for hacking, [said] no evidence of how his identity was discovered ever appeared in court documents.
Note: For more along these lines, see concise summaries of deeply revealing news articles on intelligence agency corruption from reliable major media sources.
In the middle of night, students at Utah's Kings Peak high school are wide awake – taking mandatory exams. Their every movement is captured on their computer's webcam and scrutinized by Proctorio, a surveillance company that uses artificial intelligence. Proctorio software conducts "desk scans" in an effort to catch test-takers who turn to "unauthorized resources", "face detection" technology to ensure there isn't anybody else in the room to help and "gaze detection" to spot anybody "looking away from the screen for an extended period of time". Proctorio then provides visual and audio records to Kings Peak teachers with the algorithm calling particular attention to pupils whose behaviors during the test flagged them as possibly engaging in academic dishonesty. Such remote proctoring tools grew exponentially during the pandemic, particularly at US colleges and universities. K-12 schools' use of remote proctoring tools, however, has largely gone under the radar. K-12 schools nationwide – and online-only programs in particular – continue to use tools from digital proctoring companies on students ... as young as kindergarten-aged. Civil rights activists, who contend AI proctoring tools fail to work as intended, harbor biases and run afoul of students' constitutional protections, said the privacy and security concerns are particularly salient for young children and teens, who may not be fully aware of the monitoring or its implications. One 2021 study found that Proctorio failed to detect test-takers who had been instructed to cheat. Researchers concluded the software was "best compared to taking a placebo: it has some positive influence, not because it works but because people believe that it works, or that it might work."
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and the disappearance of privacy from reliable major media sources.
Police in the U.S. recently combined two existing dystopian technologies in a brand new way to violate civil liberties. A police force in California recently employed the new practice of taking a DNA sample from a crime scene, running this through a service provided by US company Parabon NanoLabs that guesses what the perpetrator's face looked like, and plugging this rendered image into face recognition software to build a suspect list. Parabon NanoLabs ... alleges it can create an image of the suspect's face from their DNA. Parabon NanoLabs claims to have built this system by training machine learning models on the DNA data of thousands of volunteers with 3D scans of their faces. The process is yet to be independently audited, and scientists have affirmed that predicting face shapes – particularly from DNA samples – is not possible. But this has not stopped law enforcement officers from seeking to use it, or from running these fabricated images through face recognition software. Simply put: police are using DNA to create a hypothetical and not at all accurate face, then using that face as a clue on which to base investigations into crimes. This ... threatens the rights, freedom, or even the life of whoever is unlucky enough to look a little bit like that artificial face. These technologies, and their reckless use by police forces, are an inherent threat to our individual privacy, free expression, information security, and social justice.
Note: Law enforcement officers in many U.S. states are not required to reveal that they used face recognition technology to identify suspects. For more along these lines, see concise summaries of important news articles on police corruption and the erosion of civil liberties from reliable major media sources.
In 2015, the journalist Steven Levy interviewed Elon Musk and Sam Altman, two founders of OpenAI. A galaxy of Silicon Valley heavyweights, fearful of the potential consequences of AI, created the company as a non-profit-making charitable trust with the aim of developing technology in an ethical fashion to benefit "humanity as a whole". Musk, who stepped down from OpenAI's board six years ago ... is now suing his former company for breach of contract for having put profits ahead of the public good and failing to develop AI "for the benefit of humanity". In 2019, OpenAI created a for-profit subsidiary to raise money from investors, notably Microsoft. When it released ChatGPT in 2022, the model's inner workings were kept hidden. It was necessary to be less open, Ilya Sutskever, another of OpenAI's founders and at the time the company's chief scientist, claimed in response to criticism, to prevent those with malevolent intent from using it "to cause a great deal of harm". Fear of the technology has become the cover for creating a shield from scrutiny. The problems that AI poses are not existential, but social. From algorithmic bias to mass surveillance, from disinformation and censorship to copyright theft, our concern should not be that machines may one day exercise power over humans but that they already work in ways that reinforce inequalities and injustices, providing tools by which those in power can consolidate their authority.
Note: Read more about the dangers of AI in the hands of the powerful. For more along these lines, see concise summaries of deeply revealing news articles on media manipulation and the disappearance of privacy from reliable sources.
In 2018 ... the government was buying up reams of consumer data – information scraped from cellphones, social media profiles, internet ad exchanges and other open sources – and deploying it for often-clandestine purposes like law enforcement and national security in the U.S. and abroad. The places you go, the websites you visit, the opinions you post – all collected and legally sold to federal agencies. The data is used in a wide variety of law enforcement, public safety, military and intelligence missions, depending on which agency is doing the acquiring. We've seen it used for everything from rounding up undocumented immigrants to detecting border tunnels. We've also seen data used for manhunting or identifying specific people in the vicinity of crimes or known criminal activity. And generally speaking, it's often used to identify patterns. It's often used to look for outliers or things that don't belong. So say you have a military facility, you could look for devices that appear suspicious that are lingering near that facility. Did you know that your car tires actually broadcast a wireless signal to the central computer of your car, telling it what the tire pressure is? It's there for perfectly legitimate safety reasons. Governments have ... figured out that the car tire is a proxy for the car. And if you just put little sensors somewhere or you run the right code on devices that you scatter around the world, then you can kind of track people with car tires.
Note: For more along these lines, see concise summaries of deeply revealing news articles on government corruption and the disappearance of privacy from reliable major media sources.
The annual "Trouble in Toyland" report, produced by the U.S. Public Interest Research Group (PIRG) and released before the holiday season, historically has focused on safety hazards found in traditional children's toys. According to the 38th annual "Trouble in Toyland" report, released in mid-November, "Toys that spy on children are a growing threat." The threats "stem from toys with microphones, cameras and trackers, as well as recalled toys, water beads, counterfeits and Meta Quest VR headsets." "The riskiest features of smart toys are those that can collect information, especially without our knowledge or used in a way that parents didn't agree to," said Teresa Murray, Consumer Watchdog at the U.S. PIRG Education Fund and author of the report. "It's chilling to learn what some of these toys can do," Murray said. Smart toys include "stuffed animals that listen and talk, devices that learn their habits, games with online accounts, and smart speakers, watches, play kitchens and remote cars that connect to apps or other technology," according to PIRG. Smart toys can pose the risk of data breaches, hacking, potential violations of children's privacy laws such as the Children's Online Privacy Protection Act of 1998 (COPPA), and exposure to "inappropriate or harmful material without proper filtering and parental controls." According to PIRG, "We don't know with certainty when our child plays with a connected toy that the company isn't recording us or collecting our data."
Note: A 2015 New York Times article called smart objects a "trainwreck in privacy and security." For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and the disappearance of privacy from reliable major media sources.
Google maintains one of the world's most comprehensive repositories of location information. Drawing from phones' GPS coordinates, plus connections to Wi-Fi networks and cellular towers, it can often estimate a person's whereabouts to within several feet. It gathers this information in part to sell advertising, but police routinely dip into the data to further their investigations. The use of search data is less common, but that, too, has made its way into police stations throughout the country. Traditionally, American law enforcement obtains a warrant to search the home or belongings of a specific person, in keeping with a constitutional ban on unreasonable searches and seizures. Warrants for Google's location and search data are, in some ways, the inverse of that process, says Michael Price, the litigation director for the National Association of Criminal Defense Lawyers' Fourth Amendment Center. Rather than naming a suspect, law enforcement identifies basic parameters–a set of geographic coordinates or search terms–and asks Google to provide hits, essentially generating a list of leads. By their very nature, these Google warrants often return information on people who haven't been suspected of a crime. In 2018 a man in Arizona was wrongly arrested for murder based on Google location data. Google says it received a record 60,472 search warrants in the US last year, more than double the number from 2019. The company provides at least some information in about 80% of cases.
Note: For more along these lines, see concise summaries of deeply revealing news articles on police corruption and the disappearance of privacy from reliable major media sources.
Palantir's founding team, led by investor Peter Thiel and Alex Karp, wanted to create a company capable of using new data integration and data analytics technology – some of it developed to fight online payments fraud – to solve problems of law enforcement, national security, military tactics, and warfare. Palantir, founded in 2003, developed its tools fighting terrorism after September 11, and has done extensive work for government agencies and corporations, though much of its work is secret. Palantir's MetaConstellation platform allows the user to task ... satellites to answer a specific query. Imagine you want to know what is happening in a certain location and time in the Arctic. Click on a button and MetaConstellation will schedule the right combination of satellites to survey the designated area. The platform is able to integrate data from multiple and disparate sources – think satellites, drones, and open-source intelligence – while allowing a new level of decentralised decision-making. Just as a deep learning algorithm knows how to recognise a picture of a dog after some hours of supervised learning, the Palantir algorithms can become extraordinarily apt at identifying an enemy command and control centre. Alex Karp, Palantir's CEO, has argued that "the power of advanced algorithmic warfare systems is now so great that it equates to having tactical nuclear weapons against an adversary with only conventional ones."
Note: For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption and the disappearance of privacy from reliable major media sources.