Privacy News Articles
Excerpts of key news articles on privacy and mass surveillance
Below are key excerpts of revealing news articles on privacy and mass surveillance issues from reliable news media sources. If any link fails to function, a paywall blocks full access, or the article is no longer available, try these digital tools.
Note: Explore our full index to revealing excerpts of key major media news articles on dozens of engaging topics. And read excerpts from 20 of the most revealing news articles ever published.
I participated in an online forum called US CBDC–A Disaster in the making? We had a very productive discussion about the policy aspect of central bank digital currencies (CBDCs). I believe that the Fed should not launch a CBDC. Ever. And I think that Congress should amend the Federal Reserve Act, just to be on the safe side. I want to distinguish between a wholesale CBDC and a retail CBDC. With a wholesale CBDC, banks can electronically transact with each other using a liability of the central bank. That is essentially what banks do now. But retail CBDCs are another animal altogether. Retail CBDCs allow members of the general public to make electronic payments of all kinds with a liability of the central bank. This feature–making electronic transactions using a liability of the Federal Reserve–is central to why Congress should make sure that the Fed never issues a retail CBDC. The problem is that the federal government, not privately owned commercial banks, would be responsible for issuing deposits. And while this fact might seem like a feature instead of a bug, it's a major problem for anything that resembles a free society. The problem is that there is no limit to the level of control that the government could exert over people if money is purely electronic and provided directly by the government. A CBDC would give federal officials full control over the money going into–and coming out of–every person's account. This level of government control is not compatible with economic or political freedom.
Note: The above was written by Norbert Michel, Vice President and Director of the Cato Institute's Center for Monetary and Financial Alternatives. For more along these lines, see concise summaries of deeply revealing news articles on financial system corruption from reliable major media sources.
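To make the architectural point concrete, here is a minimal Python sketch, purely hypothetical and not modeled on any actual Fed proposal, of a retail CBDC as a single central-bank ledger. Because every account is a direct liability of the issuer, one administrative call can freeze an account or block a payment, which is the control the excerpt warns about. All class, method, and account names below are invented for illustration.

# Hypothetical sketch (not any real CBDC design): a single central-bank ledger
# where every retail account is a direct liability of the issuer, so the issuer
# can block or freeze any payment programmatically.

class CentralBankLedger:
    def __init__(self):
        self.balances = {}   # account_id -> balance in cents
        self.frozen = set()  # account_ids the issuer has frozen

    def open_account(self, account_id, initial_balance=0):
        self.balances[account_id] = initial_balance

    def freeze(self, account_id):
        # A single administrative call stops all spending from this account.
        self.frozen.add(account_id)

    def transfer(self, sender, receiver, amount):
        if sender in self.frozen or receiver in self.frozen:
            raise PermissionError("payment blocked by issuer policy")
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient funds")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

ledger = CentralBankLedger()
ledger.open_account("alice", 10_000)
ledger.open_account("bob")
ledger.transfer("alice", "bob", 2_500)   # succeeds
ledger.freeze("alice")
# ledger.transfer("alice", "bob", 100)   # would now raise PermissionError

The contrast with the current system is that commercial-bank deposits spread this power across many private issuers, whereas a retail CBDC concentrates it in one ledger.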
The digital Covid vaccination certification, or "passport," is a mobile app that instantaneously affirms the vaccinated status, Covid test results, birth date, gender, and/or other identifiers of its holder. The information is usually mosaicked in a QR code, read by a proprietary scanner, and linked to a government registry. Led by New York, California, and Louisiana, as many as 30 states are rolling them out. The Biden administration announced last spring that it would wrangle them under national standards but so far it hasn't. Internationally, the EU and a growing number of countries are adopting them, from repressive regimes like Bahrain to democracies like Denmark. Twenty U.S. states have banned the passes, and hashtags like #NoVaccinePassports are proliferating on both sides of the Atlantic. "Spoiler alert," tweeted British DJ ... Lange. "They are not planning on removing vax passports once introduced. This is just the first step to get you conditioned to accepting government restrictions in your daily life via your mobile phone. This digital ID is going to expand to all aspects of your life." Evidence supports the detractors' suspicions. Every government introducing a vaccine certification vows that their use is voluntary and no personal information will be held beyond its necessity. But governments are far from unanimous even on such basics as ... how long and by whom our intimate information will be held, owned, or overseen. New York, for one, is not expecting to mothball the technology when Covid wanes.
Note: For more along these lines, see concise summaries of deeply revealing news articles on coronavirus vaccines and the disappearance of privacy from reliable major media sources.
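The general pattern described in the excerpt, identity attributes serialized, signed by an issuing registry, and rendered as a QR code, can be sketched in a few lines of Python. This is not the actual SMART Health Card or EU Digital COVID Certificate format; the payload fields, signing key, and helper function below are hypothetical.

# Minimal sketch of the general pattern only: personal identifiers are
# serialized, signed by the issuer, and the resulting string is what the
# QR code carries.
import base64, hashlib, hmac, json

ISSUER_SECRET = b"registry-signing-key"   # hypothetical issuer key

def make_pass_payload(record):
    body = json.dumps(record, separators=(",", ":")).encode()
    sig = hmac.new(ISSUER_SECRET, body, hashlib.sha256).digest()
    # The signature is the final 32 bytes; a scanner would verify it
    # against the issuing registry before trusting the contents.
    return base64.urlsafe_b64encode(body + sig).decode()

payload = make_pass_payload({
    "name": "Jane Doe",
    "dob": "1990-01-01",
    "vaccinated": True,
    "test_result": None,
})
# `payload` is the string a QR library (e.g. qrcode.make(payload)) would render.
print(payload[:40], "...")

Even this toy payload shows why privacy advocates object: the birth date and health status travel inside the code itself, readable by whoever controls the scanner.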
The federal government has ramped up security and police-related spending in response to the coronavirus pandemic, including issuing contracts for riot gear, disclosures show. The purchase orders include requests for disposable cuffs, gas masks, ballistic helmets, and riot gloves, along with law enforcement protective equipment for federal police assigned to protect Veterans Affairs facilities. The orders were expedited under a special authorization “in response to Covid-19 outbreak.” “Between 2005 and 2014, VA police departments acquired millions of dollars’ worth of body armor, chemical agents, night vision equipment, and other weapons and tactical gear,” The Intercept reported last year. But an Inspector General report in December 2018 found there was little oversight. The CARES Act, the $2.2 trillion stimulus legislation passed in late March, also authorized $850 million for the Coronavirus Emergency Supplemental Funding program, a federal grant program to prepare law enforcement, correctional officers, and police for the crisis. The funds have been dispensed to local governments to pay for overtime costs, purchase protective supplies, and defray expenses related to emergency policing. The grants may also be used for the purchase of unmanned aerial aircraft and video security cameras for law enforcement. Motorola Solutions, a major supplier of police technology, has encouraged local governments to use the new money to buy a range of command center software and video analytics systems.
Note: For more along these lines, see concise summaries of deeply revealing news articles on the coronavirus and the disappearance of privacy from reliable major media sources.
India has just 144 police officers for every 100,000 citizens. In recent years, authorities have turned to facial recognition technology to make up for the shortfall. India's government now ... wants to construct one of the world's largest facial recognition systems. The project envisions a future in which police from across the country's 29 states and seven union territories would have access to a single, centralized database. The daunting scope of the proposed network is laid out in a detailed 172-page document published by the National Crime Records Bureau, which requests bids from companies to build the project. The project would match images from the country's growing network of CCTV cameras against a database encompassing mug shots of criminals, passport photos and images collected by [government] agencies. It would also recognize faces on closed-circuit cameras and "generate alerts if a blacklist match is found." Security forces would be equipped with hand-held mobile devices enabling them to capture a face in the field and search it instantly against the national database, through a dedicated app. For privacy advocates, this is worrying. "India does not have a data protection law," says [Apar] Gupta [of the Internet Freedom Foundation]. "It will essentially be devoid of safeguards." It might even be linked up to Aadhaar, India's vast biometric database, which contains the personal details of 1.2 billion Indian citizens, enabling India to set up "a total, permanent surveillance state," he adds.
Note: Read an excellent article by The Civil Liberties Union for Europe about the seven biggest privacy concerns raised by facial recognition technology. For more along these lines, see concise summaries of deeply revealing news articles on government corruption and the disappearance of privacy from reliable major media sources.
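The NCRB tender does not publish an implementation, but the generic pattern such systems rely on, converting each face to a numeric embedding and flagging gallery entries within a distance threshold, can be sketched with the open-source face_recognition library. The file names, gallery, and 0.6 threshold below are illustrative assumptions, not details from the Indian system.

# Illustrative sketch only: embed each face, then flag database entries whose
# embedding is close enough to the probe image.
import face_recognition

# Hypothetical "blacklist" gallery: file paths to enrolled mug shots,
# each assumed to contain exactly one detectable face.
gallery_paths = ["mugshot_001.jpg", "mugshot_002.jpg"]
gallery_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file(p))[0]
    for p in gallery_paths
]

# A frame captured from a CCTV feed or a hand-held device in the field.
probe = face_recognition.load_image_file("cctv_frame.jpg")
for probe_encoding in face_recognition.face_encodings(probe):
    distances = face_recognition.face_distance(gallery_encodings, probe_encoding)
    for path, dist in zip(gallery_paths, distances):
        if dist < 0.6:  # a common default threshold; real systems tune this,
                        # trading false alerts against missed matches
            print(f"ALERT: possible match with {path} (distance {dist:.2f})")

The scale concern follows directly from the sketch: once the gallery is a national database rather than two mug shots, every camera frame becomes a query against the whole population.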
Apple contractors regularly hear confidential medical information, drug deals, and recordings of couples having sex, as part of their job providing quality control, or “grading”, the company’s Siri voice assistant. Although Apple does not explicitly disclose it in its consumer-facing privacy documentation, a small proportion of Siri recordings are passed on to contractors working for the company around the world. Apple says the data “is used to help Siri and dictation ... understand you better and recognise what you say”. But the company does not explicitly state that that work is undertaken by humans who listen to the pseudonymised recordings. A whistleblower working for the firm, who asked to remain anonymous due to fears over their job, expressed concerns about this lack of disclosure, particularly given the frequency with which accidental activations pick up extremely sensitive personal information. The whistleblower said: “There have been countless instances of recordings featuring private discussions. These recordings are accompanied by user data showing location, contact details, and app data.” Although Siri is included on most Apple devices, the contractor highlighted the Apple Watch and the company’s HomePod smart speaker as the most frequent sources of mistaken recordings. As well as the discomfort they felt listening to such private information, the contractor said they were motivated to go public about their job because of their fears that such information could be misused.
Note: For more along these lines, see concise summaries of deeply revealing news articles on the disappearance of privacy from reliable major media sources.
Apple and Google on Wednesday released long-awaited smartphone technology to automatically notify people if they might have been exposed to the coronavirus. The companies said 22 countries and several U.S. states are already planning to build voluntary phone apps using their software. It relies on Bluetooth wireless technology to detect when someone who downloaded the app has spent time near another app user who later tests positive for the virus. Many governments have already tried, mostly unsuccessfully, to roll out their own phone apps to fight the spread of the COVID-19 pandemic. Many of those apps have encountered technical problems on Apple and Android phones and haven't been widely adopted. They often use GPS to track people's location, which Apple and Google are banning from their new tool because of privacy and accuracy concerns. Public health agencies from Germany to the states of Alabama and South Carolina have been waiting to use the Apple-Google model, while other governments have said the tech giants' privacy restrictions will be a hindrance because public health workers will have no access to the data. The companies said they're not trying to replace contact tracing, a pillar of infection control that involves trained public health workers reaching out to people who may have been exposed to an infected person. But they said their automatic "exposure notification" system can augment that process and slow the spread of COVID-19.
Note: Watch an excellent video explanation of the dangers of contact tracing by a woman who applied to do this work. She shows how claims that it is voluntary are far from the truth. For more along these lines, see concise summaries of deeply revealing news articles on the coronavirus and the disappearance of privacy from reliable major media sources.
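A simplified sketch of the Apple-Google exposure-notification flow described above: phones broadcast rotating identifiers over Bluetooth, remember what they hear, and later check locally against identifiers published by users who test positive. The real protocol derives its identifiers cryptographically from daily keys; the random tokens and Phone class here are a toy stand-in.

# Simplified conceptual sketch of the exposure-notification idea, not the
# actual Apple/Google protocol.
import secrets

class Phone:
    def __init__(self):
        self.my_tokens = []        # identifiers this phone has broadcast
        self.heard_tokens = set()  # identifiers received over Bluetooth

    def broadcast(self):
        token = secrets.token_hex(16)   # rotating random identifier
        self.my_tokens.append(token)
        return token

    def receive(self, token):
        self.heard_tokens.add(token)

    def check_exposure(self, published_positive_tokens):
        # Matching happens on the device; no location data is involved.
        return bool(self.heard_tokens & set(published_positive_tokens))

alice, bob = Phone(), Phone()
bob.receive(alice.broadcast())          # the two phones spend time nearby
# Alice later tests positive and uploads her tokens to the health authority:
published = alice.my_tokens
print(bob.check_exposure(published))    # True -> Bob gets an exposure alert

This local matching, rather than GPS tracking, is the privacy restriction that some public health agencies said would hinder their access to data.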
The CIA's chief technology officer outlined the agency's endless appetite for data in a far-ranging speech. Ira "Gus" Hunt said that the world is increasingly awash in information from text messages, tweets, and videos - and that the agency wants all of it. "The value of any piece of information is only known when you can connect it with something else that arrives at a future point in time," Hunt said. "Since you can't connect dots you don't have, it drives us into a mode of, we fundamentally try to collect everything and hang on to it forever." Hunt's comments come two days after Federal Computer Week reported that the CIA has committed to a massive, $600 million, 10-year deal with Amazon for cloud computing services. "It is really very nearly within our grasp to be able to compute on all human generated information," Hunt said. After that mark is reached, Hunt said, the agency would also like to be able to save and analyze all of the digital breadcrumbs people don't even know they are creating. "You're already a walking sensor platform," he said, noting that mobiles, smartphones and iPads come with cameras, accelerometers, light detectors and geolocation capabilities. "Somebody can know where you are at all times, because you carry a mobile device, even if that mobile device is turned off," he said. Hunt also spoke of mobile apps that will be able to control pacemakers - even involuntarily - and joked about a "dystopian" future. Hunt's speech barely touched on privacy concerns.
Note: The Internet of Things makes mass surveillance even easier. For more along these lines, see concise summaries of deeply revealing news articles on intelligence agency corruption and the disappearance of privacy from reliable major media sources.
Air fryers that gather your personal data and audio speakers "stuffed with trackers" are among examples of smart devices engaged in "excessive" surveillance, according to the consumer group Which? The organisation tested three air fryers ... each of which requested permission to record audio on the user's phone through a connected app. Which? found the app provided by the company Xiaomi connected to trackers for Facebook and a TikTok ad network. The Xiaomi fryer and another by Aigostar sent people's personal data to servers in China. Its tests also examined smartwatches that it said required "risky" phone permissions – in other words giving invasive access to the consumer's phone through location tracking, audio recording and accessing stored files. Which? found digital speakers that were preloaded with trackers for Facebook, Google and a digital marketing company called Urbanairship. The Information Commissioner's Office (ICO) said the latest consumer tests "show that many products not only fail to meet our expectations for data protection but also consumer expectations". A growing number of devices in homes are connected to the internet, including camera-enabled doorbells and smart TVs. Last Black Friday, the ICO encouraged consumers to check if smart products they planned to buy had a physical switch to prevent the gathering of voice data.
Note: A 2015 New York Times article warned that smart devices were a "train wreck in privacy and security." For more along these lines, read about how automakers collect intimate information that includes biometric data, genetic information, health diagnosis data, and even information on people's "sexual activities" when drivers pair their smartphones to their vehicles.
Big tech companies have spent vast sums of money honing algorithms that gather their users' data and scour it for patterns. One result has been a boom in precision-targeted online advertisements. Another is a practice some experts call "algorithmic personalized pricing," which uses artificial intelligence to tailor prices to individual consumers. The Federal Trade Commission uses a more Orwellian term for this: "surveillance pricing." In July the FTC sent information-seeking orders to eight companies that "have publicly touted their use of AI and machine learning to engage in data-driven targeting," says the agency's chief technologist Stephanie Nguyen. Consumer surveillance extends beyond online shopping. "Companies are investing in infrastructure to monitor customers in real time in brick-and-mortar stores," [Nguyen] says. Some price tags, for example, have become digitized, designed to be updated automatically in response to factors such as expiration dates and customer demand. Retail giant Walmart–which is not being probed by the FTC–says its new digital price tags can be remotely updated within minutes. When personalized pricing is applied to home mortgages, lower-income people tend to pay more–and algorithms can sometimes make things even worse by hiking up interest rates based on an inadvertently discriminatory automated estimate of a borrower's risk rating.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and corporate corruption from reliable major media sources.
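As a crude illustration of what "algorithmic personalized pricing" means in practice, the toy Python function below nudges a base price up or down from behavioral signals. The features, weights, and thresholds are hypothetical, a deliberately simple stand-in for the machine-learning models the FTC is examining.

# Toy illustration of personalized pricing; all signals and weights are invented.
BASE_PRICE = 100.00

def personalized_price(profile):
    """Nudge the price up or down based on collected behavioral signals."""
    price = BASE_PRICE
    if profile.get("device") == "high_end_phone":
        price *= 1.10          # inferred higher willingness to pay
    if profile.get("recent_searches_for_item", 0) > 3:
        price *= 1.05          # repeated interest -> less price-sensitive
    if profile.get("abandoned_cart"):
        price *= 0.95          # discount to close the sale
    return round(price, 2)

print(personalized_price({"device": "high_end_phone",
                          "recent_searches_for_item": 5}))   # 115.5
print(personalized_price({"abandoned_cart": True}))          # 95.0

The mortgage example in the excerpt is the same mechanism with higher stakes: when the input features correlate with income or race, the "personalized" adjustment becomes a discriminatory one.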
A US federal appeals court ruled last week that so-called geofence warrants violate the Fourth Amendment's protections against unreasonable searches and seizures. Geofence warrants allow police to demand that companies such as Google turn over a list of every device that appeared at a certain location at a certain time. The US Fifth Circuit Court of Appeals ruled on August 9 that geofence warrants are "categorically prohibited by the Fourth Amendment" because "they never include a specific user to be identified, only a temporal and geographic location where any given user may turn up post-search." In other words, they're the unconstitutional fishing expedition that privacy and civil liberties advocates have long asserted they are. Google, which is the most frequent target of geofence warrants, vowed late last year that it was changing how it stores location data in such a way that geofence warrants may no longer return the data they once did. Legally, however, the issue is far from settled: The Fifth Circuit decision applies only to law enforcement activity in Louisiana, Mississippi, and Texas. Plus, because of weak US privacy laws, police can simply purchase the data and skip the pesky warrant process altogether. As for the appellants in the case heard by the Fifth Circuit, well, they're no better off: The court found that the police used the geofence warrant in "good faith" when it was issued in 2018, so they can still use the evidence they obtained.
Note: Read more about the rise of geofence warrants and their threat to privacy rights. For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.
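In data terms, a geofence warrant amounts to the kind of query sketched below: select every device observed inside a geographic bounding box during a time window, with no named suspect. The records, coordinates, and schema are hypothetical; Google's internal location systems are not public.

# Sketch of what a geofence query amounts to, using invented records.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class LocationRecord:
    device_id: str
    lat: float
    lon: float
    timestamp: datetime

def geofence_query(records, lat_min, lat_max, lon_min, lon_max, start, end):
    # Return every device seen inside the box during the window.
    return {
        r.device_id
        for r in records
        if lat_min <= r.lat <= lat_max
        and lon_min <= r.lon <= lon_max
        and start <= r.timestamp <= end
    }

records = [
    LocationRecord("device_A", 29.951, -90.072, datetime(2018, 5, 1, 14, 5)),
    LocationRecord("device_B", 29.950, -90.071, datetime(2018, 5, 1, 14, 20)),
    LocationRecord("device_C", 30.100, -90.500, datetime(2018, 5, 1, 14, 10)),
]
# Every device in the box is returned, not a named suspect:
print(geofence_query(records, 29.94, 29.96, -90.08, -90.06,
                     datetime(2018, 5, 1, 14, 0), datetime(2018, 5, 1, 15, 0)))

The Fifth Circuit's objection maps onto the code: the query's inputs are a place and a time, never "a specific user to be identified."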
Meredith Whittaker practises what she preaches. As the president of the Signal Foundation, she's a strident voice backing privacy for all. In 2018, she burst into public view as one of the organisers of the Google walkouts, mobilising 20,000 employees of the search giant in a twin protest over the company's support for state surveillance and failings over sexual misconduct. The Signal Foundation ... exists to "protect free expression and enable secure global communication through open source privacy technology". The criticisms of encrypted communications are as old as the technology: allowing anyone to speak without the state being able to tap into their conversations is a godsend for criminals, terrorists and paedophiles around the world. But, Whittaker argues, few of Signal's loudest critics seem to be consistent in what they care about. "If we really cared about helping children, why are the UK's schools crumbling? Why was social services funded at only 7% of the amount that was suggested to fully resource the agencies that are on the frontlines of stopping abuse? Signal either works for everyone or it works for no one. Every military in the world uses Signal, every politician I'm aware of uses Signal. Every CEO I know uses Signal because anyone who has anything truly confidential to communicate recognises that storing that on a Meta database or in the clear on some Google server is not good practice."
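Signal's actual protocol layers X3DH key agreement and the Double Ratchet on top of this, but the basic point Whittaker makes, that a relay server or subpoenaed database only ever holds ciphertext, can be shown with a minimal PyNaCl sketch. The keys and message below are invented for illustration.

# Not Signal's protocol, just the underlying idea of end-to-end encryption:
# the intermediary relaying the message sees only ciphertext.
from nacl.public import PrivateKey, Box

alice_key, bob_key = PrivateKey.generate(), PrivateKey.generate()

# Alice encrypts to Bob's public key; only Bob's private key can decrypt.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# This is all a relay server (or a subpoenaed database) would hold:
print(ciphertext.hex()[:48], "...")

receiving_box = Box(bob_key, alice_key.public_key)
print(receiving_box.decrypt(ciphertext))   # b'meet at noon'

Storing the plaintext "in the clear on some Google server," in Whittaker's phrase, would skip this step entirely, which is exactly the practice she argues confidential communication cannot tolerate.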