AI, surveillance, spying, databasing, secrecy, censorship, precrime, expanding... you're 0wn3d mate.

https://www.nytimes.com/2019/09/17/technology/facebook-hate-speech-extremism...
https://newsroom.fb.com/news/2019/09/combating-hate-and-extremism/
https://gifct.org/press/actions-address-abuse-technology-spread-terrorist-an...

Facebook on Tuesday announced a series of changes to limit hate speech and extremism on the social network, expanding its definition of terrorist organizations and planning to deploy artificial intelligence to better spot and block live videos of shooters. The company is also expanding a program that redirects users searching for extremism to resources intended to help them leave hate groups behind. The New York Times reports:

The announcement came the day before a hearing on Capitol Hill on how Facebook, Google and Twitter handle violent content. Lawmakers are expected to ask executives how they are handling posts from extremists. In its announcement post, Facebook said the Christchurch tragedy "strongly" influenced its updates, and the company said it had recently developed an industry plan with Microsoft, Twitter, Google and Amazon to address how technology is used to spread terrorist content.

Facebook said that it had mostly focused on identifying organizations like separatists, Islamist militants and white supremacists, and that it would now consider all people and organizations that proclaim or are engaged in violence leading to real-world harm. The team leading its efforts to counter extremism on its platform has grown to 350 people, Facebook said, and includes experts in law enforcement, national security and counterterrorism, as well as academics studying radicalization.

To detect more content relating to real-world harm, Facebook said it was updating its artificial intelligence to better catch first-person shooting videos. The company said it was working with American and British law enforcement officials to obtain camera footage from their firearms training programs to help its A.I. learn what real, first-person violent events look like.

https://apnews.com/d1f77d3dd2684d7e8d7d47cbd192d8dd

A growing number of countries are following China's lead in deploying artificial intelligence to track citizens. The Carnegie Endowment for International Peace says at least 75 countries are actively using AI tools such as facial recognition for surveillance. The index of countries where some form of AI surveillance is used includes liberal democracies such as the United States and France as well as more autocratic regimes.

Relying on a survey of public records and media reports, the report says Chinese tech companies led by Huawei and Hikvision are supplying much of the AI surveillance technology to countries around the world. Other companies, such as Japan's NEC and the U.S.-based IBM, Palantir and Cisco, are also major international providers of AI surveillance tools. Hikvision declined comment Tuesday; the other companies mentioned in the report didn't immediately return requests for comment.

The report encompasses a broad range of AI tools that have some public safety component, and the group's index doesn't distinguish between legitimate public safety tools and unlawful or harmful uses such as spying on political opponents. "I hope citizens will ask tougher questions about how this type of technology is used and what type of impacts it will have," said the report's author, Steven Feldstein, a Carnegie Endowment fellow and associate professor at Boise State University.
Many of the projects cited in Feldstein's report are "smart city" systems in which a municipal government installs an array of sensors, cameras and other internet-connected devices to gather information and communicate with one another.