Facebook, Google and Twitter are stepping up efforts to combat online propaganda and recruiting by militants and extremists, but they are doing so quietly to avoid the perception that they are helping the authorities police the Internet.
On Friday, Facebook said it took down a profile that the company believed belonged to San Bernardino shooter Tashfeen Malik, who, with her husband, is accused of killing 14 people in a mass shooting that the FBI is investigating as an "act of terrorism," Reuters reports.
Just a day earlier, the French prime minister and European Commission officials met separately with Facebook, Google, Twitter and other companies to demand faster action on what the commission called "online terrorism incitement and hate speech".
The Internet companies described their policies as straightforward: They ban certain types of content in accordance with their own terms of service and require court orders to remove or block anything beyond that.
Anyone can report, or flag, content for review and possible removal.
But the truth is more subtle and complicated. According to former employees, Facebook, Google and Twitter all worry that if they are open about their true level of cooperation with Western law enforcement agencies, they will face endless demands for similar action from other countries.
They also fret about being perceived by consumers as tools of the government. Worse, if the companies spell out exactly how their screening works, they risk teaching technologically savvy militants how to beat their systems.
"If they knew what magic sauce went into pushing content into the newsfeed, spammers or whoever would take advantage of that," said a security expert who had worked at both Facebook and Twitter, who asked not to be identified because of the sensitivity of the issue.
One of the most significant yet least understood aspects of the propaganda issue is the range of ways in which social media companies deal with government officials.
Facebook, Google and Twitter say they do not treat government complaints differently from citizen complaints, unless the government obtains a court order. The trio are among a growing number of companies that publish regular transparency reports summarizing the number of formal requests from officials about content on their sites.
But there are workarounds, according to former employees, activists and government officials.
A key one is for officials or their allies to complain that a threat, hate speech or celebration of violence violates the company's terms of service, rather than any law. Such content can be taken down within hours or minutes, and without the paper trail that would go with a court order.
"It is commonplace for federal authorities to directly contact Twitter and ask for assistance, rather than going through formal channels," said an activist who has helped get numerous accounts disabled.
In the San Bernardino case, Facebook said it took down Malik's profile, established under an alias, for violating its community standards, which prohibit praise or promotion of "acts of terror". A Facebook spokesman said there was content favorable to the Islamic State militant group on the page but declined to elaborate.
What law enforcement, politicians and some activists would really like is for Internet companies to stop banned content from being shared in the first place. But that would pose a tremendous technological challenge, as well as an enormous policy shift, former executives said.
Some child pornography can be blocked because technology companies have access to a database of digital fingerprints of previously identified images, against which new uploads can be checked automatically. A similar type of system is in place for copyrighted music.
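The mechanics of such a database can be sketched in a few lines of Python. This is an illustrative simplification, not any company's actual system: real deployments such as Microsoft's PhotoDNA rely on perceptual fingerprints that survive resizing and re-encoding, whereas the sketch below uses exact cryptographic hashes, and the stored hash value is a made-up placeholder.

import hashlib

# Hypothetical fingerprint database of previously identified images.
# The entry below is a placeholder, not a real fingerprint.
KNOWN_IMAGE_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def fingerprint(image_bytes: bytes) -> str:
    # Compute a SHA-256 digest of the raw image bytes.
    return hashlib.sha256(image_bytes).hexdigest()

def should_block(image_bytes: bytes) -> bool:
    # Block the upload if its fingerprint matches a known image.
    return fingerprint(image_bytes) in KNOWN_IMAGE_HASHES

upload = b"raw bytes of an uploaded image"
print("blocked" if should_block(upload) else "allowed")

The sketch also shows why this approach breaks down for the videos discussed below: an exact hash changes completely if a clip is trimmed or re-encoded, and without a pre-existing fingerprint there is nothing to match against.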
There is no such database for videos of violent acts, and the same footage that might violate a social network's terms of service if uploaded by an anonymous militant might pass if it were part of a news broadcast.
Nicole Wong, who previously served as the White House's deputy chief technology officer, said tech companies would be reluctant to create a database of videos posted by terrorists, even if it could be kept current enough to be relevant, for fear that repressive governments would demand similar systems to pre-screen any content they do not like.
"Technology companies are rightfully cautious because they are global players, and if they build it for one purpose they don't get to say it can't be used for anything else [like blocking dissidents]," said Wong, a former Twitter and Google legal executive.