By Yoree Koh and Reed Albergotti
Twitter Inc. this week appeared to be taking its most aggressive steps ever to scrub sensitive content from its
site as it scrambled to remove grisly images showing the beheading of a U.S. journalist.
In reality, the company is maintaining the same reactive approach that has allowed violent and pornographic content
to proliferate on the social-media service.
Like its social-media brethren, Twitter tries to balance a dedication to free expression, an aversion to being held
legally responsible for the actions of its users, and the reality that people will use such platforms to share highly
offensive material, a particular concern as it seeks to expand its advertising business.
Twitter's broad free-speech stance came into focus Tuesday, when the company said immediate family members could
email a special address to request the removal of images and videos of deceased individuals.
Twitter said it made the decision "in order to respect the wishes of loved ones" after doctored photos related to
Robin Williams's death prompted his daughter, Zelda, to leave the service. The move appeared to hint that Twitter, in a special
case, would actively try to stamp out sensitive content, rather than wait for users to flag each offending tweet as it
has been doing.
Tuesday afternoon, at the request of James Foley's family, Twitter began taking down gruesome images and video
depicting the Islamic State's beheading of the American journalist. Twitter CEO Dick Costolo tweeted to his 1.2 million
followers, "We have been and are actively suspending accounts as we discover them related to this graphic imagery. Thank
More than 24 hours later many of the images remain in circulation, angering some users. At first glance, it might
appear that Twitter's staff is playing a game of cat and mouse as it tries to delete controversial content amid the
service's half-a-billion tweets a day.
But Twitter is actually sticking to its basic principles. Even after a family request, it's the company's policy
not to hunt for content. Instead, it relies on users to flag a tweet as inappropriate, at which point, if the tweet
violates its rules, Twitter will disable the unique Web address associated with the image. But users can easily upload
the image using a different account and a new address, as has happened with images of Mr. Foley's execution. Other
images remain if they're not flagged.
It is unclear how many users flagged the grisly images of Mr. Foley's murder, at what pace they did so, and how
quickly the service was able to respond. Twitter declined to say.
Had Twitter not announced the family policy hours earlier, it might not have taken any action. That's because
Twitter doesn't ban violent content such as beheadings on its own, just violent images it deems to be direct threats to users.
For example, a video showing St. Louis police officers fatally shooting a 25-year-old man outside a convenience
store on Tuesday is still up on Twitter. Twitter declined to say whether it has received requests from the family of
the man, Kajieme Powell, to remove the video.
Twitter has removed images of beheadings in the past, but in instances where the photos of severed heads were
directed at users as threats. In a statement, Twitter said: "We evaluate and refine our policies based on input from
users, while working with outside organizations to ensure that we have industry best practices in place."
As the self-described "free speech wing of the free speech party," Twitter has been home to the profound and
profane. The company allows pornographic images in tweets if they aren't used to harass other users. Last year, Twitter
introduced a measure that marks some images as "sensitive" and places them behind a warning.
But as Twitter's user base has swelled to 271 million monthly active users, the company may be reaching a point
where it can no longer remain the Wild West of social media without irking too many members. Responding to the viral
nature of the images of Mr. Foley's death, the hashtag #ISISMediaBlackout, which urged users to stop sharing the images,
trended on Twitter earlier this week.
Twitter's policy concerning violence is more liberal than that of Google Inc.'s YouTube, which itself has been trying to
suspend the accounts of those posting video of Mr. Foley's beheading. YouTube prohibits "gratuitous violence," hate
speech and incitement to commit violent acts, but it allows violent content if it isn't designed simply to shock or be
disrespectful. Like Twitter, YouTube takes action only when users flag the content.
YouTube also bans designated foreign terrorist organizations from having registered accounts. A Twitter spokesman
said a group's status as a terrorist group is "one of several factors" it considers when deciding whether to suspend an account.
Facebook has had its own battles with violent content. In October, Facebook faced international criticism for
allowing its users to share several videos depicting beheadings. Facebook had initially allowed the videos because they
were being used to condemn violence, but it removed them following the public outcry.
After the controversy over the beheadings, Facebook amended its policies, adding more requirements for users who
want to share graphic content. The photos or videos should include safeguards that prevent minors from viewing them,
warnings about the nature of the content and edits to remove excessively graphic content.
Facebook's efforts to block images and videos of Mr. Foley's execution have largely been successful, according to a
company spokeswoman. Like Twitter, Facebook only blocks content reported by users that violates its rules, but it also
uses technology to identify offending videos or photos across the site. Facebook also removes content tied to
individuals or groups deemed to promote terrorism.
Twitter declined to comment on how it decides which accounts to suspend and which content to remove. Every report
by a user is reviewed by a member of Twitter's Trust and Safety team. Depending on the case, Twitter's lawyers may get
involved. Twitter tells the user who flagged the content what it ultimately has decided to do but doesn't disclose the
reasons for its decision.
"In this case, I don't think Twitter has done enough to be consistent and transparent," said Jillian York, the
director for international freedom of expression at the Electronic Frontier Foundation. "They need to be more
transparent about what goes against their terms. It's unclear what they're taking that content down under."
Still, others are less critical of Twitter's overall approach to managing its content. "Active policing of tweets is
tricky, and I think Twitter has made the right decision in saying that it will not do that," said Jonathan Zittrain, a
professor at Harvard Law School. But he recognizes the limitations: "The fact is that unless Twitter's membership were
to be circumscribed, shutting down an abusive user account is a necessarily limited remedy: An aggressor can sign up
again and take up where he or she left off."
Mr. Zittrain said "the best approach [for Twitter] may ultimately be to sift garden-variety rudeness from genuinely
But to do this, Twitter would need more tools. It uses PhotoDNA, a Microsoft image-tagging technology, to limit
the spread of child-sexual-exploitation images, as does Facebook. Beyond that, it doesn't actively monitor user content.
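The broad idea behind such image-matching tools can be sketched simply: reduce each image already judged to violate
policy to a compact fingerprint, then check every new upload against the stored fingerprints. What follows is a minimal
illustration of that idea, not PhotoDNA itself; PhotoDNA's algorithm is proprietary and uses a perceptual hash that
survives resizing and re-encoding, whereas this sketch substitutes an exact SHA-256 digest, which any alteration to the
file defeats. All names in the sketch are hypothetical.

import hashlib

# Digests of images already judged to violate policy (a hypothetical
# in-memory store; a real system would persist these in a database).
KNOWN_BAD_DIGESTS: set[str] = set()

def digest(image_bytes: bytes) -> str:
    """Fingerprint the exact bytes of an image with SHA-256."""
    return hashlib.sha256(image_bytes).hexdigest()

def register_bad_image(image_bytes: bytes) -> None:
    """Record a flagged image so identical future uploads can be matched."""
    KNOWN_BAD_DIGESTS.add(digest(image_bytes))

def should_block_upload(image_bytes: bytes) -> bool:
    """Check an incoming upload against the known-bad set."""
    return digest(image_bytes) in KNOWN_BAD_DIGESTS

if __name__ == "__main__":
    flagged = b"bytes of a flagged image"          # stand-in for real image data
    register_bad_image(flagged)
    print(should_block_upload(flagged))            # True: an exact copy is caught
    print(should_block_upload(flagged + b"\x00"))  # False: any change defeats an exact hash

The second check is the crux: an exact hash catches only byte-identical copies, which is why a fingerprint robust to
re-encoding matters, and why disabling a single Web address, which changes with every re-upload, is a weaker remedy still.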
It could eventually get some help from Madbits, a New York startup it acquired last month. Madbits's image-search
technology is designed to detect context in photos, and Twitter may be able to use the Madbits technology to track
offending images across the service.