Facebook Inc.'s artificial intelligence know-how could be applied to some of its most pressing
problems, company executives said, if the social network creates policies to guide use of the technology.
Yann LeCun, Facebook's director of artificial intelligence, or AI, research, said the technology could be
used to help stamp out fake news or detect violence in live videos by filtering the content on the site. But Facebook's
policy and product teams haven't figured out how to introduce AI responsibly.
"What's the trade-off between filtering and censorship? Freedom of expression and decency?" Mr. LeCun told reporters
during a recent round table at the company's Menlo Park, Calif., headquarters. "The technology either exists or can be
developed. But then the question is how does it make sense to deploy it? And this isn't my department."
Facebook is trying to remove some of the stigma and mystery that surround AI in popular culture. On Thursday, it
released six informational videos about the technology. Mr. LeCun said AI is integral to the company's operations, from
learning how users experience their news feed to monitoring the site for terrorist propaganda.
How Facebook could use AI to prevent the spread of false information—a criticism Facebook faced following the
U.S. presidential election—is unclear. Facebook uses AI to detect certain words that signal a story might be
simply "clickbait." Discerning fact from fiction is a much bigger challenge, posing the risk of removing too much
content with an AI filter.
Facebook doesn't have fully formed solutions—with or without AI—to these problems, a spokesman later said.
The company often experiments with a technology before deciding whether it will apply it widely.
After initially dismissing the problem of fake news, Chief Executive Mark Zuckerberg two weeks ago laid out several
steps Facebook is taking to tackle the issue—including building systems to detect fake stories before users flag
them, which would involve AI. "Tens" of employees have been pulled off other projects to focus on fake news, people
familiar with the matter say.
However, AI isn't a panacea. It doesn't catch all terrorist propaganda, for example.
Facebook, which employs hundreds of people world-wide to monitor content on the site, is now in the "research stage"
of using AI to automatically detect depictions of violence and other problems in live videos, said Joaquin Candela,
Facebook's director of applied machine learning.
Policing live video, an area of intense investment for Facebook, poses two challenges, he added. First, it requires a
very fast computer vision algorithm, which Mr. Candela said was within reach of his team.
The second challenge is developing a clear set of practices, such as for determining what should or shouldn't be
removed. In general, the task of figuring out whether and how to introduce the technology is handled by Facebook product
teams, Mr. Candela said.
Facebook said a lot of its network wouldn't work without AI, such as its news-feed ranking algorithm, which creates
individualized streams for each of the 1.79 billion people who access Facebook at least once a month. Every day, 2.5
billion posts are translated into other languages on Facebook.
Mr. LeCun said he disagreed with the portrayal of AI technology as a looming and devious threat. "This isn't magic.
This isn't Terminator either," Mr. LeCun said. "This is real technology that could be useful."
Mr. LeCun said the ethical questions his team considers deserve more attention, such as how AI can be properly tested
without causing harm and how it can be designed to avoid systematic bias. "Is humanoid AI going to take over the world
and kill us all? I'm not personally worried about that," he said.
Write to Deepa Seetharaman at Deepa.Seetharaman@wsj.com
(END) Dow Jones Newswires
Copyright (c) 2016 Dow Jones & Company, Inc.