Under U.S. law, social media companies have generally been understood to enjoy the same broad leeway as traditional media in deciding whose views to air — and whose they’d rather not. Initially more laissez-faire, Facebook, Twitter, YouTube and others have taken on increasing responsibility over the years for monitoring their networks for misinformation, harassment, hate speech and propaganda campaigns, a function they call “content moderation.” They’ve done so largely in response to pressure from the public, the media and their own workers — not the U.S. government.