
UK to criminalize deepfake porn sharing without consent • TechCrunch


Brace for yet another expansion to the UK’s Online Safety Bill: The Ministry of Justice has announced changes to the law aimed at protecting victims of revenge porn, pornographic deepfakes and other abuses related to the taking and sharing of intimate imagery without consent — a crackdown on a type of abuse that disproportionately affects women and girls.

The government says the latest amendment to the Bill will broaden the scope of current intimate image offences — “so that more perpetrators will face prosecution and potentially time in jail”.

Other abusive behaviors that will become explicitly illegal include “downblousing” (where photographs are taken down a woman’s top without consent) and the installation of equipment, such as hidden cameras, to take or record images of someone without their consent.

The government describes the planned changes as a comprehensive package of measures to modernize laws in this area.

The move is also notable as the first time the UK has criminalized the sharing of deepfakes.

Increasingly accessible and powerful image- and video-generating AIs have led to a rise in deepfake porn generation and abuse, driving concern about harms linked to this type of AI-enabled technology.

Just this week, The Verge reported that the maker of the open source AI text-to-image generator Stable Diffusion had tweaked the software to make it harder for users to generate nude and pornographic imagery — apparently responding to the risk of the generative AI tech being used to create child sexual abuse material.

But that’s just one example. Many more tools for generating pornographic deepfakes remain available.

From revenge porn to deepfakes

While the UK passed a law against revenge porn back in 2015, victims and campaigners have been warning for years that the regime isn’t working and applying pressure for a rethink.

This has led to some targeted changes over the years. For example, the government made ‘upskirting’ illegal via a change to the law that came into force back in 2019. And in March, it said ‘cyberflashing’ would be added as an offence to the incoming online safety legislation.

However, it has now decided further amendments are needed to expand and clarify offences related to intimate images — in order to make it easier for police and prosecutors to pursue cases and to ensure legislation keeps pace with technology.

It’s acting on several recommendations from the Law Commission’s 2021 review of intimate image abuse.

This includes repealing and replacing current legislation with new offences the government believes will lower the bar for successful prosecutions, including a new base offence of sharing an intimate image without consent (so in this case there won’t be a requirement to prove intent to cause distress), along with two more serious offences based on intent to cause humiliation, alarm or distress, or to obtain sexual gratification.

The planned changes will also create two specific offences for threatening to share and installing equipment to enable images to be taken; and criminalize the non-consensual sharing of manufactured intimate images (aka deepfakes).

The government says around 1 in 14 adults in England and Wales have experienced a threat to share intimate images, with more than 28,000 reports of disclosing private sexual images without consent recorded by police between April 2015 and December 2021.

It also points to the rise in abusive deepfake porn — noting one example of a website that virtually strips women naked, which received 38 million hits in the first eight months of 2021.

A growing number of UK lawmakers and campaign groups have been calling for a ban on the use of AI to nudify women since abusive use of the tech emerged — as a BBC report into one such site, called DeepSukebe, detailed last year.

Commenting on the planned changes in a statement, deputy prime minister and justice secretary, Dominic Raab, said:

We must do more to protect women and girls, from people who take or manipulate intimate photos in order to hound or humiliate them.

Our changes will give police and prosecutors the powers they need to bring these cowards to justice and safeguard women and girls from such vile abuse.

Under the government’s plan, the new deepfake porn offences will put a legal duty on platforms and services that fall under incoming online safety legislation to remove this type of material if it’s been shared on their platforms without consent — with the risk of serious penalties, under the Online Safety Bill, if they fail to remove illegal content.

Victims of revenge porn and other intimate imagery abuse have complained for years over the difficulty and disproportionate effort required on their part to track down and report images that have been shared online without their consent.

Ministers argue the proposed changes to UK law will improve protections for victims in this area.

Commenting in another supporting statement, DCMS secretary of state, Michelle Donelan, said:

Through the Online Safety Bill, I am ensuring that tech firms will have to stop illegal content and protect children on their platforms but we will also upgrade criminal law to prevent appalling offences like cyberflashing.

With these latest additions to the Bill, our laws will go even further to shield women and children, who are disproportionately affected, from this horrendous abuse once and for all.

One point to note is that the Online Safety Bill remains on pause while the government works on drafting amendments related to another aspect of the legislation.

The government has denied this delay will derail the bill’s passage through parliament — but there’s no doubt parliamentary time is tight. So it’s unclear when (or even whether) the bill will actually become UK law, given there’s only around two years left before a General Election must be called.

Additionally, parliamentary time must also be found to make the necessary changes to UK law on intimate imagery abuse.

The government has offered no timetable for that component as yet — saying only that it will bring forward this package of changes “as soon as parliamentary time allows”, and adding that it will announce further details “in due course”.




Elon Musk vicariously publishes internal emails from Twitter’s Hunter Biden laptop drama • TechCrunch


Elon Musk reminded his followers on Friday that owning Twitter now means he controls every aspect of the company — including what its employees said behind closed doors before he took over.

Earlier this week, Musk teased the release of what he called “The Twitter Files,” declaring that the public “deserves to know what really happened” behind the scenes during Twitter’s decision to stifle a story about Hunter Biden back in 2020.

On Friday evening, Musk delivered, sort of. Twitter’s new owner shared a thread from author and Substack writer Matt Taibbi who is apparently now in possession of the trove of internal documents, which he opted to painstakingly share one tweet at a time, in narrative form.

Taibbi noted on his Substack that he had to “agree to certain conditions” in order to land the story, though he declined to elaborate about what the conditions were. (We’d suspect that sharing the documents in tweet form to boost the platform’s engagement must have been on the list.)

Taibbi’s decision to reveal a selection of the documents one tweet at a time was apparently not painstaking enough. One screenshot, now deleted, published Jack Dorsey’s private personal email address. Another shared an unredacted personal email belonging to Rep. Ro Khanna (D-CA), who expressed concerns about Twitter’s action at the time. Both incidents appear to run afoul of Twitter’s anti-doxing policy.

The documents, which are mostly internal Twitter emails, depict the chaotic situation that led Twitter to censor a New York Post story about Hunter Biden two years ago. In October 2020, The New York Post published a story that cited materials purportedly obtained from a laptop that the younger Biden left at a repair shop. With a presidential election around the corner and 2016’s hacked DNC emails and other Russian election meddling fresh in mind, Twitter decided to limit the story’s reach.

In conversation with members of Twitter’s comms and policy teams, Twitter’s former Head of Trust and Safety Yoel Roth cited the company’s rules about hacked materials and noted the “severe risks and lessons of 2016” that influenced the decision making.

One member of Twitter’s legal team wrote that it was “reasonable” for Twitter to assume that the documents came from a hack, adding that “caution is warranted.” “We simply need more information,” he wrote.

In his Twitter thread, Taibbi characterized it as unusual that Twitter made such a consequential enforcement decision without consulting the company’s CEO. In reality, then-CEO Jack Dorsey was well known for being hands-off at the company, at times working remotely from a private island in the South Pacific and delegating even high profile decisions to his policy team.

After Twitter acted, the response from outside the company was swift — and included one Democrat, apparently. “… In the heat of a Presidential campaign, restricting dissemination of newspaper articles (even if NY Post is far right) seems like it will invite more backlash than it will do good,” Khanna wrote to a member of Twitter’s policy team.

At the time, Facebook took similar measures. But Twitter was alone in its unprecedented decision to block links to the story, ultimately inciting a firestorm of criticism that the website was putting a thumb on the scale for Democrats. The company, its former CEO and some policy executives have since described the incident as a mistake made out of an over-abundance of caution — a story that checks out in light of the newly published emails.

Musk hyped the release of the emails as a smoking gun, but they mostly tell us what we already knew: that Twitter, fearful of a repeat of 2016, took an unusual moderation step when it probably should have provided context and let the story circulate. Musk has apparently stewed over the issue since at least April when he called the decision to suspend the Post’s account “incredibly inappropriate.”

Files from the laptop would later be verified by other news outlets, but in the story’s early days no one — including social platforms — was able to corroborate that the documents were real and not manipulated. “Most of the data obtained by The Post lacks cryptographic features that would help experts make a reliable determination of authenticity, especially in a case where the original computer and its hard drive are not available for forensic examination,” the Washington Post wrote in its own story verifying the emails. The episode prompted Twitter to change its rules around sharing hacked materials.

Twitter’s former Head of Trust and Safety Yoel Roth shared more insight about the decision in an interview earlier this week, noting that the story set off “alarm bells” signaling that it might be a hack and leak campaign by Russian group APT28, also known as Fancy Bear. “Ultimately for me, it didn’t reach a place where I was comfortable removing this content from Twitter,” Roth said.

Dorsey admitted fault at the time in a roundabout way. “Straight blocking of URLs was wrong, and we updated our policy and enforcement to fix,” Dorsey tweeted. “Our goal is to attempt to add context,” he said, adding that now the company could do that by labeling hacked materials.

Musk has been preoccupied with a handful of specific content moderation decisions since before deciding to buy the company. His frustration that Twitter suspended the conservative satire site The Babylon Bee over a transphobic tweet appears to be the reason he even decided to buy Twitter to begin with.

Now two years after it happened, the Hunter Biden social media controversy is still a sore spot for conservatives, right wing media and Twitter’s new ownership. The platform’s past policy controversies are mostly irrelevant now with Musk at the wheel, but he apparently still has an axe to grind with the Twitter of yore — and we’re seeing that unfold in real(ish) time.



REPORT: Rep. Ro Khanna Was The Only Democrat To Raise Issue With Twitter Nuking Biden Laptop Story


Rep. Ro Khanna of California was the only Democrat concerned that Twitter was violating the First Amendment when the social media platform suppressed the New York Post’s Hunter Biden laptop story, Matt Taibbi reported Friday.

Taibbi, a Rolling Stone contributing editor, released what Elon Musk called “The Twitter Files” on Friday afternoon. In the Twitter thread, Taibbi shared an email showing that Khanna reportedly contacted Vijaya Gadde, former general counsel and head of legal, policy, and trust at Twitter.

“Democratic congressman Ro Khanna reaches out to Gadde to gently suggest she hop on the phone to talk about the ‘backlash re speech,’” Taibbi tweeted. “Khanna was the only Democratic official I could find in the files who expressed concern.”

The documents Taibbi posted show that Gadde apparently replied to Khanna, explaining that Twitter released a clarifying thread of tweets previously that day that explained the policy around posting private information on “hacked materials.” The document further showed that Trump Press Secretary Kayleigh McEnany’s account was not permanently suspended but she would need to delete “the tweet containing material that is in violation” of Twitter’s rules.

Khanna then explained that he is more concerned about First Amendment principles, even as a “Biden partisan,” according to the documents posted by Taibbi.

“If there is a hack of classified information or other information that could expose a serious war crime and the NYT was to publish it, I think the NYT should have that right,” he reportedly told Gadde, according to Taibbi’s tweet.

Carl Szabo from NetChoice, a research company, reportedly let Twitter know a “blood bath” was awaiting them in upcoming Capitol Hill hearings, according to Taibbi’s Twitter thread. Szabo allegedly explained that Democrats agreed that “social media needs to moderate more.”

“Ro Khanna is great,” current CEO of Twitter Elon Musk replied to Taibbi.



The era of constant innovation at Amazon could be over • TechCrunch


There was a time when AWS re:Invent, the yearly customer extravaganza put on by Amazon’s cloud arm, was chock full of announcements. The innovation coming out of the company was so mind-boggling that it was hard to keep up with the onslaught of news.

But this year felt different. If last year was incremental, this year was downright slow when it came to meaningful news.

To give you a sense of our coverage here at TechCrunch: last year, we wrote 28 stories about the event. This year, it’s down to 18, including this one. It’s not that we wanted to write less — we simply found there was less relevant news to write about.

The day-two AI and machine learning keynote was all incremental improvements to existing products. There were so few meaningful announcements that my colleague Frederic Lardinois wrote a post in pictures mocking the lack of news.

It’s gotten to the point, it seems, where the ecosystem has grown so enormous, and there are so many products, that the company has decided to focus on making it easier to work with and between those products (or with external partner products) than creating stuff from scratch.

From a news perspective, that means that there’s really less to write about. Eight new SageMaker capabilities or five new database and analytics capabilities, which I’m sure are important to the folks who needed those features, feel like piling on to an already feature-rich set of products.

It’s not unlike Microsoft Word over the years: It’s a perfectly fine word processor, so the only way to really improve it was to tack on new feature after new feature to make it relevant to an ever wider or more granular audience.
