British lawmakers grilled Facebook on Thursday over how it handles online safety as European countries move to rein in the power of social media companies, with the tech giant’s head of safety saying the company supports regulation and has no business interest in providing people with an “unsafe experience.”
Representatives from Google, Twitter and TikTok also answered questions from a parliamentary committee scrutinizing the British government’s draft legislation to crack down on harmful online content. The hearing comes days after the companies testified before American lawmakers and offered little firm commitment to US legislation bolstering protection of children from online harms ranging from eating disorders and sexually explicit content to material promoting addictive drugs.
Governments on both sides of the Atlantic want tougher rules aimed at protecting social media users, especially younger ones, but the United Kingdom’s efforts are much further along. UK lawmakers are questioning researchers, journalists, tech executives and other experts for a report to the government on how to improve the final version of the online safety bill. The European Union also is working on digital rules.
Antigone Davis, Facebook’s head of global safety who addressed the British lawmakers via video conference, defended the company’s handling of internal research on how its Instagram photo-sharing platform can harm teens, including encouraging eating disorders or even suicide.
“Where does the buck stop?” asked Damian Collins, the lawmaker who chairs the committee.
“It’s a company filled with experts, and we all are working together to make these decisions,” Davis said. She added that “we have no business interest, no business interest at all, in providing people with a negative or unsafe experience.”
Davis said Facebook is largely supportive of the UK’s safety legislation and is interested in regulation that gives publicly elected officials the ability to hold the company accountable.
She said she disagrees with critics who say Facebook amplifies hate, largely blaming societal issues and arguing that the company uses artificial intelligence to remove content that is divisive or polarizing.
“Did you say that Facebook doesn’t amplify hate?” Collins asked.
“Correct,” Davis said, adding, “I cannot say that we’ve never recommended something that you might consider hate. What I can say is that we have AI that’s designed to identify hate speech.”
She declined to say how much dangerous content those AI systems are able to detect.
Facebook whistleblower Frances Haugen told the UK committee this week that the company’s systems make online hate worse and that it has little incentive to fix the problem. She said time is running out to regulate social media companies that use artificial intelligence systems to determine what content people see.
Haugen was a Facebook data scientist who copied internal research documents and turned them over to the US Securities and Exchange Commission. They also were provided to a group of media outlets, including The Associated Press, which reported numerous stories about how Facebook prioritized profits over safety and hid its own research from investors and the public.
In one of several pointed exchanges Thursday before the parliamentary committee, Scottish lawmaker John Nicolson told Davis that “all this rather suggests that Facebook is an abuse facilitator that only reacts when you’re under threat, either from terrible publicity or from companies, like Apple, who threaten you financially.”
Lawmakers pressed Facebook to provide its data to independent researchers who can look at how its products could be harmful. Facebook has said it has privacy concerns about how such data would be shared.
“It’s not for Facebook to set parameters around the research,” said Collins, the committee chairman.
The UK’s online safety bill calls for a regulator to ensure tech companies comply with rules requiring them to remove dangerous or harmful content or face penalties worth up to 10% of annual global revenue.
British lawmakers are still grappling with thorny issues such as ensuring privacy and free speech and defining legal but harmful content, including online bullying and advocacy of self-harm. They’re also trying to get a handle on misinformation that flourishes on social media.
Representatives from Google and its YouTube video service who spoke to UK lawmakers Thursday urged changes to what they described as an overly broad definition of online harms. They also appeared virtually, and the tenor of lawmakers’ questions wasn’t as harsh as what Facebook faced.