
Facebook London hiring spree focuses on tackling online harms

By Lucy Cinder



Social media behemoth Facebook is putting London at the centre of its evolving strategy to tackle online harms, with the creation of 500 jobs in the capital this year.

A significant number of jobs are focused on building artificial intelligence (AI) and machine learning tools to clamp down on harmful and malicious content and fake accounts on its social platforms.

Facebook’s Europe, Middle East and Africa (EMEA) vice-president, Nicola Mendelsohn, confirmed the hires at a London Tech Week event on 12 June. She said online safety was a top priority for Facebook, pointing to a substantial increase in its investment in this area in the past couple of years.

“These hundreds of new jobs demonstrate not only our commitment to the UK but our determination to do everything we can to keep Facebook safe and secure,” said Mendelsohn.

“Many of these roles will accelerate our artificial intelligence work in London as we continue developing technology to proactively detect and remove malicious content,” she added.

The recruitment spree will mean Facebook will employ more than 3,000 people in London by the end of 2019, with 1,800 of them in engineering roles in what is already its largest engineering centre outside of the US.

The specific focus of the organisation’s recruitment plans comes at a tough time for Facebook, which faces a continuing storm of controversy over its use of data and its attitude to privacy and online harms, both in the UK and elsewhere.

At the start of the year a PR agency-commissioned survey suggested that in light of issues surrounding privacy, data misuse and cyberbullying, 83% of British people now believe Facebook should be subject to government regulation.

The UK government has already moved to establish the UK as a world leader in fighting online harms. In April 2019 it published its Online Harms whitepaper, setting out the world’s first framework intended to hold firms such as Facebook accountable for the safety of their users.

The whitepaper proposes that technology firms be required to take steps to protect users from threats including cyberbullying, disinformation and fake news, as well as outright illegal activity such as child sexual exploitation and terrorism.

Speaking at the time, prime minister Theresa May said internet companies such as Facebook had not done enough to protect their users.

“That is not good enough, and it is time to do things differently. We have listened to campaigners and parents, and are putting a legal duty of care on internet companies to keep people safe,” she said. “Online companies must start taking responsibility for their platforms, and help restore public trust in this technology.” 

Meanwhile, Facebook said it would not remove a so-called deepfake video, doctored using AI software to manipulate existing footage and pictures of its CEO, Mark Zuckerberg, making it appear he was saying things he had not.

The video was first uploaded to Facebook’s photo-sharing platform, Instagram, earlier in June, and has since been viewed widely and shared on Facebook itself.

In the short piece of footage, which was actually created by a British artist to highlight how people can be manipulated by social media, Zuckerberg appeared to gloat about a conspiratorial and shadowy organisation being behind Facebook’s success.

Source: Computer Weekly
