A Nigerian regulator has fined Meta $220 million, saying the privacy policy of the company’s WhatsApp messaging service violated the country’s data and privacy laws.
The Federal Competition and Consumer Protection Commission (FCCPC) said in a Friday (July 19) press release that the company violated the Federal Competition and Consumer Protection Act (FCCPA) 2018, the Nigeria Data Protection Regulation 2019 (NDPR) and other relevant laws.
The FCCPC said in the release that the violations include “abusive and invasive practices against data subjects/consumers in Nigeria, such as appropriating personal data or information without consent, discriminatory practices against Nigerian data subjects/consumers or disparate treatment of consumers/data subjects compared with other jurisdictions with similar regulatory frameworks, abuse of dominant market position by forcing unscrupulous, exploitative and non-compliant privacy policies which appropriated consumer personal information without the option or opportunity to self-determine or otherwise withhold or provide consent to the gathering, use and/or sharing of such personal data.”
Responding to the announcement of the fine, a WhatsApp spokesperson provided Bloomberg with a statement saying: “In 2021 we went to users globally to explain how talking to businesses among other things would work and while there was a lot of confusion then, it’s actually proven quite popular. We disagree with the decision today as well as the fine and we are appealing the decision.”
The announcement came a day after Meta told Reuters that it would suspend its generative artificial intelligence (AI) tools in Brazil after one of the country’s regulators objected to part of the company’s privacy policy.
Earlier in July, Brazil’s National Data Protection Authority suspended the validity of Meta’s new privacy policy, saying the company would have to exclude the section about the processing of personal data for generative AI training.
About a month before that, on June 10, Meta said in an update to an earlier blog post that it had paused the planned launch of its AI assistant, Meta AI, in Europe. The pause came after the Irish Data Protection Commission, acting on behalf of the European data protection authorities, asked Meta to delay training its large language models on content shared by adults on its Facebook and Instagram platforms.