"Fake Doctors on TikTok: How AI Deepfakes Are Spreading Health Misinformation"
Social media platforms, particularly TikTok, are grappling with a growing wave of health misinformation spread by AI-generated deepfake videos featuring real doctors. The phenomenon was uncovered by the fact-checking organization Full Fact, which found hundreds of such videos impersonating respected medical professionals and promoting unproven supplements.
The creation of these deepfakes involves manipulating real footage of doctors from the internet, reworking their audio to encourage viewers to buy products from a US-based supplements firm called Wellness Nest. The videos claim that certain supplements can alleviate symptoms associated with menopause, despite lacking scientific evidence to support such claims.
The impact on the affected individuals has been profound, with some expressing irritation and frustration at being used as props for health misinformation. Prof David Taylor-Robinson, a specialist in children's health, was shocked to find that his image had been manipulated to endorse Wellness Nest products, despite his having no connection to the firm.
TikTok took down the videos six weeks after Taylor-Robinson complained, citing its guidelines against harmful misinformation and behaviors such as impersonation. However, the incident highlights a broader failure by social media giants to adequately police AI-generated content.
The problem is not unique to TikTok: similar deepfakes have appeared on other platforms, including X, Facebook, and YouTube. Wellness Nest has denied any affiliation with the AI-generated content, saying it cannot control or monitor affiliates around the world.
As concerns over health misinformation grow, politicians are calling for action. Helen Morgan, Liberal Democrat health spokesperson, is advocating for clinically approved tools to detect and combat AI deepfakes posing as medical professionals.
The incident serves as a stark reminder of the need for social media companies to invest in robust content detection technologies and take swift action against harmful misinformation. The spread of AI-generated deepfake videos featuring real doctors poses a significant threat to public health, and it is essential that authorities and platforms work together to address this issue.