
Technology in Medicine: The Legal Unraveling of Deepfakes in Healthcare

  • Writer: Chiransh Gulati
  • May 12
  • 10 min read

Updated: May 13


Imagine waking up to a video of a renowned senior diabetologist - white coat, familiar voice - confidently assuring viewers that an oral drug can cure diabetes in under 48 hours. For millions burdened by the relentless routine of glucose monitoring and medication, the message felt like a breakthrough: hope at last. But that hope was short-lived. The video was not merely misleading; it was fabricated. It was a deepfake: a sophisticated AI-generated forgery created to endorse an unapproved pharmaceutical product while blatantly infringing the physician's personality rights and exploiting public trust.


Deepfakes in the healthcare industry refer to highly realistic synthetic videos, images, or audio disseminated through social media.1 They often feature the voices of real doctors, or AI-generated figures of familiar doctors in white coats. Deepfakes have emerged as a major threat because of the industry's heavy reliance on phone calls and verbal communication. This vulnerability not only endangers the confidentiality of the doctor-patient relationship but also erodes patients' trust in seeking treatment.


Deepfakes have become a significant crisis in the healthcare sector in India, with notable videos featuring prominent figures such as Dr. Trehan and Dr. Shetty. India has recorded the second highest number of deepfake attacks globally, after the U.S., with 7.7% of these incidents occurring in the healthcare industry. A recent survey found that 24% of the global healthcare industry has been affected by deepfake fraud, with losses of up to $250,000.2


How Deepfakes Cause Harm In Healthcare And The Legal Provisions Addressing The Threat:


Identity Theft:

Deepfake technology, powered by artificial intelligence, poses a growing threat to the medical sector by enabling identity theft on an unprecedented scale. These AI-generated fabrications replicate the voice or credentials of healthcare providers and patients to gain unauthorized access to electronic medical records, impersonate professionals, or deliver false or misleading diagnoses.3 A stark example comes from a 2020 study,4 in which researchers demonstrated how CT scans could be tampered with using deepfake technology to insert or remove evidence of medical illness - a serious red flag for the radiology branch and for patient safety.


From a legal standpoint, Section 66C of the Information Technology Act, 2000 5 directly penalizes such acts of digital impersonation, prescribing imprisonment of up to three years and a fine of up to Rs. 1 lakh for fraudulently using another person’s unique identification features. Furthermore, Section 338 of the Bharatiya Nyaya Sanhita (BNS) 6 may also apply, covering the offence of forgery with intent to harm a person’s reputation - a pertinent provision where deepfakes implicate or discredit healthcare professionals.


Moreover, such misuse of personal likeness and sensitive medical information also violates the fundamental right to privacy enshrined under Article 21 of the Constitution of India 7 and affirmed in the landmark case of Justice K.S. Puttaswamy v. Union of India 8. Unauthorized digital manipulation of a person’s identity is therefore a serious breach of patients' fundamental rights.


Misinformation Leading To Medical Scams:

From false drug endorsements to fake consultation fraud to viral misinformation campaigns, deepfakes have become a deadly weapon against the health sector, as witnessed particularly during COVID-19. Deepfakes impersonating reputable doctors promoted unapproved or counterfeit medicines through social media, leaving patients to suffer the side effects of the advertised fake medicines. Additionally, deepfakes have been used to create misleading public service announcements that appear to come from government health officials, pushing agendas that promote the mass purchase of fraudulent drugs or anti-vaccine campaigns. These deceptive messages were widely circulated on platforms like WhatsApp, Instagram, and Telegram during the pandemic, discouraging people from getting vaccinated. 9


A recent case involved Dr. Naresh Trehan, 10 a well-known cardiac surgeon, who was the subject of a deepfake video circulating on social media in which he appeared to give medical advice and recommend natural remedies for urological problems. The video was created using voice-over and image-editing techniques and prominently featured his hospital chain, “Medanta”. It was viewed over 1.1 million times. 11 The matter was brought before the Delhi High Court, which, in Global Health Limited & Anr v. John Doe, passed a John Doe order to protect the doctor's reputation and to stop the misleading content from circulating across social media platforms.


The circulation of misleading deepfake videos featuring healthcare professionals is not just a technological concern; it is a serious legal and ethical issue. When such videos falsely depict doctors endorsing unapproved treatments or cures, they directly harm their professional reputation. This amounts to defamation, punishable under Section 356 of the Bharatiya Nyaya Sanhita (BNS) 12, which provides for up to two years of imprisonment, a fine, or both.


To address the spread of such harmful misinformation online, Rule 3(1)(b)(v) of the Intermediary Guidelines and Digital Media Ethics Code Rules, 2021 13 places a duty on platforms and intermediaries to exercise due diligence. They are required to make reasonable efforts to prevent users from uploading or sharing content that is knowingly false or misleading. Additionally, Section 79 of the Information Technology Act, 2000 14 provides conditional safe harbor to intermediaries, which they can retain only if they act swiftly to remove such content once notified by the government or its authorized agency.


When deepfake videos are created by editing genuine recordings of medical experts without their permission, it also violates their moral rights under Section 57 of the Copyright Act, 1957 15—specifically their right to be identified correctly and to prevent distortion of their work or likeness. Furthermore, if these videos falsely use the name, logo, or branding of pharmaceutical companies to promote or sell dubious drugs, it can constitute trademark infringement under Section 29 of the Trade Marks Act, 1999.16


Personality Rights Of The Medical Experts:

Personality rights are a new type of intellectual property (IP) that have emerged alongside advancing technology. This form of IP protects an individual's reputation, name, image, popularity, and other aspects of their identity from potential harm caused by artificial intelligence. In India, the significance of personality rights has gained attention through legal cases filed by celebrities such as Amitabh Bachchan 17, Anil Kapoor 18 and Karan Johar 19. Additionally, the concept of personality rights has been highlighted in the healthcare sector, particularly with cases involving Dr. Devi Prasad Shetty and Dr. Naresh Trehan, where deepfake videos featuring these renowned doctors had significant repercussions in the industry.


In the case of Dr. Devi Prasad Shetty v. Medicine Me & Ors. 20, the Delhi High Court addressed the issue of personality rights in the healthcare sector in 2024. Dr. Shetty, the founder of Narayana Health and a well-known figure in healthcare, was the subject of a Facebook page that was misusing his voice, name, and likeness by posting fake AI-generated videos to promote a pain relief oil for joints. This tactic was employed by the defendants as a marketing strategy. The High Court ruled that Dr. Shetty qualified as a “personality” under the personality rights test established in the Arijit Singh case. Consequently, an interim injunction was granted, restraining the defendants from misusing his name, voice (including through voice-cloning technology), and image without his consent.


The criteria set out for the personality rights test in Arijit Singh v. Codible Ventures & Ors. 21 were:

The plaintiff must be established as a celebrity, use by the defendant must be for commercial gain, and the plaintiff must be identifiable from the defendant's misuse.


The most recent incident that caused significant concern was a deepfake video of Dr. Naresh Trehan, the founder of Medanta Hospital. In the video, he appeared to promote natural remedies for urology-related illnesses on social media platforms. The video went viral, garnering over a million views. This situation was critical because the impersonation of a respected doctor easily influenced many people. Recognizing the urgency of the issue, the Delhi High Court issued a John Doe order to have the viral deepfake videos removed. The court also instructed intermediaries under Section 79 of the IT Act 22 to remove any infringing content on their platforms.


The personality rights of influential individuals in public healthcare deserve particular attention, given how significantly their misuse can damage a professional's reputation. The healthcare sector must recognize the risks posed by emerging technologies and safeguard a healthcare worker's popularity, name, image, and reputation, while also respecting their right to privacy.


Infringement Of Intellectual Property Rights Due To Deepfakes:

Deepfakes created using videos of medical professionals without consent can amount to copyright infringement under Section 13(1) of the Copyright Act, 1957 23, which protects original cinematographic works and sound recordings. When someone edits, alters, or manipulates a video featuring a medical expert—say, to falsely promote a drug—it may be considered the creation of a derivative work. Without authorization, this violates the exclusive rights held by the original copyright owner.


Moreover, Section 52 of the Act 24 outlines the exceptions under the doctrine of “fair dealing.” While it allows certain uses of copyrighted material (such as for criticism, reporting, or research), the malicious creation of deepfakes to deceive patients or the public does not qualify under this exception. Deepfakes designed to spread misinformation or promote unapproved medical products clearly fall outside the bounds of fair use.

This issue becomes even more pressing in healthcare, where such manipulated content can mislead vulnerable patients, damage the credibility of medical professionals, and compromise public trust in science. This intersects with Section 79 of the IT Act, which grants conditional immunity to intermediaries. As reaffirmed in MySpace Inc. v. Super Cassettes Industries Ltd., 25 platforms must remove infringing content upon notice to retain protection.


Another issue arising under intellectual property is trademark infringement, as exemplified in the case of Dr. Naresh Trehan, the founder of "Medanta," which is a registered trademark. "Medanta" is a well-known mark in the healthcare industry and is recognized across the country for its medical services. The trademark was featured prominently in the deepfake video to make it appear more authentic, which contributed to its broader reach. As set out in Section 29 of the Trade Marks Act, 1999 26, infringement of a registered trademark occurs when the mark is used without consent, taking unfair advantage of the trademark's reputation and causing confusion among the public. This scenario constitutes a legitimate case of trademark infringement that should be addressed under the statute.


Additionally, in the healthcare sector, deepfake videos and images are being created to promote drugs. This practice not only harms patients but also infringes on intellectual property by misusing a reputable trademark to lend credibility to these videos. Therefore, to address these challenges more effectively, amendments to the law are necessary. While India has taken a distinctive approach to handling cases related to artificial intelligence, it is clear that laws must evolve to keep pace with emerging technologies.


Solution To The Problem:

While existing laws regulate the harm caused by deepfakes, it is essential for regulatory bodies to take proactive measures to prevent deepfakes from causing any harm in the first place. The Indian legal system addresses deepfakes through various statutes, including information technology laws, criminal laws, and copyright laws. However, as technology continues to advance rapidly, there is an urgent need to develop a dedicated set of laws for artificial intelligence, given that deepfakes are a byproduct of this technology. Introducing separate legislation focused on AI would provide a more targeted legal framework to address emerging challenges such as deepfakes.27


Further, the Indian Medical Association (IMA) has informed the medical community about guidelines concerning deepfake technology.28 The association has outlined recommendations for doctors on how to identify deepfakes and combat such attacks. Furthermore, it is the legislature's responsibility to establish laws to prevent the misuse of deepfake technology and curb the spread of misinformation it can cause.


The Digital Personal Data Protection Act, 2023 29 addresses issues related to an individual's consent, particularly regarding the use of their likeness to create AI-generated videos, images, or audio. Additionally, the Act includes a redressal mechanism that allows individuals to seek compensation in cases of identity misuse and to halt the spread of harmful content. However, while the Act addresses the consequences of technologies like deepfakes after harm has occurred, it does not provide measures to prevent such misuse beforehand. Therefore, the legislature needs to take note of emerging technologies before they become widespread and implement regulations to mitigate potential harm.


Conclusion:

The healthcare sector is built on the foundation of patients' trust in their doctors; that foundation must be upheld, and technology must not be allowed to dismantle it. As synthetic media grows, it blurs the line between reality and fabrication and places public health at risk, as prominently seen in Dr. Trehan's case and in the marketing of pharmaceutical drugs through deepfakes.

Thus, as discussed above, the Indian legal system needs a nuanced and forward-looking regulatory framework that addresses issues like misinformation, personality rights, and the other major rights infringed by AI tools like deepfakes. By enacting dedicated legislation, fortifying existing laws, and mandating greater accountability and transparency, India can protect the dignity of its healthcare professionals and the welfare of its citizens.

Inaction is no longer an option; the cost of delay could be measured in lost trust and lives.


References:

  1. Pindrop. “Understanding the Threat of Deepfakes in Healthcare,” April 25, 2025.

  2. Regula. “The Impact of Deepfake Fraud: Risks, Solutions, and Global Trends,” March 25, 2025.

  3. Bates, Andree. “How to Spot and Prevent Deepfakes Spreading Medical Misinformation.” Eularis (blog), May 25, 2024.

  4. Pindrop. “Understanding the Threat of Deepfakes in Healthcare,” April 25, 2025.

  5. The Information Technology Act, 2000.

  6. The Bharatiya Nyaya Sanhita, 2023.

  7. The Constitution of India, 1950.

  8. Justice K.S. Puttaswamy v. Union of India, AIR 2017 SC (CIV) 2714.

  9. Desai, Vivek. “Why Medical Deepfakes Are the New Public Health Crisis.” Healthcare Executive, February 3, 2025.

  10. Global Health Limited & Anr v. John Doe & Ors., CS(COMM) 6/2025.

  11. WTR. “Delhi High Court Takes Strict Approach to Personality Rights Violation in Healthcare Industry Amid Spike in Use of AI and Deepfakes.”

  12. The Bharatiya Nyaya Sanhita, 2023.

  13. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.

  14. The Information Technology Act, 2000.

  15. The Copyright Act, 1957.

  16. The Trade Marks Act, 1999.

  17. Amitabh Bachchan v. Rajat Nagi, CS(COMM) 819/2022.

  18. Anil Kapoor v. Simply Life India & Ors., CS(COMM) 652/2023.

  19. Karan Johar v. India Pride Advisory & Ors., IA (L) No. 17865/2024.

  20. Dr. Devi Prasad Shetty v. Medicine Me & Ors., CS(COMM) 1053/2024.

  21. Arijit Singh v. Codible Ventures & Ors., 2024 SCC OnLine Bom 2445.

  22. The Information Technology Act, 2000.

  23. The Copyright Act, 1957.

  24. The Copyright Act, 1957.

  25. MySpace Inc. v. Super Cassettes Industries Ltd., 236 (2017) DLT 478.

  26. The Trade Marks Act, 1999.

  27. Sindhu A., Interventions on the Issue of Deepfakes in Copyright, 2021.

  28. Enira Consulting. “Deepfakes: A Double-Edged Sword in India’s Healthcare Landscape” (blog), January 15, 2024.

  29. The Digital Personal Data Protection Act, 2023.





 
 
 
