Policy Can Force AI Platforms to Nix Deepfakes Before Their Creation


Cyber fraud has evolved dramatically in recent years, shifting from simple email deceptions to sophisticated GenAI technologies that create deepfakes able to recreate the face, body movements, voice, and accent of a real person. Lawsuits are being filed worldwide to block or take down these deepfakes under copyright infringement laws. But using the courts to prevent this misuse is not enough; the state needs to enforce policy at the level of the AI platforms themselves.

Recently, a multinational company’s CFO was impersonated using deepfake technology on a video call. The AI-generated facade was used to authorise the transfer of nearly $25 million into multiple local bank accounts. The employee was suspicious at first but was convinced after a video chat with what appeared to be her CFO and several co-workers, showcasing the dangerous persuasive power of deepfakes in financial crime.

Until recently, such innovations, from synthesised music to edited images and voice assistants, still bore a human touch even though they were artificial, a critical aspect when ownership and responsibility have to be assigned for the synthetic output. Moreover, every element of enhancement was tedious, expensive, and dependent on human involvement.

Generative AI (GenAI), however, dramatically shifts this landscape, bypassing the human touch and creating works almost entirely by software: the deepfake. These convincing falsifications are produced by advanced AI tools, many freely accessible on the internet, built on machine learning and neural networks.

Deepfakes are generally fought on grounds of copyright and intellectual property infringement, or prosecuted as financial crimes when they involve a fraudulent transaction carried out through digital impersonation.

Since such responses are generally post facto, the reputational, financial, or societal harm has already been done; they neither prevent nor reduce the number of deepfakes. Nor do they deter the AI platforms used as tools to create them. Most remedies are limited to taking the deepfakes down from social media.

Identifying the criminal is difficult because most AI tools do not authenticate their users. In a way, it is like gun control. These tools are available to anyone over the internet, and most are free because the platforms want users’ data rather than their money. It is as if anyone and everyone had access to guns, with no need even to buy the gun or the ammunition. The only way to control deepfakes is to control access to the AI engine and to the data, just as one would control the guns and the ammunition.

One way to deter deepfakes is to make the platform a co-accused in every case of a proven deepfake. A second is for public policy to make it compulsory for AI platforms to authenticate their users and keep a record of them. Their generated output has to carry watermarks, and the digital and real identities of the users who created it need to be available to law enforcement. This is already required of social media platforms accessed in India, which must also appoint a nodal compliance officer. A similar obligation should be placed on all AI platforms.
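
What such watermarking and record-keeping could look like in practice can be sketched in code. The snippet below is a minimal illustration, not any platform’s actual implementation; the signing key, field names, and function are assumptions made purely for the example. The idea is to bind each generated file to an authenticated user through a signed provenance record that can later be produced for law enforcement.

```python
# Illustrative sketch only (not a real platform API): a GenAI service binds every
# generated file to the authenticated user who requested it, via a signed record.
import hashlib, hmac, json, time

PLATFORM_SIGNING_KEY = b"replace-with-platform-secret"  # hypothetical key held by the platform

def provenance_record(media_bytes: bytes, user_id: str) -> dict:
    """Build and sign a record linking generated media to a verified user."""
    record = {
        "media_sha256": hashlib.sha256(media_bytes).hexdigest(),  # fingerprint of the output
        "user_id": user_id,                                       # identity verified at sign-up
        "generated_at": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(PLATFORM_SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

# The platform would store this record and embed or attach it to the generated file.
print(provenance_record(b"<generated video bytes>", user_id="user-1234"))
```

The design choice to note is that the record is created at generation time, before the output leaves the platform, which is exactly where the policy obligation would bite.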

It is more important to control the data (the ammunition) through ‘Right to Data’ laws that ensure individuals own their data. At the most basic level, a video or an image is data; an AI engine has to scrape several thousand images to create a near-exact replica for a deepfake. If ownership of that data is clear, scraping it off the public internet will be discouraged; this may control deepfakes at the creation stage itself.

Under India’s Digital Personal Data Protection Act, however, the right to data ownership is not clearly defined. Currently, platforms effectively own their users’ data, and AI engines are free to scrape and use that data to build their models.

Until ownership of data is vested in the individuals who create it, legal rights of privacy or data protection cannot be fully applied. Ownership, as distinct from privacy and protection, has to be defined first, and not only for data linked directly to identity, as in the case of privacy, but also for the ‘indirect’ digital footprints left across the net: the images, video, and text created by any action of a user.

Privacy laws only define the data that must be kept private; they do not cover all data. For AI platforms, all personal data, and even non-personal data, matters, because non-personal data can be combined with personal data to digitally impersonate users. An impersonation engine needs only enough data to create videos like the one used to impersonate the company’s CFO on a video call.

The next step is to define consent for data use. The law for data fiduciaries and account aggregators clearly defines consent for the use of financial data. Why is such a definition of consent not applied to video data created and posted on social media? The default assumption should not be that the platform has consent, and the user should not be made to grant it as a sign-up default buried in a massive list of “terms and conditions” that nobody reads. The default has to be that no data can be used by the platform without explicit consent, and if the data is stolen or scraped by an AI, the fiduciary responsibility for maintaining its sanctity should rest with the platform.
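
To make the point concrete, the sketch below shows what explicit, purpose-bound consent could look like in software. It is purely illustrative and assumes a hypothetical consent ledger; the class and purpose names are not drawn from any statute or existing platform. The key design choice is that the default answer is “no”: unless the user has opted in to a specific purpose, the data cannot be used for it.

```python
# Illustrative sketch (hypothetical names): purpose-bound, opt-in consent where
# access is denied unless the user has explicitly granted that specific use.
from dataclasses import dataclass, field

@dataclass
class ConsentLedger:
    # Purposes each user has explicitly opted into, e.g. {"user-1": {"profile_display"}}
    grants: dict[str, set[str]] = field(default_factory=dict)

    def grant(self, user_id: str, purpose: str) -> None:
        """Record an explicit, purpose-specific consent from the user."""
        self.grants.setdefault(user_id, set()).add(purpose)

    def may_use(self, user_id: str, purpose: str) -> bool:
        # Default is denial: no buried terms-and-conditions opt-in, no implied consent.
        return purpose in self.grants.get(user_id, set())

ledger = ConsentLedger()
ledger.grant("user-1", "profile_display")
print(ledger.may_use("user-1", "ai_training"))      # False: never explicitly granted
print(ledger.may_use("user-1", "profile_display"))  # True: explicit, purpose-specific grant
```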

If an AI algorithm scrapes data from a platform, the platform should have a regulator-mandated fiduciary duty to restore the data to the user. Platforms should not be able to share user data with AI algorithms, whether internal or external, that use the data for purposes other than those for which it was given.

Data, obtained by hook or by crook, is the ammunition for GenAI’s gun. If legal rights over data are recognised and given to individuals, their agency over that data will be established ex ante, and the misuse of deepfakes will be countered at the creation stage itself.

 
