AI Platforms Need To Seek Govt's Permission Before Any Launch: Union Min Rajeev Chandrasekhar
The government has advised that AI-generated content be labelled, or embedded with permanent unique metadata or an identifier, so that the original creator of any misinformation or deepfake can be traced.
The Indian government is building a system of checks and balances to ensure the responsible use of Artificial Intelligence (AI) in the country. It has now asked AI platforms to seek its permission before launching any AI product in India. Union Minister Rajeev Chandrasekhar told news agency ANI on Saturday that all intermediaries have been asked to ensure compliance with the advisory, which was issued on the evening of March 1 and came into effect immediately.
The intermediaries also need to submit an action-taken-cum-status report to the ministry within 15 days, the Union Minister added. He also said, "This signals that we are moving to a regime where a lot of rigour is needed before a product is launched. You don't do that with cars or microprocessors. Why is it that for such a transformative tech like AI there are no guardrails between what is in the lab and what goes out to the public?"
ANI quoted Chandrasekhar as saying, "Yesterday, we issued a second advisory. This advisory helps platforms to be a lot more disciplined about taking their AI models and platforms from the lab directly to the market. We don't want that to happen without guardrails and disclaimers in place, so that the consumer knows what is unreliable."
He also said the Indian government has advised that, from now on, AI-generated content should be labelled or embedded with permanent unique metadata or an identifier so that the creator or first originator of any misinformation or deepfake can be determined. He said, "If they want to deploy a model that is error-prone, they have to label it as under testing, take government permission, and explicitly seek the confirmation and consent of the user that it is an error-prone platform. They can't come back later and say it is under testing."