India’s New Advisory on AI and Generative AI Models

Artificial intelligence (AI) is a rapidly evolving field that has the potential to transform various sectors and domains of human activity.

However, AI also poses significant challenges and risks, such as ethical, social, legal, and security implications.

Therefore, it is important to ensure that AI is developed and deployed in a responsible and trustworthy manner, with respect for human rights, values, and norms.

In this context, the Ministry of Electronics and Information Technology (MeitY) of India has issued a new advisory to platforms and intermediaries regarding the deployment of AI models in India.

The advisory, issued on March 3, 2024, is a first-of-its-kind globally and aims to regulate the use of under-testing or unreliable AI models, large language models (LLMs), software using generative AI, and any algorithms that are still being tested, are in the beta stage of development, or are otherwise unreliable.

The advisory has been issued in the wake of recent controversies involving the misuse or bias of AI models, such as Google’s Gemini AI model, which allegedly gave biased or objectionable responses to questions about prominent political leaders, including Prime Minister Narendra Modi.

In this article, we will explain the main features, implications, and challenges of the advisory, and provide some insights and opinions from experts and stakeholders.

Main Features of the Advisory

The advisory has the following main features:

  • All platforms or intermediaries that deploy generative AI or any algorithms that are still being tested, are in the beta stage of development, or are unreliable in any form must seek explicit permission from the government of India before making them available to users on the Indian internet.
  • Such platforms or intermediaries must offer their services to Indian users only after appropriately labeling the possible and inherent fallibility or unreliability of the generated output. Further, a ‘consent popup’ mechanism may be used to explicitly inform users about this fallibility or unreliability (a minimal illustrative sketch of such a mechanism follows this list).
  • All platforms or intermediaries must ensure that their computer resources do not permit any bias or discrimination, or threaten the integrity of the electoral process, through the use of AI, generative AI, LLMs, or any other such algorithm.
  • All platforms or intermediaries must label all synthetically created media and text, or embed such artificially generated content with a unique identifier or metadata, so that it is easily identifiable (see the second sketch after this list).
  • All platforms or intermediaries must ensure compliance with the advisory with immediate effect and submit an Action Taken-cum-Status Report to MeitY within 15 days.
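
One way a platform might implement the labeling and ‘consent popup’ requirement is to attach a fallibility notice to every model response and record the user’s acknowledgement before serving output. The sketch below is illustrative only: the advisory prescribes no specific mechanism, and names such as record_consent and label_output are assumptions, not anything mandated by MeitY.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Notice text shown in the consent popup and attached to every response.
FALLIBILITY_NOTICE = (
    "This content was produced by an AI model that is under testing and may be "
    "unreliable or inaccurate. Please verify important information independently."
)

@dataclass
class ConsentRecord:
    """Records that a user acknowledged the fallibility notice."""
    user_id: str
    accepted: bool
    timestamp: str

def record_consent(user_id: str, accepted: bool) -> ConsentRecord:
    """Capture the user's response to the consent popup."""
    return ConsentRecord(
        user_id=user_id,
        accepted=accepted,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

def label_output(model_response: str) -> dict:
    """Attach the fallibility label to a piece of generated output."""
    return {"notice": FALLIBILITY_NOTICE, "content": model_response}

# Example: the user accepts the popup, then receives labeled output.
consent = record_consent(user_id="user-123", accepted=True)
if consent.accepted:
    print(label_output("Generated answer goes here."))
```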

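The requirement to embed a unique identifier or metadata in synthetically generated content could be met in many ways, from simple metadata tags to content-credential standards such as C2PA. As a purely illustrative example, and assuming the Pillow imaging library is available, the sketch below writes a provenance record into a generated PNG; the key name ai_provenance and the tag_generated_image function are hypothetical.

```python
import json
import uuid

from PIL import Image  # assumes the Pillow library is installed
from PIL.PngImagePlugin import PngInfo

def tag_generated_image(in_path: str, out_path: str, model_name: str) -> str:
    """Embed a unique identifier and basic provenance metadata into a generated PNG."""
    content_id = str(uuid.uuid4())  # unique identifier for this piece of content
    provenance = {
        "ai_generated": True,
        "content_id": content_id,
        "model": model_name,
    }
    image = Image.open(in_path)
    metadata = PngInfo()
    metadata.add_text("ai_provenance", json.dumps(provenance))  # hypothetical key name
    image.save(out_path, pnginfo=metadata)
    return content_id

# Example usage (paths and model name are placeholders):
# content_id = tag_generated_image("generated.png", "generated_tagged.png", "demo-model-v0")
```
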
Implications of the Advisory

The advisory has several implications for the platforms, intermediaries, users, and developers of AI and generative AI models in India. Some of the possible implications are:

  • The advisory may enhance the accountability and transparency of the platforms and intermediaries that deploy AI and generative AI models in India, and ensure that they adhere to the ethical and legal standards of the country.
  • The advisory may protect the users from being exposed to harmful, misleading, or inaccurate content generated by AI and generative AI models, and empower them to make informed choices and exercise their rights.
  • The advisory may prevent the misuse or abuse of AI and generative AI models for malicious or nefarious purposes, such as spreading misinformation, propaganda, hate speech, or influencing elections.
  • The advisory may foster the development and innovation of AI and generative AI models in India, by creating a conducive and supportive environment for the researchers, developers, and entrepreneurs in the field.

At the same time, the advisory may pose some challenges and limitations for the platforms, intermediaries, users, and developers of AI and generative AI models in India. Some of the possible challenges are:

  • The advisory may create a regulatory burden and uncertainty for the platforms and intermediaries that deploy AI and generative AI models in India, and affect their operational efficiency and competitiveness.
  • The advisory may restrict the access and availability of AI and generative AI models for the users in India, and limit their choices and opportunities.
  • The advisory may hamper the creativity and experimentation of the developers and researchers of AI and generative AI models in India, and stifle their potential and growth.
  • The advisory may also raise some technical and practical difficulties, such as defining and measuring the reliability and fallibility of AI and generative AI models, ensuring compliance with and enforcement of the advisory, and resolving the disputes and grievances that may arise from its implementation (a rough illustration of one way to measure reliability follows this list).
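
The advisory does not say how ‘reliability’ or ‘fallibility’ should be measured. One simplistic way to make the notion concrete is to score a model against a small labeled reference set and report an error rate, as in the rough sketch below; the reference data and the model_answer function are made up for illustration.

```python
def model_answer(question: str) -> str:
    """Placeholder for a call to the model under evaluation (hypothetical)."""
    return "42"

# A tiny, made-up reference set of question/expected-answer pairs.
reference_set = [
    ("What is 6 x 7?", "42"),
    ("What is the capital of India?", "New Delhi"),
    ("Which is the largest planet in the Solar System?", "Jupiter"),
]

# Count answers that do not match the reference and report an error rate.
errors = sum(
    1
    for question, expected in reference_set
    if model_answer(question).strip().lower() != expected.strip().lower()
)
error_rate = errors / len(reference_set)
print(f"Observed error rate on the reference set: {error_rate:.0%}")
```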

Insights and Opinions from Experts and Stakeholders

The advisory has elicited mixed reactions from experts and stakeholders in the field of AI and generative AI in India. Some of the insights and opinions are:

  • Rajeev Chandrasekhar, Minister of State for Electronics and Information Technology, said that the advisory is a signal that the future of regulation is here, and that platforms and intermediaries must comply with it or face legal consequences. He said that the advisory is aimed at protecting users and democracy from the harms of AI and generative AI models.
  • Ashwini Vaishnaw, Minister of Electronics and Information Technology, said that the advisory is a step towards ensuring that AI and generative AI models are properly trained and do not exhibit any racial or other biases. He said that the advisory is in line with the government’s vision of making India a global leader in AI and generative AI models.
  • Ravi Shankar Prasad, former Minister of Electronics and Information Technology, said that the advisory is a welcome move and a reflection of the government’s commitment to safeguarding the sovereignty and security of the country. He said that the advisory is necessary to prevent the misuse of AI and generative AI models by foreign or hostile actors.
  • Nandan Nilekani, co-founder and chairman of Infosys, said that the advisory is a balanced and pragmatic approach and a recognition of the importance and potential of AI and generative AI models. He said that the advisory is not a ban or a restriction, but permission and guidance for platforms and intermediaries to deploy AI and generative AI models in India.
  • Anand Mahindra, chairman of Mahindra Group, said that the advisory is a bold and visionary move and a demonstration of the government’s foresight and leadership in the field of AI and generative AI models. He said that the advisory is a boost for the innovation and entrepreneurship ecosystem in India and a catalyst for the development and adoption of AI and generative AI models in India.
  • Prabhu Ram, head of industry intelligence group at CyberMedia Research, said that the advisory is a restrictive and regressive measure and a hindrance to the growth and advancement of AI and generative AI models in India. He said that the advisory is a violation of the freedom and privacy of the platforms, intermediaries, and users, and a deterrent for the investment and collaboration in the field of AI and generative AI models in India.

Opinion on the advisory thus remains divided between those who see it as a necessary safeguard and those who see it as regulatory overreach.

Conclusion

The advisory issued by MeitY on AI and generative AI models is a landmark and unprecedented move by the government of India to regulate the use of under-testing or unreliable AI models, large language models (LLMs), software using generative AI, and any algorithms that are still being tested, are in the beta stage of development, or are otherwise unreliable.

The advisory has significant implications for the platforms, intermediaries, users, and developers of AI and generative AI models in India, and has received mixed reactions from the experts and stakeholders in the field.

The advisory is likely to have a lasting impact on the future of AI and generative AI models in India, and may set a precedent for other countries to follow.

Summary of the Advisory, Its Implications, and Challenges

  • Advisory: Platforms or intermediaries must seek explicit permission from the government before deploying under-testing or unreliable AI models, LLMs, generative AI, or any such algorithms in India. Implication: Greater accountability and transparency of the platforms and intermediaries. Challenge: Regulatory burden and uncertainty for the platforms and intermediaries.
  • Advisory: Platforms or intermediaries must label the possible and inherent fallibility or unreliability of the output generated by AI models, LLMs, generative AI, or any such algorithms. Implication: Protection and empowerment of users. Challenge: Restricted access to and availability of AI models, LLMs, and generative AI for users.
  • Advisory: Platforms or intermediaries must ensure that their AI models, LLMs, generative AI, or any such algorithms do not permit any bias or discrimination or threaten the integrity of the electoral process. Implication: Prevention and deterrence of the misuse or abuse of AI models, LLMs, and generative AI. Challenge: Hampering of the creativity and experimentation of developers and researchers.
  • Advisory: Platforms or intermediaries must label all synthetically created media and text, or embed such artificially generated content with a unique identifier or metadata. Implication: Fostering of the development and innovation of AI models, LLMs, and generative AI. Challenge: Technical and practical difficulties in defining and measuring reliability and fallibility, ensuring compliance and enforcement, and resolving disputes and grievances arising from the implementation of the advisory.
