Center scraps requirement to seek its nod before launching ‘untested’ AI
- After facing intense backlash on its first advisory on election integrity for artificial intelligence (AI) platforms, the IT Ministry has amended it.
Key Highlights
- “Under-tested/unreliable Artificial Intelligence foundational model(s)/LLM/Generative AI, software(s) or algorithm(s), or further development on such models
- Should be made available to users in India only after appropriately labelling the possible inherent fallibility or unreliability of the output generated.”
- In its initial advisory issued to online intermediaries like Meta and Google earlier this month,
- The government had said that companies would have to seek its “explicit permission” before launching untested AI systems in India.
- While the government had earlier clarified that the advisory would not apply to AI start-ups but only to “large” platforms,
- The requirement to seek its nod has now been dropped altogether.
- The first advisory was criticized by some startups in the generative AI space, including those invested in the ecosystem abroad, over fears of regulatory overreach by the Indian government into the still-nascent industry.
- The revised advisory, similar to the earlier one, has been sent as a set of “due diligence” measures that online intermediaries need to follow under the current Information Technology Rules, 2021.
- Though the advisories are not legally binding, questions have been raised about their legal basis:
- Under which law can the government issue guidelines to generative AI companies, since India’s current technology laws do not directly cover large language models?
Prelims Takeaway
- Information Technology Rules, 2021
- AI