
Will AI tools help detect telecom fraud?

  • To weed out cases of fraudulent SIM cards used for financial and other cyber scams, the Department of Telecommunications (DoT) has begun using an AI-based facial recognition tool named ‘Artificial Intelligence and Facial Recognition powered Solution for Telecom SIM Subscriber Verification’ (or ASTR).
  • While the DoT has publicised successes in busting fake SIM cards using ASTR, India does not yet have a personal data protection regime or any AI-specific regulation.

Why is artificial intelligence being used to detect telecom frauds?

  • India is the second-largest telecom ecosystem in the world, with about 117 crore subscribers.
  • The DoT aims to use ASTR to analyse the entire subscriber base of all telecom service providers (TSPs).
  • Conventional text-based analysis can find similarities between proofs of identity or address and verify whether that information is accurate, but it cannot trawl photographic data to detect similar faces.
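The limitation described above can be illustrated with a minimal sketch (this is not DoT's actual system; the registration records and names here are hypothetical): text-based de-duplication groups SIM registrations by normalised proof-of-identity text, so the same person re-registering under different papers slips through, which is exactly the gap face matching is meant to close.

```python
# Illustrative text-based de-duplication sketch (hypothetical data).
from collections import defaultdict

def normalise(text: str) -> str:
    """Lower-case and collapse whitespace so trivial textual variants match."""
    return " ".join(text.lower().split())

def group_by_identity(registrations):
    """Map each normalised (name, id_number) pair to the SIMs registered under it."""
    groups = defaultdict(list)
    for sim, name, id_number in registrations:
        key = (normalise(name), normalise(id_number))
        groups[key].append(sim)
    return groups

registrations = [
    ("SIM-001", "Ravi Kumar",   "ID-1234"),
    ("SIM-002", "ravi  kumar",  "id-1234"),   # trivial variant: caught by text analysis
    ("SIM-003", "R. K. Sharma", "ID-9999"),   # same face, different papers: missed
]

groups = group_by_identity(registrations)
duplicates = {k: v for k, v in groups.items() if len(v) > 1}
# Only the textual near-duplicate is flagged; SIM-003 escapes detection.
```

Only one duplicate group is found, even though (in this hypothetical) SIM-003 belongs to the same person; photographic comparison is needed to catch that case.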

What is ASTR and how will it detect fake SIM connections?

  • Facial recognition is an algorithm-based technology which creates a digital map of the face by identifying and mapping an individual’s facial features, which it then matches against the database to which it has access.
  • How it works
    • Step 1: Detection- Facial recognition technology (FRT) relies on the use of algorithms to detect the presence of a face in an image, video or real time footage.
    • Step 2: Analysis- FRT analyses the image of the face, mapping facial geometry and the individual’s facial features to create a faceprint. Facial feature extraction uses mathematical representations of distinctive features on individual faces, yielding unique identifiers that distinguish one face from another.
    • Step 3: Recognition- At this stage, it automatically cross-references a person’s facial features with a pre-existing database of images called a gallery dataset.
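The three steps above can be sketched in miniature. This is an illustrative toy, not ASTR's implementation: it assumes detection and analysis have already produced faceprints as fixed-length feature vectors (real systems derive these with deep networks), and the subscriber IDs, vectors, and threshold are all invented for the example. Recognition then reduces to comparing a probe faceprint against the gallery dataset by cosine similarity.

```python
# Toy recognition step: compare a probe faceprint against a gallery
# of stored faceprints (hypothetical subscriber IDs and vectors).
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two faceprint vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def recognise(probe, gallery, threshold=0.95):
    """Step 3: cross-reference a faceprint with the gallery dataset,
    returning subscriber IDs whose stored faceprints exceed the threshold."""
    return [sid for sid, faceprint in gallery.items()
            if cosine_similarity(probe, faceprint) >= threshold]

# Hypothetical gallery: subscriber ID -> faceprint vector.
gallery = {
    "SUB-A": [0.9, 0.1, 0.3],
    "SUB-B": [0.2, 0.8, 0.5],
    "SUB-C": [0.91, 0.12, 0.29],  # nearly identical faceprint to SUB-A
}

matches = recognise([0.9, 0.1, 0.3], gallery)
```

Here the probe matches both SUB-A and SUB-C, which is how a system of this kind could flag one face registered under multiple subscriber identities; the threshold choice directly trades off misidentification against missed duplicates, the accuracy concern discussed below.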

Concerns associated with the use of facial recognition AI

  • Inaccuracy (technical): Technical errors due to occlusion (a partial or complete obstruction of the image), bad lighting, facial expression, ageing and so on
  • Misidentification and Underrepresentation: Errors in FRT also occur due to the underrepresentation of certain groups of people in the datasets it uses for training. Studies on FRT systems in India indicate a disparity in error rates based on the identification of Indian men and Indian women. Globally, research has revealed that its accuracy rates fall starkly based on race, gender, skin colour and so on.
  • Privacy and Consent: Sometimes, individuals may not have consented (or not be aware) to their facial data being used or have control over the extent of its processing by public and private players.
  • Ethical concerns: These relate to privacy, consent, and mass surveillance.
  • Other privacy concerns: In many cases, an individual may not control the processing of their data or may not even be aware of it. This can result, and already has resulted, in wrongful arrests and exclusion from social security schemes.
  • No public notification about the use of ASTR on user data: An RTI filed by the publication with the DoT did not reveal any information about how ASTR safeguards data or for how long it retains customer data.

Legal framework governing such technology in India

  • In India, there is no data protection law in place after the government withdrew the Personal Data Protection Bill, 2019 last year following extensive changes recommended by a Joint Parliamentary Committee.
  • Secondly, India also does not have an FRT-specific regulation.
    • While there is a regulatory vacuum, there are more than 130 government-authorised FRT projects in the country, spanning use cases such as authentication for access to official schemes, airport check-in (the DigiYatra project), and real-time use on suspects.

Conclusion

  • NITI Aayog has published several papers enunciating India’s national strategy towards harnessing the potential of AI reasonably.
  • It says that the use should be with due consent and should be voluntary and at no time should FRT become mandatory.
  • It should be limited to instances where both public interest and constitutional morality can be in sync.
  • Enhanced efficiency of automation should per se not be deemed enough to justify the usage of FRT.
  • It is yet to be seen whether ASTR aligns with these principles and with the Puttaswamy judgment of 2017, which recognised privacy as a fundamental right.