Meta has raised significant concerns about MeitY's recent amendments to India's IT Rules, which cut the window for removing unlawful content from 36 hours to just three. While company leaders say they share the goal of stopping harmful material from spreading rapidly, they caution that a three-hour deadline could make due process and practical compliance difficult to manage.
What the IT Rules amendments change
The amendments tighten the compliance timelines intermediaries face under Section 79 of the IT Act: platforms must now take down unlawful content within three hours of receiving a government order. MeitY says the shorter window reflects how quickly deepfakes and misleading material can spread. The updated rules also introduce new obligations around artificial intelligence.
They bring AI-generated content squarely within the rules' scope, with mandatory labelling, and place greater responsibility on both users and platforms to mitigate risks from synthetic media. Beyond the three-hour mandate, the amendments compress other timelines as well. Platforms now have seven days, down from fifteen, to resolve user grievances, and non-consensual intimate imagery must be removed within two hours, down from 24, reflecting the need for faster relief in cases of serious harm. The amendments take effect on February 20.
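To make the labelling obligation concrete, a platform might attach a disclosure record to each upload. This is a purely hypothetical sketch: the amendments mandate labelling, but the article does not specify any field names or wire format, so everything below is illustrative.

```python
import json

# Hypothetical synthetic-media label attached to an upload.
# Field names are illustrative; no official schema is implied.
label = {
    "content_id": "post-123",       # platform-internal identifier
    "synthetic": True,              # AI-generated or AI-altered content
    "generator": "unspecified",     # tool used, if declared by the uploader
    "declared_by": "uploader",      # who asserted the label
    "visible_marker": True,         # whether a user-facing disclosure is shown
}
print(json.dumps(label, indent=2))
```

A real scheme would also need to survive re-sharing and downloads, which is one reason platforms and regulators discuss embedding provenance in the media itself rather than only in platform metadata.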
Meta's position on the three-hour takedown window
Meta executives say they are aligned in principle with the government's safety goals but consider the compressed window difficult to operationalise at scale. As one senior company official put it, concern about the speed at which content spreads is understandable, but decisions require careful review before any action is taken. Each government order, the company said, must be reviewed, investigated, and verified, a process that can involve policy, legal, and operations teams, as well as gathering context about the content in question. Completing all of this within three hours, they warned, would be very difficult in many situations.
Compliance feasibility and due process under the shorter timelines
Operationalising a three-hour standard affects everything from staffing levels and shift coverage to automated detection systems and escalation paths. Intermediaries must authenticate orders, interpret their scope, and confirm that the flagged content actually violates Indian law or the platform's own policies. Mistakes carry real consequences.
Taking down lawful content harms users' rights and public trust; leaving unlawful content up invites harm and the risk of regulatory penalties. The company stressed that workable, predictable rules are essential, noting that the draft rules on synthetically generated information did not include a three-hour requirement.
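The verify-then-act sequence described above can be sketched as a simple triage pipeline. This is an illustration only, assuming a hypothetical order format and stage names; it is not Meta's actual process, and the thresholds are invented for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical triage of a government takedown order.
# Stages: authenticate -> check legal basis -> decide, against a 3-hour clock.

@dataclass
class TakedownOrder:
    order_id: str
    content_url: str
    received_at: datetime
    signature_valid: bool  # arrived via an authenticated channel?
    legal_basis: str       # cited provision, e.g. a section of the IT Act

DEADLINE = timedelta(hours=3)

def triage(order: TakedownOrder, now: datetime) -> str:
    """Route an order through illustrative authenticate/scope/decide stages."""
    if not order.signature_valid:
        return "escalate: authenticity could not be verified"
    if not order.legal_basis:
        return "escalate: no legal basis cited, seek clarification"
    time_left = order.received_at + DEADLINE - now
    if time_left < timedelta(minutes=30):
        # Near the deadline: apply an interim restriction while review continues.
        return "interim-restrict: pending full legal review"
    return "queue: full policy and legal review"

order = TakedownOrder(
    order_id="ORD-001",
    content_url="https://example.com/post/123",
    received_at=datetime(2026, 2, 20, 9, 0, tzinfo=timezone.utc),
    signature_valid=True,
    legal_basis="Section 79(3)(b), IT Act",
)
print(triage(order, datetime(2026, 2, 20, 11, 45, tzinfo=timezone.utc)))
```

Even this toy version shows the tension the company describes: once the clock runs short, the pipeline is forced to act before full legal review completes.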
What the requirement means for intermediaries, cross-border content, and data flows
The requirement affects global platforms that host content created and shared across national borders. Determining jurisdiction, detecting copies and re-uploads, and handling content mirrored across multiple services can be technically demanding. Rushing this analysis risks over-removal or inconsistent enforcement.
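Re-upload detection is one of the technically demanding pieces mentioned above. A minimal sketch, assuming exact content hashing: production systems use perceptual hashes that survive re-encoding and cropping, so the byte-identical matching here only catches the easiest cases.

```python
import hashlib

# Minimal sketch of re-upload detection via exact content hashing.
# Real systems use perceptual hashing to catch re-encoded or cropped copies;
# this version only matches byte-identical uploads.

class HashIndex:
    def __init__(self) -> None:
        self._blocked: set[str] = set()

    @staticmethod
    def digest(content: bytes) -> str:
        return hashlib.sha256(content).hexdigest()

    def block(self, content: bytes) -> None:
        """Record the hash of content removed under an order."""
        self._blocked.add(self.digest(content))

    def is_reupload(self, content: bytes) -> bool:
        """Check a new upload against previously removed content."""
        return self.digest(content) in self._blocked

index = HashIndex()
index.block(b"<original harmful video bytes>")
print(index.is_reupload(b"<original harmful video bytes>"))       # True
print(index.is_reupload(b"<re-encoded copy, different bytes>"))   # False
```

The second check failing is the point: a trivially re-encoded copy evades exact matching, which is why duplicate detection at speed is harder than it looks.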
Data storage and network architecture add another layer of complexity. Services such as Facebook, Instagram, and WhatsApp rely on distributed systems for reliability and speed. Strict data localisation or hurried takedowns can conflict with how content is cached, replicated, and served across regions, making the deadline harder to meet.
Smaller intermediaries may feel the pressure more acutely. Unlike large platforms with mature trust and safety operations, newer services may lack round-the-clock response teams or the automation needed to meet a three-hour deadline across the board. Industry groups are preparing to seek flexibility for technically hard cases.
AI, deepfakes, and the trade-off between speed and accuracy
Deepfakes make a strong case for rapid action: harmful synthetic media can spread massively within minutes. AI can help by detecting manipulated media before users report it and by triaging reports faster than humans can, especially for known patterns of abuse.
AI is not a silver bullet, however. Automated systems produce false positives, misjudge context, and miss novel attack patterns. Platforms say they use red teaming, model cards, and continuous evaluation to improve accuracy. Even so, legal judgments that require nuance often need human review, and that does not fit easily into a three-hour window.
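The balance between automation and human review sketched above often comes down to confidence thresholds. The routing below is a hypothetical illustration, with invented threshold values, of how only high-confidence matches against known abuse patterns would be auto-actioned while borderline scores go to a human queue.

```python
# Illustrative routing of automated classifier scores.
# Thresholds and labels are hypothetical, not any platform's real policy.

AUTO_ACTION = 0.95   # above this: act automatically on known abuse patterns
HUMAN_REVIEW = 0.60  # between thresholds: queue for a human reviewer

def route(score: float) -> str:
    """Map a classifier confidence score to an enforcement path."""
    if score >= AUTO_ACTION:
        return "auto-remove"
    if score >= HUMAN_REVIEW:
        return "human-review"
    return "no-action"

for score in (0.99, 0.75, 0.40):
    print(score, route(score))
```

The difficulty under a three-hour clock is that the middle band, where human judgment is most needed, is exactly where review takes longest.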
What the industry wants next: clarity, flexibility, and guardrails
Executives across the sector are asking for clarity on scope and process. Key requests include a standard order format, secure and reliable delivery channels, acknowledgement mechanisms, and rules for how the clock runs over nights, weekends, and public holidays. Clear definitions reduce uncertainty and speed up lawful compliance.
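The clock-counting question is easy to illustrate. In this sketch, assuming a straight three-hour window with no pause for nights or holidays (the article does not say the rules provide one), an order received late on a Saturday evening falls due in the small hours of Sunday morning.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical deadline arithmetic under a straight three-hour window.
# Assumes the clock never pauses; whether it should is exactly what
# industry wants clarified for nights, weekends, and holidays.

IST = timezone(timedelta(hours=5, minutes=30))  # Indian Standard Time

def takedown_deadline(received_at: datetime, hours: int = 3) -> datetime:
    """Compute the removal deadline from the moment of order receipt."""
    return (received_at + timedelta(hours=hours)).astimezone(IST)

# An order received at 23:30 IST on a Saturday falls due at 02:30 Sunday,
# illustrating the overnight-coverage problem described above.
received = datetime(2026, 2, 21, 23, 30, tzinfo=IST)
print(takedown_deadline(received).isoformat())  # 2026-02-22T02:30:00+05:30
```

For a global platform, the same arithmetic repeats across every time zone its review teams sit in, which is why acknowledgement timestamps and a defined start-of-clock event matter.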
Flexible handling of hard cases is also under discussion. Stakeholders want guidance on staged responses, such as immediate interim restrictions followed by verified takedowns, and safe harbours for prompt, good-faith efforts. Playbooks for cross-border content and duplicate detection would further reduce friction. Transparency can balance speed with the protection of rights: companies propose thorough logging of orders, appeal mechanisms, and regular public reporting on compliance and error rates.
A short preparation period, joint exercises with MeitY, and continued dialogue would help align expectations before full enforcement begins. The new IT Rules mark a clear shift toward faster, AI-aware content governance. Meta's response captures the central tension of modern platform regulation: moving quickly enough to limit harm without sacrificing accuracy, due process, or users' rights. Striking that balance will likely require clear rules, practical flexibility, and sustained cooperation.