Karnataka Establishes Committee for Responsible AI Framework and Governance

The Karnataka government has constituted a Committee on Responsible AI to develop a framework for the safe and ethical use of AI in governance and public services. Chaired by Kris Gopalakrishnan, the committee will focus on AI governance, risk management, and citizen protection, in line with national and international expectations.

Committee composition and leadership

Kris Gopalakrishnan chairs the committee, with N. Manjula, Secretary of the Department of Electronics, IT, Biotechnology and Science & Technology, as co-chair. The pairing of industry experience with administrative oversight is intended to connect public policy with the technical realities of system design.

Members are drawn from industry, academia, law, and policymaking. Technology companies and institutions including IBM, Accenture, Wipro, Kyndryl, IIIT Bangalore, NASSCOM, and Sarvam AI will contribute expertise on AI capabilities and how the technology works in practice. The breadth of representation is meant to balance innovation with the public interest.

Mandate, deliverables, and timeline

The committee will draft a Responsible AI Policy and an implementation roadmap tailored to Karnataka. Its scope covers both guiding principles and the operational practices governing how the state procures, deploys, and oversees AI tools that affect citizens.

Officials expect a preliminary report within 60 days and full recommendations within 90 days. Deliverables will include a policy framework, a risk-rating methodology for government AI applications, and a practical plan for rolling out safeguards across departments.

Core principles and governance focus

Members discussed aligning the state’s rules with India’s national approach to AI governance and with international best practice. The principles under consideration include lawfulness, fairness, non-discrimination, privacy, and safety and security.

Transparency and accountability will be central, with recommendations for disclosing when citizens are interacting with AI systems. The committee also stressed human oversight, inclusivity, and safeguards that protect national interests without stifling beneficial innovation.

Risk classification, prohibited practices, and sectoral safeguards

The committee intends to develop a risk-classification framework to determine whether a government AI use case is low-, medium-, or high-risk. High-risk areas under close scrutiny include healthcare, education, policing, and welfare delivery, where errors can have severe consequences.
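To illustrate how such a tiering exercise might look in practice, here is a minimal sketch in Python. The sector list, criteria, and thresholds are assumptions for illustration only; they are not the committee's actual framework.

```python
# Minimal illustrative sketch of a risk-tiering check for government AI use cases.
# Sectors, criteria, and thresholds are hypothetical assumptions, not the
# committee's framework.

from dataclasses import dataclass

HIGH_RISK_SECTORS = {"healthcare", "education", "policing", "welfare"}

@dataclass
class AIUseCase:
    sector: str                 # e.g. "healthcare", "transport"
    affects_entitlements: bool  # does the output change a citizen's benefits or rights?
    fully_automated: bool       # is there no human reviewing individual decisions?

def risk_tier(use_case: AIUseCase) -> str:
    """Classify a use case as 'high', 'medium', or 'low' risk."""
    if use_case.sector in HIGH_RISK_SECTORS or use_case.affects_entitlements:
        # Fully automated decisions in sensitive areas get the strictest treatment.
        return "high" if use_case.fully_automated else "medium"
    return "low"

# Example: a fully automated welfare-eligibility screener would be rated high risk.
print(risk_tier(AIUseCase("welfare", affects_entitlements=True, fully_automated=True)))
```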

It will also identify prohibited practices, such as ‘social scoring’, unlawful surveillance, and discriminatory profiling. Recommendations will cover independent audits, verification mechanisms, and procurement rules requiring vendors to be transparent and accountable.

Data governance and legal compliance

Data governance sits at the centre of the committee’s plans. Recommendations will reference India’s Digital Personal Data Protection Act to ensure that personal data is handled in line with legal requirements for consent, purpose limitation, and security.

The group will examine data minimisation, anonymisation, and secure data sharing between agencies. Strong data practices are intended to reduce bias in AI models and to protect vulnerable people from misuse.
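As a rough sketch of what minimisation and pseudonymisation can mean before an inter-agency exchange, consider the following Python example. The field names and the salted-hash approach are illustrative assumptions; genuine anonymisation would also require a re-identification risk assessment.

```python
# Illustrative sketch of data minimisation and pseudonymisation before sharing.
# Field names and the salted-hash approach are assumptions for illustration only.

import hashlib

SHARED_FIELDS = {"district", "scheme", "benefit_amount"}  # only what the receiving agency needs

def pseudonymise_id(citizen_id: str, salt: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((salt + citizen_id).encode()).hexdigest()[:16]

def minimise_record(record: dict, salt: str) -> dict:
    """Keep only the agreed fields and swap the identifier for a pseudonym."""
    shared = {k: v for k, v in record.items() if k in SHARED_FIELDS}
    shared["person_ref"] = pseudonymise_id(record["citizen_id"], salt)
    return shared

raw = {"citizen_id": "KA-1234", "name": "A. Citizen", "district": "Mysuru",
       "scheme": "pension", "benefit_amount": 1200}
print(minimise_record(raw, salt="per-exchange-secret"))
```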

Implementation, audits, and procurement rules

Implementation will emphasise practical mechanisms, such as independent audits, grievance-redressal channels for affected citizens, and clear guidelines for departments adopting AI tools. The committee will recommend standards for testing, validation, and third-party assessment.

Procurement rules may include mandatory AI impact assessments, open interfaces for transparency, and contractual clauses guaranteeing auditability and model explainability. These measures are intended to make deployments verifiable and to build public trust.

Implications for Karnataka’s AI ecosystem and public services

State leaders describe the initiative as central to Karnataka’s ambition of a ‘Deeptech Decade’. The committee’s work is meant to enable innovation while preserving the public trust needed for wider adoption of AI in public services.

If implemented well, the framework could accelerate economic growth, create new categories of jobs, and position Karnataka as a leader in responsible AI adoption. The committee’s balanced approach seeks to pair technical ambition with clear ethical guardrails.

The committee’s first meeting set a practical tone: encourage innovation, limit harm, and create clear, enforceable rules for AI in the public sector. The tight timeline suggests the state wants actionable advice quickly, while the breadth of membership signals a policy intended to be both careful and implementable.