The article undertakes an extensive discussion of the legal responsibility arising from AI-generated errors and instances of incorrect or misleading advice. Within the Indian legal framework, it analyzes the evolving regulatory landscape, including guidance issued by the Ministry of Electronics and Information Technology (MeitY) and oversight by sector-specific regulators such as the RBI and TRAI. At the same time, it draws on global developments and international incidents to place the discussion in a broader context.
The analysis is structured in a question-and-answer format, focusing on key issues such as AI-related risks, existing and emerging regulatory frameworks, governance and risk-management practices, and the scope of the duty of care in the design and deployment of AI systems. The objective is to give in-house legal teams, technologists, policymakers, and researchers a practical understanding of this emerging area of law, with real-world examples and guidance to help anticipate and manage potential AI-driven liability.
What is the meaning of AI Accountability and AI Liability?
AI accountability is the general principle that those who develop, own, and use AI systems must remain answerable for the outcomes those systems produce. At its core, it ensures that a human decision maker can be traced (ethically or legally) when an AI causes harm or produces flawed outputs. AI liability, by contrast, is a more precise legal term: it identifies the parties responsible when an AI system’s actions result in loss, injury, or damage, and determines who bears the legal consequences, whether in the form of compensation, penalties, or other sanctions, when something goes wrong.
Overall, these concepts reinforce a central principle: AI systems cannot operate in a legal vacuum. When AI makes harmful decisions, the law must be able to identify responsible individuals or entities, whether through civil claims, regulatory action, or criminal prosecution.
Why Are AI Errors and “Wrongful Advice” Legally Problematic?
Errors generated by AI systems and the provision of incorrect advice through automated tools pose significant legal challenges because they disrupt traditional models of responsibility.
In most professions, bad advice or errors can give rise to malpractice or negligence liability (e.g., a doctor misdiagnosing a patient, an accountant miscalculating figures). The difficulty arises when similar errors are produced by an AI system. Several factors contribute to this complexity:
- Unpredictability and Complexity: Advanced AI models, particularly those based on machine learning, can behave in ways that are difficult to predict.
- Black Box Decision-Making: Many AI systems, such as deep neural networks, operate as “black boxes”, meaning that their results are not easily explainable.
- Several Actors Involved: The AI ecosystem typically involves many hands: data providers, model developers, system integrators, and end users, making it harder to pinpoint where responsibility should lie.
- No AI Personality: AI systems have no legal personality and are not recognised as legal persons, so responsibility must ultimately attach to a human or corporate actor.
Examples: When AI Goes Wrong, Who Is Accountable?
Looking at specific situations helps explain how legal responsibility for AI-related mistakes or wrongful advice may arise:
- Autonomous Vehicle Accident: Consider a self-driving vehicle controlled by AI software. If such a vehicle crashes, the question immediately arises of who should be held responsible for the resulting damage.
- Artificial Intelligence in Medicine Going Wrong: Take the example of an AI-powered medical diagnosis application that provides advice to a patient at home. If the app incorrectly tells a user that their symptoms are minor when they are in fact serious, the resulting delay in seeking appropriate care might cause serious injury or even death.
- Financial Advice and Robo-Advisors: AI-based financial advisors and trading algorithms raise important liability concerns in a highly regulated sector. If an investor follows AI-generated advice and suffers major financial losses, can legal action be taken against the provider of that tool? By comparison, if a human financial advisor gave similarly poor advice in violation of professional norms, the advisor could face negligence liability or regulatory fines. Companies providing AI-based advice usually seek to avoid liability by stating that their services are informational rather than personalised advice, and by requiring users to accept the associated risks.
- Chatbot Giving the Wrong Legal Advice: A particularly relevant example involves an AI chatbot (such as a generative AI assistant) providing a person with incorrect legal advice. For instance, a small business may rely on a free AI tool to draft a crucial contract or answer a legal question, only to discover later that the AI’s output was inaccurate, leading to financial loss or legal disputes.
How Do Existing Laws Apply to AI? (Negligence, Product Liability, and More)
In most jurisdictions, including India, there is currently no dedicated statute dealing exclusively with AI liability. Instead, legal responsibility for harm caused by AI is derived from traditional legal doctrines: negligence, product liability, breach of warranty, and consumer protection laws. Here’s how they come into play:
- Negligence and Duty of Care: Negligence is a core concept of tort law. To establish negligence, a complainant must demonstrate that the defendant owed a duty of care, breached that duty by failing to act in a reasonable manner, and caused harm as a result of that breach.
- Product Liability (Defective Products): Product liability regimes, particularly under consumer protection laws, impose strict liability on manufacturers and sellers for defective products that cause injury. In India, the Consumer Protection Act, 2019 (CPA) introduced a structured framework for product liability claims.
- Breach of Warranty or Contract: Many AI tools are provided under contract or service terms. If an organisation purchases an AI solution that fails to perform as agreed and causes losses, the purchaser may bring a claim for breach of contract or breach of warranty.
- Regulatory Requirements and Industry Laws: The use of AI does not remove existing legal duties in regulated industries. For example, a bank that uses AI for lending decisions must still comply with banking regulations, including prohibitions on unlawful discrimination. The RBI’s recommendations, including the 2025 FREE-AI report, emphasise that ultimate accountability for AI use rests with boards and senior management, even where third-party technology is involved.
- Criminal Liability: A more complex question arises where AI-related conduct would otherwise amount to a criminal offence. For example, if an autonomous vehicle causes a fatal accident, can a criminal negligence or manslaughter prosecution be brought against the developers or users? This area remains largely unexplored and untested, although in earlier cases involving autonomous vehicles, human supervisors have faced criminal charges for failing to properly monitor the system.
Global Regulatory Trends: How Are Other Countries Regulating AI Decision-Making?
Across jurisdictions, policymakers are actively engaging with how AI decision-making should be regulated and where legal responsibility should lie when things go wrong. While India is currently taking a more flexible, sector-focused approach (examined in the next section), developments in jurisdictions such as the European Union, the United States, and the United Kingdom remain crucial, as they shape global regulatory standards and business expectations.
- European Union – AI Act and Liability Directives: The EU has presented the most comprehensive proposal for AI regulation. The EU AI Act, with obligations expected to become enforceable between 2025 and 2026, follows a risk-based approach to regulation. Although it does not directly deal with compensation for harm, it sets stringent obligations for “high-risk” AI systems used in areas such as healthcare, finance, transportation, and employment. These obligations include risk evaluation, transparency, human oversight, accuracy, and resilience. The AI Act focuses on preventing harm before it occurs, for example by requiring that high-risk AI be designed with fail-safes and carefully tested for bias and potential errors. Non-compliance can lead to hefty fines, indirectly driving AI governance and risk-management practices within companies.
- United States – Current Laws and New Policy Development: As of early 2026, the U.S. has not passed any federal law that specifically regulates AI liability. Instead, regulation has developed through sector-specific rules and existing legal frameworks. For example, autonomous vehicles in the United States are addressed through product liability claims and traffic and safety regulations.
- United Kingdom and Others: The UK has so far taken a principles-based and comparatively light-touch regulatory stance. In 2023, the UK government released an AI white paper that promoted non-binding guidance and allowed existing regulators to take AI factors into account as part of their mandates, an approach broadly aligned with India’s regulatory philosophy.
- International Bodies: At the international level, several organisations have established broad principles on the responsible use of AI. While these are not yet binding, they continue to influence national policy and regulatory approaches across jurisdictions.
India’s Approach: Sectoral Regulators and Emerging Guidelines
India has consciously chosen not to rush into enacting a single, overarching AI law. It has instead opted to adapt existing legislation and permit sectoral regulators to govern the use of AI. As part of the IndiaAI initiative, the Ministry of Electronics and Information Technology (MeitY) released the India AI Governance Guidelines in November 2025. These guidelines lay out a policy blueprint for safe and responsible AI across industries, reflecting India’s strategy of encouraging innovation while addressing potential risks. The key features of India’s approach are:
- No Umbrella AI Legislation: The government, for now, has decided against introducing a standalone AI statute. Instead, it expects regulators such as the Reserve Bank of India (RBI), the Securities and Exchange Board of India (SEBI), the Insurance Regulatory and Development Authority of India (IRDAI), the Telecom Regulatory Authority of India (TRAI), and others to develop sector-specific AI rules.
- The Seven Sutras – Principles for Responsible AI: MeitY’s guidelines set out seven main principles, or sutras, which form the foundation of AI governance in India: Trust, People First, Innovation over Restraint, Fairness and Equity, Accountability, Understandable by Design, and Safety and Resilience. Several of these principles are especially pertinent to accountability and liability:
- “Trust” and “People First” emphasise that human well-being and control must remain central, particularly where AI systems affect individual rights or safety.
- “Accountability” expressly calls for a clear allocation of responsibility in the deployment of AI, so that developers and deployers remain visible and answerable. It requires that responsibility for AI outcomes be traceable to a project lead, a vendor, or another identifiable party, so that accountability is not diluted.
- “Understandable by Design” focuses on transparency and explainability, ensuring that users and regulators can understand how an AI system functions. This has direct implications for liability, as opaque systems are difficult to audit.
- “Safety, Resilience and Sustainability” requires AI systems to be designed to minimise risk and to withstand misuse or failure. Demonstrating adherence to these principles can show due diligence, while a lack of safeguards might serve as evidence of negligence.
- Graded, Proportionate Liability Framework: The guidelines support a graded system of liability. This means that liability and accountability should reflect the level of risk and the role each actor plays in the AI value chain. High-risk AI applications would attract greater responsibility, and presumably a higher standard of care, than low-risk applications.
- Assigning Liability Across the AI Value Chain: The guidelines acknowledge that AI systems involve multiple stakeholders and emphasise the importance of defining roles and responsibilities at each stage. For instance, one proposed approach is to examine existing legislation, such as the IT Act, and clarify how liability attaches to different participants in the AI supply chain.
- Practical Guidance for Industry: The fourth section of MeitY’s guidelines offers practical recommendations for businesses and government agencies that use AI. These include self-assessment tools covering human oversight, bias testing, data quality, and similar questions. The guidelines further suggest:
- Appointing a senior management official responsible for AI governance within the organisation.
- Ongoing staff training on regulatory and technical developments.
- Implementing internal rules and audits to ensure accountability throughout the AI development lifecycle.
- Grievance Redressal: The guidelines also call for grievance redressal for individuals adversely affected by AI decisions. For instance, if a customer is refused a service by an AI system, he or she must be able to appeal the decision or request human review (see the illustrative sketch at the end of this list). This approach aligns with regulatory expectations in sectors such as finance, where the RBI’s AI framework emphasises grievance mechanisms and human oversight for critical decisions.
- Promoting Self-Regulation and Accountability Backstops: India also relies on voluntary AI governance and risk-management systems, including industry codes of conduct, certifications, and standards, as a first layer of risk management. Examples include NASSCOM’s guidelines on responsible AI and the AI standards proposed by the Bureau of Indian Standards.
- Role of Sectoral Regulators: Several regulators have already taken steps or are expected to do so:
- RBI (Banking and Finance): In August 2025, a committee constituted by the RBI released the FREE-AI report on the responsible and ethical enablement of AI in the financial sector.
- SEBI (Securities): SEBI has long regulated algorithmic trading, with safeguards against market crashes and requirements for broker oversight of algorithms.
- TRAI (Telecom): In the telecom sector, AI may be used for network management and customer service. Failures in such systems, such as network outages or faulty bandwidth control, would raise concerns around service quality and consumer impact.
- Liability and Courts in India: So far, Indian courts have not delivered a landmark judgment imposing liability for AI-caused harm. However, courts are beginning to encounter AI-related cases in other legal contexts, indicating that judicial engagement with these questions is gradually emerging.
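To make the human-review expectation concrete, here is a minimal sketch in Python of how a deployer might route adverse or appealed automated decisions to a person. It is an illustration only: the `AutomatedDecision` type, the 0.7 confidence threshold, and the escalation rule are hypothetical assumptions, not requirements drawn from the MeitY guidelines or the RBI framework.

```python
from dataclasses import dataclass
from enum import Enum


class Outcome(Enum):
    APPROVED = "approved"
    DENIED = "denied"


@dataclass
class AutomatedDecision:
    customer_id: str
    outcome: Outcome
    confidence: float   # model's self-reported confidence, 0.0 to 1.0
    reasons: list       # human-readable reason codes for the outcome


def needs_human_review(decision: AutomatedDecision, appealed: bool) -> bool:
    """Route adverse, appealed, or low-confidence decisions to a person."""
    if appealed:
        return True  # an appeal always escalates to a human reviewer
    if decision.outcome is Outcome.DENIED:
        return True  # adverse decisions are never final without review
    return decision.confidence < 0.7  # hypothetical escalation threshold


# Example: a denied application is queued for a human officer.
decision = AutomatedDecision("cust-001", Outcome.DENIED, 0.91,
                             ["income below programme threshold"])
if needs_human_review(decision, appealed=False):
    print(f"{decision.customer_id}: escalate to human reviewer; "
          f"reasons: {decision.reasons}")
```

In practice, the escalation criteria would be set by the organisation’s own risk assessment and by any applicable sectoral rules.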
Practical Steps for Businesses: Managing AI Risks and Responsibilities
Managing the risk of AI errors and liability arising from incorrect advice is a key concern for in-house legal departments and companies using AI. The following practical steps and best practices align with an AI governance and risk-management mindset:
- Perform AI Risk Assessment and Impact Analysis: Evaluate the potential risks before deploying an AI system.
- Deploy Robust Testing and Validation: AI systems should be tested on diverse, high-quality data, including edge cases. For instance, a medical diagnostic AI must be trained and evaluated across different demographics and rare conditions to avoid unsafe or inaccurate results (see the first sketch after this list).
- Human Oversight and Control: Decide which AI-driven decisions require human review or approval. High-impact or sensitive decisions should not be left entirely to automated systems.
- Plan for Worst-Case Outcomes: As part of the risk assessment, consider whether the AI could disadvantage protected groups, generate unsafe recommendations, or cause harm in extreme scenarios. Planning for worst-case outcomes is essential.
- Transparency and Explainability: Where feasible, AI outputs should be explainable to users and internal auditors. For example, if an AI rejects an insurance claim, the organisation should be able to give a clear reason, such as inconsistencies in the submitted documents (see the second sketch after this list).
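To make the testing point concrete, the following is a minimal sketch, in Python, of evaluating a model’s error rate across demographic slices before release. It assumes a hypothetical model object exposing a `predict(features)` method and a labelled evaluation set; the 5% disparity threshold is an invented example, not a legal standard.

```python
from collections import defaultdict


def error_rate_by_group(model, records):
    """Compute per-group error rates so disparities surface before launch.

    `records` is an iterable of (features, true_label, demographic_group)
    tuples; `model` is any object exposing a predict(features) method.
    """
    errors, totals = defaultdict(int), defaultdict(int)
    for features, label, group in records:
        totals[group] += 1
        if model.predict(features) != label:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}


def passes_fairness_gate(rates, max_gap=0.05):
    """Block release if the worst group's error rate is far above the best."""
    return max(rates.values()) - min(rates.values()) <= max_gap
```

Recording the output of such checks also creates the audit trail on which a due-diligence argument can later rest.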
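For the explainability point, here is a similarly minimal sketch of recording human-readable reason codes alongside each automated outcome, so that a rejected insurance claim can be explained to the claimant and to auditors. The field names, thresholds, and reason strings are hypothetical.

```python
def assess_claim(claim: dict):
    """Return an outcome plus the reason codes that drove it.

    Storing reasons with every decision lets the organisation explain a
    rejection (e.g., document inconsistencies) to users and regulators.
    """
    reasons = []
    if claim["submitted_amount"] > claim["policy_limit"]:
        reasons.append("claimed amount exceeds policy limit")
    if not claim["documents_consistent"]:
        reasons.append("inconsistencies in the submitted documents")
    outcome = "rejected" if reasons else "approved"
    return outcome, reasons


outcome, reasons = assess_claim({
    "submitted_amount": 120_000,
    "policy_limit": 100_000,
    "documents_consistent": False,
})
print(outcome, reasons)
# rejected ['claimed amount exceeds policy limit',
#           'inconsistencies in the submitted documents']
```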
Conclusion
It is undeniable that AI technologies offer significant benefits and efficiencies, but they also give rise to new risks. Accountability and liability in AI are not about restricting innovation; they are about building the confidence and safety nets needed to ensure that AI-driven development remains sustainable.
FAQs – AI accountability and liability
1. What do AI accountability and liability entail?
AI accountability and liability refer to the responsibility of developers, operators, and users of AI systems for any harm caused by AI outputs, including mistakes, biases, or incorrect decisions.
2. Who is legally liable when an AI system provides incorrect advice?
When an AI makes a mistake, legal liability usually rests with the human or corporate entity designing, deploying, or relying on the system, rather than with the AI itself.
3. Can companies be held liable for wrongful AI advice?
Yes. Companies may be held liable if their AI systems provide incorrect advice due to poor design, inadequate testing, or unsafe implementation.
4. Do existing laws impose a duty of care in relation to AI?
AI systems themselves do not owe a duty of care, but the organisations creating or deploying them must take reasonable steps to prevent foreseeable harm.
5. How does the law address AI-induced legal responsibility?
Courts generally rely on existing legal principles of negligence, product liability, breach of contract, and consumer protection law to assess responsibility for AI-related errors.
6. What is the role of AI governance and risk management in liability?
Strong AI governance and risk-management systems help organisations demonstrate due diligence and reduce liability if AI-related harm occurs.
7. What is the future of AI decision-making regulation worldwide?
Worldwide, the regulation of AI decision-making is moving towards risk-based approaches, sector-specific rules, and liability for human and corporate actors, rather than treating AI as an independent legal entity.
8. Do disclaimers entirely shield companies from AI liability?
No. Disclaimers do not completely eliminate AI liability and responsibility, especially in cases involving consumer injury, negligence, or regulatory requirements.
About Us
Corrida Legal is a boutique corporate & employment law firm serving as a strategic partner to businesses by helping them navigate transactions, fundraising-investor readiness, operational contracts, workforce management, data privacy, and disputes. The firm provides specialized and end-to-end corporate & employment law solutions, thereby eliminating the need for multiple law firm engagements. We are actively working on transactional drafting & advisory, operational & employment-related contracts, POSH, HR & data privacy-related compliances and audits, India-entry strategy & incorporation, statutory and labour law-related licenses, and registrations, and we defend our clients before all Indian courts to ensure seamless operations.
We keep our clients future-ready by ensuring compliance with the upcoming Indian Labour Codes on Wages, Industrial Relations, Social Security, and Occupational Safety, Health, and Working Conditions, as well as the Digital Personal Data Protection Act, 2023. With offices across India, including Gurgaon, Mumbai, and Delhi, coupled with global partnerships with international law firms in Dubai, Singapore, the United Kingdom, and the USA, we are the preferred law firm for India entry and international business setups. Reach out to us on LinkedIn or contact us at contact@corridalegal.com/+91-9211410147 in case you require any legal assistance. Visit our publications page for detailed articles on contemporary legal issues and updates.
Legal Consultation
In addition to our core corporate and employment law services, Corrida Legal also offers comprehensive legal consultation to individuals, startups, and established businesses. Our consultations are designed to provide practical, solution-oriented advice on complex legal issues, whether related to contracts, compliance, workforce matters, or disputes.
Through our Legal Consultation Services, clients can book dedicated sessions with our lawyers to address their specific concerns. We provide flexible consultation options, including virtual meetings, to ensure ease of access for businesses across India and abroad. This helps our clients make informed decisions, mitigate risks, and remain compliant with ever-evolving regulatory requirements.