Artificial Intelligence (“AI”) has witnessed unprecedented growth and is now integrated into many day-to-day tasks of individuals, and the use of AI by employees is no exception. Any entity whose employees may use AI in India must therefore structure its internal controls, ensure data protection, put in place strict, risk-averse contracts, and comply with applicable laws and regulations. While Indian law does not restrict or prohibit the use of AI in the workplace, the legal landscape in India obliges an entity to protect the data it collects and holds it accountable for breaches of confidentiality and cybersecurity.
In practice, an employer strategising the use of AI by employees must ensure compliance with the Information Technology Act, 2000 and the Digital Personal Data Protection Act, 2023, and to this end should implement a well-drafted and structured AI policy, terms of employment, and privacy policy. Employers must carefully balance the value of AI as a powerful operational tool against the security and legal risks that can arise from its casual use.
This article sets out the legal and compliance framework required for the safe use of AI by employees in India, covering workplace AI compliance in India, the data protection risks of AI tools, employee use of generative AI at work, and an effective AI governance framework within companies.
Is the use of AI tools by employees legal in India?
The Indian legal regime does not contain a single regulation that restricts or prohibits the use of AI by employees. However, the use of AI and the content it generates may give rise to statutory liability under various regulations, including those governing data protection, data breaches, and sector-specific requirements. The safe use of AI by employees in India therefore requires compliance with the following laws and the rules framed under them:
- The Digital Personal Data Protection Act, 2023 (“DPDP Act”).
- The Information Technology Act, 2000.
- Sector specific regulatory frameworks.
Beyond these laws and regulations, the employer must also ensure that AI usage does not violate any applicable confidentiality obligations. It is evident that the regulations do not govern the use of AI as such, but rather focus on the data processed by AI tools. To mitigate the resulting risks, the employer must implement a policy to control the use of AI within its organisation. The potential risks arising from employee use of AI include:
- Data breaches and the associated penalties imposed under the DPDP Act.
- Disclosure of trade secrets to the public.
- Exposure to regulatory enforcement and investigation.
- Liability for breach of contracts with third parties or clients.
Thus, a formal AI policy for employees in India is essential for sound internal governance and adherence to industry best practices.
What data protection risks arise when employees use AI tools at work?
The most significant regulatory exposure from employee AI use arises from the processing of personal data. Under the DPDP Act, processing personal data without the prior consent of the data principal, or another lawful ground, attracts penalties.
The DPDP Act requires, in broad terms, that:
- Personal data is processed only in accordance with law.
- Processing is conducted only with consent or on grounds of legitimate use.
- The data fiduciary implements reasonable security safeguards to protect personal data.
Data protection risks of AI tools become punishable under the DPDP Act where it is determined that personal data has been processed without the prior consent of the data principal or another lawful ground.
Key risk incidents include:
- Uploading customer databases without the consent of the data principals.
- Uploading and processing health or financial data in AI tools.
- Transferring data to foreign jurisdictions through AI platforms without assessing the restrictions applicable to the destination country.
The employer can undertake the following measures to mitigate the risks arising from AI use:
- Prohibit the use of AI within the organisation.
- Implement only AI tools that have been approved by the organisation.
- Conduct data protection audits.
- Establish an internal governance mechanism with procedures for approving high-risk processing.
Employers should note that liability under the DPDP Act is imposed on the data fiduciary, i.e. the organisation as a whole, and not on the individual employee using the AI tool.
Does employee use of AI tools expose companies to confidentiality and IP risks?
Yes. The use of AI tools significantly increases the risk of confidentiality breaches and IP disputes. Most AI tools operate on prompts, which require the employee to supply instructions and data that are likely to reveal sensitive information.
Risks arising out of the use of AI include:
- Disclosure of trade secrets and customer information.
- Publication of confidential algorithms.
- Disputes over intellectual property, such as copyright in AI-generated output.
Under contract and intellectual property principles:
- The employee may be bound by confidentiality clauses.
- The employer may be held responsible for breaches of confidentiality.
- Ownership of IPR in AI-generated content may be disputed.
To control employees’ use of AI, the employer may implement the following mechanisms:
- Prepare a well-structured AI policy.
- Limit the use of AI, especially platforms without clear data retention terms.
- Prohibit the use of client-sensitive information on AI tools.
A successful AI policy should incorporate IP, confidentiality, and technology controls.
Does the use of AI create cybersecurity and regulatory compliance risks?
Yes. Unchecked use of AI tools by employees is likely to create cybersecurity and regulatory risk.
The cybersecurity risks include:
- Prompt-based data exfiltration.
- Malware and phishing attacks camouflaged as AI tools.
- AI being used as a tool for generating misinformation.
Under the IT Act and other cybersecurity regulations:
- Entities are required to maintain reasonable security measures.
- Failure to protect sensitive data creates financial and reputational liability for the company.
In the case of listed entities, SEBI’s cybersecurity framework applies in addition. To make employee use of AI safe in India, the employer must ensure that:
- AI tools are screened for IT security.
- Periodic security audits are conducted.
- Due diligence is carried out on AI vendors.
How should companies draft an AI policy?
An AI policy is a binding internal compliance document that may also be integrated into employment contracts, IT policies, and data protection frameworks. The policy should contain provisions that govern use, impose restrictions, and allocate accountability. A well-drafted AI policy mitigates risk exposure under data protection law, ensures confidentiality is maintained, and supports sectoral compliance. To ensure safe AI usage by employees in India, the policy should address the following components.
1. Scope of permitted use
The policy must clearly define the following key elements:
- Approved AI platforms that have adequate data security measures.
- Clear restrictions on the use of unapproved AI platforms.
- The specific purposes for which AI platforms may be used.
- An explicit prohibition on personal or unrelated commercial use on company systems.
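By way of illustration, the "approved platforms and permitted purposes" requirement can be operationalised as a simple allowlist enforced in internal tooling. The sketch below is purely illustrative: the tool names, purposes, and policy fields are assumptions, not references to any actual platform or prescribed format.

```python
# Minimal sketch of an AI tool allowlist check. All tool names, purposes,
# and policy fields below are hypothetical examples.
APPROVED_AI_TOOLS = {
    "enterprise-chat": {"data_retention": "none", "hosting": "in-country"},
    "code-assistant":  {"data_retention": "30-days", "hosting": "in-country"},
}

PERMITTED_PURPOSES = {"drafting", "research", "code-review"}

def is_use_permitted(tool: str, purpose: str) -> bool:
    """Return True only if the tool is approved and the purpose is permitted."""
    return tool in APPROVED_AI_TOOLS and purpose in PERMITTED_PURPOSES

if __name__ == "__main__":
    print(is_use_permitted("enterprise-chat", "drafting"))     # True
    print(is_use_permitted("public-chatbot", "drafting"))      # False: unapproved tool
    print(is_use_permitted("code-assistant", "personal-use"))  # False: purpose not permitted
```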
2. Data handling and confidentiality restrictions
The policy should include:
- Prohibition on uploading confidential information, personal data, trade secrets, and proprietary code.
- Requirement to anonymise sensitive data before processing it on an AI platform.
- Encryption and access-control standards for enterprise AI platforms.
These safeguards directly mitigate data protection and confidentiality risks. Without them, companies expose themselves to regulatory penalties and contractual liability.
Data governance provisions should align with internal privacy policies and statutory obligations under applicable data protection law.
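To illustrate how the anonymisation requirement might look in practice, the sketch below masks a few common identifiers before a prompt leaves the organisation. The regex patterns are illustrative assumptions only and do not constitute a complete anonymisation solution.

```python
import re

# Minimal sketch of pre-submission redaction using simple regex masking.
# The patterns are illustrative and would need to reflect the organisation's
# actual data categories.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{8,}\d"),
    "PAN":   re.compile(r"\b[A-Z]{5}\d{4}[A-Z]\b"),  # Indian PAN number format
}

def redact(text: str) -> str:
    """Mask common personal identifiers before the text leaves the organisation."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = "Summarise the complaint from ravi.k@example.com, phone +91 98765 43210."
    print(redact(prompt))
    # Summarise the complaint from [EMAIL REDACTED], phone [PHONE REDACTED].
```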
3. Approval and risk assessment mechanism
AI adoption must be governed by the internal policies of the entity, and employees should only use AI tools that have been duly approved under those policies.
The policy should clearly provide for:
- Assessment of the security standards maintained by an AI platform before onboarding.
- Vendor due diligence covering data retention, hosting location, and subcontractors.
- Risk assessments of the impact of AI platforms.
AI adoption must meet security standards aligned with the entity’s risk appetite.
4. Monitoring, audit, and logging controls
Oversight mechanisms are necessary to detect misuse and demonstrate compliance, and can be implemented through the following:
- Maintaining access logs for AI platforms.
- Periodic internal compliance reviews and audits.
- Maintaining records of AI-generated content.
- Mandatory reporting of misuse or data leakage.
Monitoring must remain proportionate and transparent. Employees should be informed of the compliance and audit procedures applicable to their AI activity. Proper logging strengthens the entity’s defence in a regulatory investigation.
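As a purely illustrative example of the logging control described above, the sketch below appends one structured audit record per AI interaction. The field names and log destination are assumptions and would need to reflect the organisation’s actual systems and retention policy.

```python
import json
import logging
from datetime import datetime, timezone

# Minimal sketch of an AI usage audit log written to an append-only file.
# The fields captured here are hypothetical examples, not a prescribed format.
logging.basicConfig(filename="ai_usage_audit.log", level=logging.INFO,
                    format="%(message)s")

def log_ai_use(employee_id: str, tool: str, purpose: str,
               contains_personal_data: bool) -> None:
    """Append one structured audit record per AI interaction."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "employee_id": employee_id,
        "tool": tool,
        "purpose": purpose,
        "contains_personal_data": contains_personal_data,
    }
    logging.info(json.dumps(record))

if __name__ == "__main__":
    log_ai_use("EMP-0042", "enterprise-chat", "drafting", contains_personal_data=False)
```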
5. Intellectual property and output ownership
The policy must clarify the following about intellectual property rights in AI-generated content:
- IPR in output generated using AI tools in the course of employment belongs to the employer.
- That employees must ensure the accuracy and originality of AI-generated content.
Clear allocation of ownership prevents disputes and reinforces internal governance.
6. Disciplinary framework
An AI policy must have enforceable consequences.
It should:
- Define the circumstances in which AI usage will be treated as misconduct.
- Include a mechanism for reporting data breaches or security incidents.
A policy without consequences lacks deterrent value. Accountability reinforces compliance culture.
7. Training and awareness requirements
Policy effectiveness depends on employee understanding.
The company should:
- Conduct periodic AI compliance training for employees.
- Circulate the list of AI tools approved and adopted by the entity.
- Provide written advisories on evolving risks and any amendment to the policy.
Training demonstrates that the employer exercised reasonable care in supervising AI deployment.
8. Alignment with broader governance framework
The AI policy should not operate in isolation and must be integrated with:
- IT Act requirements.
- Data protection and privacy frameworks.
- Confidentiality and IP protection clauses.
- Cyber incident response procedures.
How can employers monitor AI usage without violating privacy?
Monitoring of AI use must comply with constitutional and statutory privacy standards.
Employers may:
- Monitor the use of corporate devices.
- Maintain audit logs of internal corporate applications.
However:
- Surveillance should be conducted in a restricted manner and only for specified purposes.
- Employees must be informed that they are being monitored and of the purpose of such monitoring.
- Policies should prescribe the mechanisms used for monitoring.
The organisation’s agreements and policies cannot override the protective measures enshrined in law.
What are the consequences of the misuse of AI?
Misuse of AI covers unethical practices, including conduct that:
- Violates company policy.
- Breaches the confidentiality provisions.
- Compromises data security measures.
- Causes reputational harm.
Disciplinary measures can include:
- Written notice.
- Suspension of employment.
- Termination in the event of severe misconduct.
- Claim for damages.
Before imposing penalties:
- Conduct an internal investigation.
- Document the breach incident.
- Observe the principles of natural justice.
Are companies required to report AI-related data breaches?
Under the DPDP Act, data fiduciaries are obliged to report breaches of personal data to the Data Protection Board of India and data principals within the prescribed timeline and in accordance with the mechanism prescribed under the law.
If a data breach occurs due to AI usage, it may involve:
- Unauthorized disclosure of data.
- Loss of personal data.
- System compromise and leakage of confidential information.
How can multinational companies align global AI policies with Indian law?
Indian compliance requires:
- Compliance with the DPDP Act requirements.
- Compliance with the IT Act security requirements.
- Adherence to sector specific regulations.
Where AI platforms involve potential cross-border data transfers, companies should take into account:
- The implications of data localisation.
- Contractual safeguards to protect the data.
- Vendor agreements that impose data processor obligations.
Frequently Asked Questions (FAQs)
Can employees use ChatGPT or other AI tools at work in India?
Yes. Employees can use AI tools in India as there is no statutory provision prohibiting such use. However, the employer retains the right to formulate policies and agreements to regulate or restrict AI usage.
Is it illegal to upload company documents to AI platforms?
Uploading documents to AI platforms is not in itself illegal, but a breach of applicable data protection, confidentiality, or other contractual obligations may lead to legal consequences.
Who is responsible if AI-generated output contains incorrect or defamatory content?
The employer may face civil or regulatory proceedings for any defamatory, misleading, or unlawful content generated by an employee using AI tools. The employer may, in turn, initiate internal proceedings against the employee responsible for such content.
Can employers restrict AI tools completely?
Yes. Employers may prohibit the use of AI tools within their organisations by implementing clear and well-documented internal policies restricting the use of AI platforms.
Does AI usage require employee consent under Indian data protection law?
Consent must be obtained from any third party whose personal data is processed on an AI platform, and such processing may only be conducted on a lawful basis. The governing standard depends on whether the employer qualifies as a data fiduciary and on the nature of the processing involved.
Are startups required to implement formal AI governance policies?
There is no specific statutory mandate requiring startups to implement standalone AI governance policies. However, such a policy helps ensure that a startup has well-structured business operations and governance mechanisms, with reasonable security practices and regulatory diligence.
Conclusion
An AI usage policy enforced by employers in India sets out how the use of AI is governed or prohibited in the workplace. While the present legal regime does not regulate AI directly, the consequences of AI misuse are regulated under Indian law. Employers must proactively mitigate risks arising under data protection, copyright, and other Indian laws by putting in place a well-constructed AI policy.
About Us
Corrida Legal is a boutique corporate & employment law firm serving as a strategic partner to businesses by helping them navigate transactions, fundraising-investor readiness, operational contracts, workforce management, data privacy, and disputes. The firm provides specialized and end-to-end corporate & employment law solutions, thereby eliminating the need for multiple law firm engagements. We are actively working on transactional drafting & advisory, operational & employment-related contracts, POSH, HR & data privacy-related compliances and audits, India-entry strategy & incorporation, statutory and labour law-related licenses, and registrations, and we defend our clients before all Indian courts to ensure seamless operations.
We keep our clients future-ready by ensuring compliance with the upcoming Indian Labour codes on Wages, Industrial Relations, Social Security, Occupational Safety, Health, and Working Conditions – and the Digital Personal Data Protection Act, 2023. With offices across India including Gurgaon, Mumbai and Delhi coupled with global partnerships with international law firms in Dubai, Singapore, the United Kingdom, and the USA, we are the preferred law firm for India entry and international business setups. Reach out to us on LinkedIn or contact us at contact@corridalegal.com/+91-9211410147 in case you require any legal assistance. Visit our publications page for detailed articles on contemporary legal issues and updates.
Legal Consultation
In addition to our core corporate and employment law services, Corrida Legal also offers comprehensive legal consultation to individuals, startups, and established businesses. Our consultations are designed to provide practical, solution-oriented advice on complex legal issues, whether related to contracts, compliance, workforce matters, or disputes.
Through our Legal Consultation Services, clients can book dedicated sessions with our lawyers to address their specific concerns. We provide flexible consultation options, including virtual meetings, to ensure ease of access for businesses across India and abroad. This helps our clients make informed decisions, mitigate risks, and remain compliant with ever-evolving regulatory requirements.

