Using Artificial Intelligence in Software and App Development – Legal Questions
Artificial intelligence in software and app development can dramatically speed up engineering work, but it also raises new legal questions: who is the author of the code, who is liable for errors, and how should contracts, data, and compliance be structured. This article offers a practical overview of the key risks, with recommendations to help ensure AI does not become the trigger for disputes, fines, or loss of know-how.
Why AI in development is legally more sensitive than it looks
AI is often treated as “just another tool” in the engineering stack. Legal risk, however, does not arise from the mere use of AI—it arises from what ends up in production, how the product is marketed to customers, and what the company can prove if a problem occurs.
In practice, the most common scenarios include:
- AI assists with writing code and tests, but the company cannot demonstrate origin and licensing for parts of the output.
- AI is a product feature (recommendations, scoring, chatbot) and customers expect quality, explainability, and accountability.
- AI runs through a third-party API, but the company sells the capability “under its own name” and bears the reputational and contractual consequences of outages.
- Sensitive data (personal data, trade secrets, internal documents) is fed into AI tools, opening the door to security incidents and confidentiality breaches.
Who is the author of AI-generated code and what that means for your company
The question “who is the author” is often disputed in AI contexts—and in business, it has very tangible consequences: whether the code can be protected, licensed, transferred, sold to an investor, or used as a core company asset.
Copyright and originality: what is typically assessed
For software, it matters that copyright protection attaches to an original expression, not the underlying functionality or idea. In practice, this is why companies focus on:
- whether a particular part of code reaches the originality threshold,
- whether it is merely a standard or technically forced solution,
- whether the output is effectively copied or substantially similar to existing code.
AI complicates this, because output may be “correct and functional,” while the company cannot reasonably evidence:
- who created it,
- based on what inputs,
- whether commercial use is legally safe.
From a management perspective, the key question in disputes and due diligence is rarely how fast the code was produced, but whether the ownership and licensing position is clean and defensible.
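Much of that evidence can be captured at the moment AI-assisted code is reviewed. Below is a minimal sketch of such an evidence trail, assuming a JSONL file as the store; the field names, tool name, and paths are illustrative, not a standard, and any real setup should be aligned with your review workflow and retention rules:

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def record_provenance(log_path, author, tool, prompt_summary, output_files, reviewer):
    """Append one evidence record for an AI-assisted change to a JSONL log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "author": author,                  # who created it
        "tool": tool,                      # which AI assistant and version helped
        "prompt_summary": prompt_summary,  # based on what inputs (no secrets here)
        # Hash file contents so the record can later be matched to the shipped code.
        "output_hashes": {
            f: hashlib.sha256(Path(f).read_bytes()).hexdigest()
            for f in output_files
        },
        "reviewer": reviewer,              # human who approved commercial use
    }
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")

# Illustrative usage; all names and paths are hypothetical.
record_provenance(
    "ai_provenance.jsonl",
    author="jan.novak",
    tool="assistant-x v1.2",
    prompt_summary="refactor of payment retry logic",
    output_files=["payments/retry.py"],
    reviewer="eva.svobodova",
)
```

A log like this does not settle authorship by itself, but it gives the company something concrete to show when origin and licensing are questioned.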
AI output as a “work”: the most common weakness in real life
The most common weakness is surprisingly simple: the company has not clearly documented authorship and ownership (employee vs. contractor), and simultaneously uses AI tools without internal rules.
This often leads to situations where:
- a contractor claims the code is theirs and the customer only has a limited license,
- an employee leaves and the company cannot prove what was created within their job duties,
- an investor requests “clean IP” warranties and the transaction becomes complicated.
microFAQ – Legal tips on authorship and code ownership
- Is the company automatically the owner of the code if it was developed for it?
  Not always. It depends on the contractor agreement and whether there was a transfer of rights or only a license grant.
- Does using AI change ownership of development output?
  It can. At minimum, it increases licensing and copyright risk and makes proof of origin harder.
- Is it safe to include internal code snippets in prompts?
  Without internal rules and proper agreements with providers, it is risky due to trade secret leakage and data exposure.
| Risks and sanctions | How ARROWS can help (office@arws.cz) |
|---|---|
| Disputes over authorship and ownership: contractor or developer claims rights, leading to blocked use and damages | IP structuring: we draft transfers of rights, licensing terms, and evidence trail for outputs |
| Non-investable software: investor rejects the deal due to unclear code origin and licensing | IP due diligence: we map risks, propose remediation, and prepare transaction warranties |
| Hidden third-party elements: risk of lawsuits, distribution bans, or expensive rework | Legal audit of development: we implement internal controls and contractual protections |
| Weak employee development framework: loss of know-how, complications during departures | Employment + IP documentation: we adjust contracts, internal policies, and handover processes |
Liability for damages: if AI makes a mistake, who pays
Liability is often more important than authorship, because damage materialises quickly and is measurable: contractual penalties, claims, incidents, customer churn, and regulatory impact.
With AI systems, liability typically turns on:
- whether AI provides “recommendations” or makes “decisions,”
- whether the customer understands how the system works and its limitations,
- whether the company ran adequate testing, monitoring, and quality control,
- how liability is allocated contractually across customers and vendors.
A common mistake: “a disclaimer will solve it”
Many companies rely on inserting “AI can make mistakes” into terms and conditions. That is only one part of the defense.
In practice, the company also needs:
- clearly defined service scope (what AI does and does not do),
- an incident and error-handling process,
- logging and traceability (see the sketch after this list),
- contractual liability caps and carve-outs.
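On the logging point, the practical goal is that every AI-generated answer can later be tied to a model version, an input, and an output. Below is a minimal sketch of such an audit trail, assuming Python's standard logging module and hashed payloads so the log itself does not duplicate sensitive content; the wrapper and all names are illustrative:

```python
import hashlib
import json
import logging
import uuid
from datetime import datetime, timezone

logging.basicConfig(filename="ai_audit.jsonl", level=logging.INFO, format="%(message)s")
audit = logging.getLogger("ai_audit")

def sha256(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def call_model_with_audit(model_call, model_id: str, prompt: str) -> str:
    """Wrap any model call so each request/response pair leaves an audit record."""
    request_id = str(uuid.uuid4())
    output = model_call(prompt)  # the actual provider API call goes here
    audit.info(json.dumps({
        "request_id": request_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,              # which model/version answered
        "prompt_sha256": sha256(prompt),   # evidences what was asked, without storing it
        "output_sha256": sha256(output),   # evidences what was answered
    }))
    return output

# Illustrative usage with a stand-in model; replace with your provider's client.
def stub_model(prompt: str) -> str:
    return "stubbed answer"

print(call_model_with_audit(stub_model, "example-model-v1", "Summarise ticket #123"))
```

Hashing rather than storing raw prompts is a deliberate trade-off: it keeps the trail defensible without turning the audit log itself into a new repository of personal data or trade secrets.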
If AI is a product feature, it is reasonable to expect customers to require:
- uptime commitments,
- response times,
- meaningful responsibility for errors in critical workflows.
microFAQ – Legal tips on liability for AI features
- Is the AI model vendor liable if the model makes a mistake?
  Not necessarily. Liability often shifts to the integrator or the customer-facing provider.
- Does “human-in-the-loop” help?
  Yes. Real human oversight can significantly reduce regulatory and liability exposure, but it must be genuine, not formal.
- What liability caps are typical in B2B?
  It depends on sector and criticality. It is essential to structure carve-outs (e.g., intent, gross negligence, IP, and data breaches).
| Risks and sanctions | How ARROWS can help (office@arws.cz) |
|---|---|
| Claims and contractual penalties: faulty AI outputs breach SLA or cause customer loss | Contract + SLA review: we structure uptime, liability, caps, and claim processes |
| Incidents affecting customers: data loss, wrong decisions, AI API outages | Incident governance: we draft incident playbooks, communications, and legal defense |
| No proof of root cause: the company cannot explain why AI produced an output | Logging + documentation: we set audit trails and defensible evidence processes |
| Broken liability chain: vendor, integrator, and customer shift blame | Vendor negotiations: we structure indemnities and protections across the chain |
Contracts in AI development: what must change vs. standard software deals
AI projects frequently fail not due to technology, but due to contracts and mismatched expectations. “Fixed deliverables for a fixed price” templates do not work well because AI development is iterative, data-dependent, and vendor-dependent.
What should always be explicit in the agreement
- IP transfer and licensing: clearly define ownership of outputs, including models, prompts, datasets, and documentation.
- Scope definition and change control: AI development evolves quickly; without proper change management, disputes on price and scope are almost guaranteed.
- Data responsibilities and data quality: if the customer provides poor data, AI output will be poor—this must be contractually addressed.
- SLA, availability, and fallback scenarios: third-party APIs can fail (see the fallback sketch after this list). Your company must align what you promise to customers with what you negotiate with providers.
- Security and confidentiality: prompts, logs, and inputs may contain trade secrets. Without confidentiality controls, know-how leakage becomes real.
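On the fallback point, the contractual promise should not exceed what the engineering layer can actually deliver during a provider outage. Below is a minimal sketch of a provider fallback, assuming two interchangeable model clients; the functions and error handling are illustrative only:

```python
import logging

log = logging.getLogger("ai_fallback")

def call_with_fallback(prompt, primary, secondary, retries=2):
    """Try the primary AI provider; fall back so SLA promises stay realistic."""
    for attempt in range(retries):
        try:
            return primary(prompt)
        except Exception as exc:  # in production, catch the provider's specific errors
            log.warning("primary provider failed (attempt %d): %s", attempt + 1, exc)
    # Degraded mode: the customer contract should say what is delivered here.
    return secondary(prompt)

# Stand-in providers for illustration.
def flaky_provider(prompt):
    raise TimeoutError("upstream timeout")  # simulates an outage

def stable_provider(prompt):
    return "fallback answer"

print(call_with_fallback("Classify this ticket", flaky_provider, stable_provider))
```

Whatever the fallback actually returns (a secondary model, cached answers, or a graceful error) is exactly what the SLA and the customer-facing service description should reflect.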
Internal AI policy: why investors expect it
An internal AI policy is not “compliance paperwork.” Investors and corporate customers increasingly check:
- how the company prevents data leakage into external AI tools,
- who is allowed to use which tools,
- whether AI usage is traceable in outputs,
- how audits and incidents are handled.
A good policy must impact real practice: what can be entered into prompts, how logs are stored, how releases are approved, and how open-source compliance is checked.
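One concrete control such a policy can mandate is screening prompts before they leave the company. Below is a minimal sketch using regular expressions to catch obvious e-mail addresses and key-like tokens; real deployments should rely on dedicated secret-scanning and PII-detection tooling, so treat these patterns as illustrative only:

```python
import re

# Illustrative patterns only; they will miss many real-world secrets.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "key_like": re.compile(r"\b[A-Za-z0-9_-]{32,}\b"),
}

def screen_prompt(prompt: str) -> str:
    """Redact obvious personal data and secret-like tokens before a prompt is sent out."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

print(screen_prompt("Contact jan.novak@example.com, token abcdefghij0123456789abcdefghij12"))
# -> Contact [REDACTED-EMAIL], token [REDACTED-KEY_LIKE]
```

Whether the screen blocks, redacts, or merely flags is a policy decision; what matters legally is that the control exists, is documented, and is actually enforced.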
GDPR and data protection: AI quietly expands exposure
In software development, GDPR risk does not live only in databases. It also appears in operational traces:
- AI chat logs,
- customer bug reports,
- call transcripts,
- test datasets,
- monitoring and telemetry.
A legal problem typically arises when personal data is processed via AI and it is unclear:
- what the lawful basis is,
- who is the controller and who is the processor,
- where data is stored and who can access it,
- whether data is transferred outside the EU.
AI usage seems simple, but real-life setups include exceptions, dependencies, and process details that non-specialists often miss.
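Some of that exposure can be reduced mechanically by enforcing retention on the operational traces listed above. Below is a minimal sketch that drops records older than a defined window from a JSONL log with an ISO-8601 timestamp field, as in the earlier sketches; the 30-day window is an example, not advice on the legally correct retention period:

```python
import json
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)  # example window; set it per your retention policy

def purge_old_records(path: str) -> int:
    """Rewrite a JSONL log, keeping only records younger than RETENTION."""
    cutoff = datetime.now(timezone.utc) - RETENTION
    with open(path, encoding="utf-8") as f:
        records = [json.loads(line) for line in f if line.strip()]
    kept = [r for r in records
            if datetime.fromisoformat(r["timestamp"]) >= cutoff]
    with open(path, "w", encoding="utf-8") as f:
        for r in kept:
            f.write(json.dumps(r) + "\n")
    return len(records) - len(kept)  # purged count, useful for the audit trail

# Illustrative usage; assumes the log file from the audit sketch exists.
print(purge_old_records("ai_audit.jsonl"), "records purged")
```

Enforcing deletion in code turns a paper retention policy into something the company can actually demonstrate to a regulator or auditor.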
| Risks and sanctions | How ARROWS can help (office@arws.cz) |
|---|---|
| GDPR breach: unlawful processing, risk of fines and reputational damage | GDPR for AI setup: we structure roles, agreements, minimisation, and usage rules |
| Trade secret leakage: internal documents or code exposed to external tools | Confidentiality + security: we implement internal regimes, provider terms, and controls |
| Unclear log regime: prompt logs contain personal data without a legal basis | Processes + documentation: we define retention, access, and auditability |
| Transfers outside the EU: poorly structured cloud services and data flows | Vendor management: we review providers and structure contractual safeguards |
International footprint: AI development often crosses borders, and the legal complexity grows with it
AI projects commonly combine:
- a Czech company as product provider,
- a distributed engineering team abroad,
- cloud infrastructure outside the EU,
- customers across multiple jurisdictions.
This triggers practical legal needs around:
- governing law and jurisdiction,
- data transfers and security,
- enforceability against vendors,
- divergent corporate procurement requirements.
Why it is rarely worth handling this internally without specialists
AI in development looks technical, but the legal impact usually appears when:
- a customer raises a claim,
- an incident occurs,
- due diligence starts for an investment or sale,
- a vendor changes terms,
- a dispute arises about code ownership.
Each of these events is costly to handle under pressure. Many risks can be reduced early through contracts, internal rules, and control processes.
Lawyers at ARROWS law firm handle this agenda daily and can reduce both time and error risk. In addition, ARROWS law firm is insured for damages up to CZK 400,000,000, which is a key safety factor for corporate clients.
Executive summary for management
- AI in development is not just a tool—it creates real legal exposure across IP, liability, contracts, and data, usually surfacing during incidents or due diligence.
- The key risks are disputed authorship/licensing, customer claims and damages from AI errors, and leakage of data or know-how through prompts and logs.
- Without proper contractual structuring (IP, SLA, liability, vendor management), risk often lands on the company selling the product to customers.
- Internal AI rules and controls are no longer optional: corporate customers and investors increasingly treat them as a prerequisite.
- Handling this reactively is expensive.
Conclusion
Using artificial intelligence in software and app development brings speed and innovation, but it also raises disputed questions of authorship, code ownership, liability for errors, and data protection. In practice, it is essential to have clear contracts, processes, and AI usage rules in place so the issues do not surface only during claims, incidents, or investor due diligence.
FAQ – Most common legal questions on using artificial intelligence in software and app development
- Should AI usage in development be disclosed in the customer contract?
  In most B2B cases, yes, especially if AI affects outputs, SLAs, or liability allocation. Clear disclosure helps prevent disputes. If you face a similar issue, contact ARROWS law firm at office@arws.cz.
- How can a company reduce the risk that AI output infringes third-party rights?
  A combination of internal policies, license checks, code review, and contractual safeguards (including vendor indemnities) is key. If you face a similar issue, contact ARROWS law firm at office@arws.cz.
- Who is liable if an AI feature causes losses for a customer?
  It depends on the contract structure, service description, liability caps, and quality controls. For critical systems, exposure can be material. If you face a similar issue, contact ARROWS law firm at office@arws.cz.
- Is it safe to input internal documents or code into AI tools?
  Without provider safeguards and internal controls, it can expose trade secrets and personal data. Minimisation and vendor review are recommended. If you face a similar issue, contact ARROWS law firm at office@arws.cz.
- What most often complicates investments in software companies using AI?
  Unclear IP ownership, missing license evidence, risky open-source use, and weak customer/vendor contracts. If you face a similar issue, contact ARROWS law firm at office@arws.cz.
- Does an internal AI policy make sense for smaller companies?
  Yes, because it protects data and know-how, improves development discipline, and helps with larger customers. If you face a similar issue, contact ARROWS law firm at office@arws.cz.
Notice: The information contained in this article is of a general informational nature only and is intended to provide basic orientation in the topic. Although we strive for maximum accuracy, legal regulations and their interpretation evolve over time. To verify the current wording of the regulations and their application to your specific situation, it is therefore necessary to contact ARROWS law firm (office@arws.cz). We accept no liability for any damages or complications arising from independent use of the information in this article without our prior individual legal consultation and professional assessment. Each case requires a tailored solution, so do not hesitate to contact us.
