Legal aspects

1. Introduction

Switzerland—and Swiss organisations more broadly—stands at a turning point: AI adoption is surging across sectors even as regulators scramble to fill legal gaps. This report clarifies three intertwined domains—copyright, data protection under the Swiss Data Protection Act (DSG), and the EU AI Act—showing what steps Swiss entities must take today to comply with rules already in force (EU AI Act) and those coming in 2026 (Swiss AI regulation).

BeeChat is our running example: an AI-powered chatbot platform built for Swiss educational institutions, offering on-demand tutoring, content summaries, and interactive learning sessions. As schools and universities integrate AI tools like BeeChat, they benefit from new capabilities but also face novel legal challenges.

Through the lens of BeeChat, this document will:

  • Demonstrate how copyright, data-protection, and AI-specific rules intersect in real-world deployments
  • Expose the “grey zones” where legal clarity is still evolving
  • Recommend measures to ensure BeeChat—and similar AI solutions—operate lawfully under Swiss law and the EU AI Act

By grounding abstract legal concepts in BeeChat’s context, we make compliance actionable for ed-tech teams, compliance officers, and institutional leaders.

Disclaimer: As we have no legal background, this report is based solely on freely accessible online resources and should not be considered legal advice.


2. Copyright

2.1 AI-Generated Works

Traditional copyright statutes assume human authorship. When a generative model like ChatGPT produces text, images, or code without direct human input, Swiss law lacks clear guidance. On one hand, non-human-authored works could fall into the public domain. On the other, if a model has memorised copyrighted inputs and reproduces them, users may inadvertently infringe. OpenAI’s policy states it does not claim ownership of model outputs, effectively shifting any downstream liability onto the user. Swiss organisations must therefore recognise that “free to use” outputs may carry hidden copyright risks.

2.2 Training on Protected Works

AI models require massive datasets, often scraped from the public internet. Swiss DSG principles (Art. 4 DSG)—data minimisation, purpose limitation, transparency—apply even to non-personal content when it implicates IP. Unlike the U.S., Switzerland has no broad “fair use” exception for training; only narrow quotation and research exemptions exist. Recent lawsuits in the U.S. and EU challenge unlicensed training on copyrighted works. Although Swiss courts have not yet ruled, the pending Swiss AI regulation is expected to mirror EU transparency requirements, forcing providers to disclose training sources and licences.

2.3 Recommendations

  1. Transparency on Training Data
    • Publish a “Training Data Report” listing data sources, collection periods and license status. For out-of-the-box models, verify that the provider has declared their selection criteria, data provenance and any privacy safeguards. If you train or fine-tune a model yourself, you should likewise document the process—including additional datasets used, their provenance, licenses and training protocols—to ensure full transparency.
  2. Opt-Out Mechanisms

    • If you train models yourself, provide a clear process (e.g. a web form or an API flag) for rightsholders to exclude their materials from training.
  3. Contractual Safeguards

    • In all API or licence agreements, include a Data Processing Addendum (DPA) that prohibits providers from re-using customer inputs for further training or commercialisation without explicit consent.
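
The opt-out mechanism in recommendation 2 can be sketched as a simple exclusion registry that a training pipeline consults before ingesting an item. This is a minimal illustration; the class, method, and field names are our own assumptions, not part of any standard or existing API.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class OptOutRegistry:
    """Illustrative registry of rightsholder exclusion requests."""
    entries: dict = field(default_factory=dict)

    def register(self, rightsholder: str, content_id: str) -> None:
        # Record who requested exclusion of which material, and when.
        self.entries[content_id] = (rightsholder, date.today())

    def is_excluded(self, content_id: str) -> bool:
        # A training pipeline would check this flag before ingesting an item.
        return content_id in self.entries


registry = OptOutRegistry()
registry.register("Example Publisher AG", "doc-0042")
print(registry.is_excluded("doc-0042"))  # excluded from training
print(registry.is_excluded("doc-0099"))  # not excluded
```

In practice the registry would be backed by persistent storage and fed by the web form or API flag mentioned above; the point is simply that exclusions are recorded with a timestamp and checked at ingestion time, not after training.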

3. Data Protection in the AI Lifecycle

The AI lifecycle describes the sequence of stages through which an AI system is created, tested, deployed and maintained. At each stage, different data-protection risks arise—from initial data collection through ongoing refinement—so organisations must apply tailored safeguards throughout. Below we briefly introduce each phase and then highlight the key data-protection considerations you should address in turn.

3.1 Phases & Key Risks

  1. Data Collection for Training
    Publicly available does not mean automatically lawful: under Art. 4 DSG, you must define lawful purposes, limit collection to what is necessary, and inform data subjects whenever personal data are involved.
  2. Model Training
    Processing personal data demands a legal basis—most likely an “overriding legitimate interest” under Art. 31 (2) DSG. This requires a documented balancing test showing why processing benefits outweigh privacy intrusion.
  3. Testing & Evaluation
    Cybersecurity requirements under Art. 7a DSG mandate integrity and confidentiality. Perform penetration tests and risk assessments before deployment to satisfy both Swiss and forthcoming EU AI Act rules on robustness.
  4. Model Refinement
    If you feed user chats or feedback back into training, you re-introduce personal data, triggering data subject rights such as access and erasure. Because trained model weights are effectively immutable, honouring the “right to be forgotten” is difficult once personal data are embedded in them.

3.2 Responsibility Allocation

  • Model Providers (e.g., OpenAI) must publish their data-processing policies, specify lawful bases, and offer opt-out.
  • Model Operators (Swiss businesses embedding pre-trained models) must validate that the provider’s processes comply with DSG, and ensure their own downstream uses remain lawful.
  • End Users bear the residual risk if they customise or retrain models on new personal data without adequate legal grounds.

3.3 Best Practices

  • Sign Data Processing Addenda with all AI service providers, explicitly forbidding re-training on customer data without consent.
  • Establish a Data-Governance Framework: maintain a data inventory, log lineage, and monitor usage across the model lifecycle.
  • Deploy an AI-Risk Management System: include periodic impact assessments, audit logs of data flows, and escalation procedures for breaches or regulatory inquiries.
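
The lineage logging called for in the data-governance framework can be sketched as an append-only audit trail recording what happened to which dataset, when, and for what purpose. The function name, record fields, and dataset names below are illustrative assumptions.

```python
import json
from datetime import datetime, timezone


def log_lineage(log: list, dataset: str, operation: str, purpose: str) -> None:
    """Append one auditable record: which dataset, what operation, and why."""
    log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "dataset": dataset,
        "operation": operation,
        "purpose": purpose,
    })


lineage = []
log_lineage(lineage, "student-chat-transcripts", "collected", "tutoring quality review")
log_lineage(lineage, "student-chat-transcripts", "anonymised", "model fine-tuning")

# The audit trail can be exported, e.g. for a regulator or an internal review.
print(json.dumps(lineage, indent=2))
```

Recording the purpose alongside each operation is what later lets you demonstrate purpose limitation under Art. 4 DSG for every processing step.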

4. The EU AI Act (and Swiss Transposition)

4.1 Scope & Timeline

The EU AI Act entered into force on 1 August 2024 and regulates the placing on the market and operation of AI systems in the EU—whether developed within or outside its borders. Swiss entities selling or operating AI solutions in EU member states must comply as the Act’s obligations phase in between 2025 and 2027. Meanwhile, the Swiss Federal Council plans to adopt the Council of Europe’s AI Convention by end-2026, meaning that until then, many Swiss operators will follow the EU Act as a de facto standard.

4.2 Risk-Based Categories

  1. Prohibited AI Systems: Practices such as biometric social scoring or subliminal manipulation are banned outright.
  2. High-Risk AI Systems: Systems with potential impact on critical infrastructure, employment decisions, education admissions, law enforcement, or essential services.
  3. Limited-Risk AI Systems: Chatbots, deepfakes, and other tools requiring transparency disclosures (e.g., “you are interacting with AI”).
  4. Minimal-Risk AI Systems: Low-impact applications such as spam filters or video games, which remain unregulated.
  5. General Purpose AI (GPAI): Large foundational models that serve as building blocks; subject to specific documentation and reporting rules.
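
For a limited-risk chatbot such as BeeChat, the transparency duty above amounts to telling users up front that they are talking to a machine. A minimal sketch, assuming a hypothetical `respond` wrapper and our own disclosure wording:

```python
AI_DISCLOSURE = "Notice: you are interacting with an AI system, not a human tutor."


def respond(model_reply: str, first_turn: bool) -> str:
    """Prefix the chatbot's first reply with the required AI disclosure."""
    if first_turn:
        return f"{AI_DISCLOSURE}\n\n{model_reply}"
    return model_reply


print(respond("Hello! How can I help with your homework?", first_turn=True))
```

Surfacing the disclosure in the conversation itself (rather than only in terms of service) makes it hard to miss and easy to evidence in an audit.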

4.3 Swiss Implications

Swiss operators targeting EU clients for high-risk AI must:

  • Perform Conformity Assessments and maintain Technical Documentation demonstrating compliance.
  • Appoint an EU Representative if lacking an EU establishment.
  • Translate key documentation into an official EU language.

The future Swiss AI law is expected to replicate many high-risk obligations—especially around human oversight, robustness-by-design, and post-market monitoring.


5. Responsibilities & Grey Areas

  • Liability for Pre-trained Models:
    Who is liable when a model trained on copyrighted or personal data infringes rights? Providers often disclaim liability; operators must negotiate indemnities and define responsibility for data-curation failures.
  • Right to Be Forgotten:
    Swiss and EU data subjects can request erasure. Yet once data are baked into model weights, literal deletion is infeasible. Operators should document procedures to exclude or anonymise personal inputs at the fine-tuning stage rather than embed them permanently.
  • Contractual Clarity:
    Master Services Agreements (MSAs) and DPAs must draw clear boundaries: who processes data, who stores logs, who maintains security checks, and what liabilities attach to breaches or non-compliance.

6. Actionable Recommendations for Swiss Organisations

  1. Enhance Transparency

    • Publish a Training Data Report: list all major datasets, dates, licence terms, and redaction policies.
  2. Strengthen Contracts

    • Include comprehensive DPAs and Service-Level Agreements (SLAs) with AI vendors, specifying prohibited processing and liability caps.
  3. Establish AI-Governance Processes

    • Form an AI Oversight Committee reporting to senior management.
    • Integrate AI modules into your existing Information Security Management System (ISMS) under ISO 27001 or equivalent.
  4. Staff Training & Awareness

    • Conduct mandatory workshops on the Swiss DSG, copyright fundamentals, and EU AI Act obligations for all teams working with AI.
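
The Training Data Report in recommendation 1 is easiest to keep current as a machine-readable record published alongside the model. The structure, field names, and dataset entries below are our own hypothetical suggestion, not a schema mandated by any regulator.

```python
import json

# Hypothetical "Training Data Report" for an illustrative BeeChat model.
training_data_report = {
    "model": "BeeChat-tutor-v1",
    "datasets": [
        {
            "name": "openly-licensed-textbooks",
            "collection_period": "2023-01 to 2024-06",
            "licence": "CC BY 4.0",
            "contains_personal_data": False,
            "redaction_policy": "not applicable",
        },
        {
            "name": "anonymised-tutoring-transcripts",
            "collection_period": "2024-01 to 2024-12",
            "licence": "internal, consent-based",
            "contains_personal_data": True,
            "redaction_policy": "names and student IDs removed before training",
        },
    ],
}

print(json.dumps(training_data_report, indent=2))
```

Keeping one entry per dataset, with licence status and redaction policy side by side, covers in a single artefact both the copyright transparency and the data-protection documentation duties discussed above.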

Refer to the overview of Artificial Intelligence provided by the Federal Office of Communications (OFCOM).


7. Swiss Outlook

  • Council of Europe Convention: Adoption by end-2026 will align Swiss rules with EU core AI principles, reducing fragmentation.
  • Sectoral vs. Horizontal Approach: Switzerland may prefer sector-specific rules (e.g., in finance or healthcare) layered atop broad data-protection and safety standards.
  • Going Forward: Until formal Swiss AI legislation arrives, use the EU AI Act as your compliance blueprint: this will minimise re-work when local rules take effect.

8. Conclusion & Next Steps

Swiss organisations face three intertwined challenges: (1) clarifying copyright for AI-generated and AI-trained content; (2) meeting data-protection duties through the AI lifecycle; and (3) complying with—and preparing for—the EU AI Act and forthcoming Swiss regulation.

Top Priorities:
- Implement training data transparency and opt-out mechanisms.
- Update contractual frameworks (DPAs, SLAs, liability).
- Deploy an AI-Governance and risk management system with clear roles, audit trails, and executive oversight.

Ongoing monitoring of EU guidance, Swiss legislative drafts, and judicial decisions will be crucial. An early, proactive approach not only ensures legal compliance but also builds stakeholder trust in a rapidly evolving AI ecosystem.


Appendix

  • Glossary:

    • GPAI: General Purpose AI foundation models
    • DPA: Data Processing Addendum
    • DSG: Swiss Federal Act on Data Protection (Datenschutzgesetz)
    • ISMS: Information Security Management System
  • Key Resources:

    • Digilaw: AI & Copyright (https://digilaw.ch/08-05-04-chatgpt-co-und-urheberrecht/)
    • EU AI Act High-Level Summary (https://artificialintelligenceact.eu/de/high-level-summary/)
    • Swiss FDPIC Guidelines (https://www.edoeb.admin.ch/edoeb/de/home.html)