AI and the Law: What Every Student Should Know
Privacy, copyright, and your digital footprint — the legal landscape around your daily digital life
When You Use an AI Platform, You Are Also a Source of Data
Every time you use an AI tool, you generate data. The questions you ask, the conversations you have, the preferences you reveal — companies collect and analyze all of it, and in many cases use it to train future models. This is not a conspiracy theory. It is how these businesses work, and it is disclosed in their terms of service (which almost no one reads).
US privacy law is fragmented — there is no single comprehensive federal privacy law covering all consumers. Instead, a patchwork applies: COPPA restricts online data collection from children under 13, FERPA protects student educational records, and HIPAA protects health information held by healthcare providers and insurers. California has the strongest state-level privacy protections. For most students, the practical rule is simple: do not enter information into AI tools that you would not want the company to have — including your full name, school, home address, and personal information about other people.
Your digital footprint and the law: Everything you post, share, or do online creates a record. Your school has legal authority over certain aspects of your digital behavior — particularly behavior that disrupts the school environment, uses school equipment or networks, or rises to the level of harassment or a true threat. The First Amendment limits how public schools can punish student speech, but it does not protect all speech in all contexts.
Copyright and AI-Generated Content
Copyright law is struggling to keep up with AI, and the legal landscape is genuinely unsettled. Who owns an AI-generated image? Who owns AI-generated music? Who owns a book written with heavy AI assistance? Courts and Congress are actively working through these questions.
For practical purposes as a student: the US Copyright Office has taken the position that works generated entirely by AI, without meaningful human authorship, are not eligible for copyright protection — which means you cannot necessarily claim AI-generated content as your intellectual property. And AI models trained on existing copyrighted works raise a separate question: whether that training itself was lawful. That question is currently being litigated in courts around the country.
AI in the Courtroom — Already Happening
AI is already in the American legal system in ways that affect real people's lives. AI tools are used in some jurisdictions to assess the likelihood that a defendant will commit another crime — so-called risk assessment tools that influence bail decisions and sentencing recommendations. Whether these tools are fair, and whether they replicate existing biases, is a major ongoing legal and policy debate.
AI-generated evidence — deepfake videos, AI-synthesized audio, AI-analyzed data — is increasingly appearing in litigation. Courts are developing standards for how to evaluate its reliability. The legal profession is in the middle of a significant transition, and understanding these issues is a genuine advantage for students interested in law.
Ready-to-Use Prompts
Copy these into ChatGPT, Claude, or any AI tool. Adapt for your situation.