Large Language Models Pose Growing Security Risks

By Steven Rosenbush, Wall Street Journal (subscription required), February 20, 2025

More powerful and pervasive large language models are creating a new cybersecurity challenge for companies.

The risks posed by LLMs, a form of generative artificial intelligence that communicates through language in a humanlike way, are already manifold. There is, for example, a danger that sensitive corporate or personal information will be exposed, inadvertently or deliberately, to models widely accessible to the public. There is also a possibility that models could introduce unsafe code or data into a company.
