As people increasingly turn to artificial intelligence (AI) for advice, some US lawyers are telling their clients not to treat AI chatbots like trusted confidants when their freedom or legal liability is on the line.
These warnings became more urgent after a federal judge in New York ruled this year that the former CEO of a bankrupt financial services company could not shield his AI chats from prosecutors pursuing securities fraud charges against him.
In the wake of the ruling, attorneys have been advising that conversations with chatbots like Anthropic's Claude and OpenAI's ChatGPT could be demanded by prosecutors in criminal cases or by litigation adversaries in civil cases.
"We are telling our clients: You should proceed with caution here," said Alexandria Gutierrez Swette, a lawyer at New York-based law firm Kobre & Kim.
People's discussions with their lawyers are almost always deemed confidential under US law. But AI chatbots are not lawyers, and attorneys are instructing clients to take steps that could keep their communications with AI tools more private.
In emails to clients and advisories posted on their websites, more than a dozen major US law firms have outlined advice for people and companies to decrease the chances of AI chats winding up in court.
Similar warnings are also appearing in engagement agreements between some firms and their clients. For instance, New York-based firm Sher Tremonte stated in a recent client contract that sharing a lawyer's advice or communications with a chatbot could erase the legal protection known as attorney-client privilege, which usually shields communications between lawyers and their clients.