On September 24, according to SSBCrack, a U.S. attorney in California was fined $10,000 (note: roughly RMB 71,149 at the current exchange rate) by the court for using the artificial intelligence tool ChatGPT, which fabricated citations, in an appeal filed with a state court.

According to reports, this is the heaviest fine a California court has ever issued over AI fabrication. The ruling shows that 21 of the 23 citations in the counsel's opening brief were fictitious. The court stressed the importance of independent verification, making clear that every reference in any legal filing must be read and confirmed by counsel personally and must not include unverified sources.
The decision, issued by California's Second District Court of Appeal, was intended as a strong warning to the legal profession. The court noted that courts across the United States increasingly face the problem of lawyers citing false legal precedents. The decision coincides with a tightening of regulation in California: the California Judicial Council has asked courts to either ban the use of generative artificial intelligence by judges and court staff or establish a clear usage policy by mid-December. At the same time, the State Bar of California is revisiting its rules of professional conduct to meet the challenges posed by the rapid development of AI technology.
The sanctioned lawyer, Amir Mostafavi, admitted that he had not reviewed the AI-generated content before filing his appeal in July 2023. He said he had merely used ChatGPT to polish his writing and had not realized the tool would fabricate case citations. Mostafavi argued that rejecting artificial intelligence outright is unrealistic for lawyers; AI is now an indispensable resource whose significance is comparable to the transition from physical law libraries to online databases. But he warned: "As long as AI systems continue to generate erroneous information, legal practitioners must use them with care. In the meantime, we are bound to see some victims, suffer some losses, and experience some damage."
The fine imposed on Mostafavi is considered one of the highest ever issued against a lawyer for misuse of AI. Earlier this year, a federal district court judge in California sanctioned two law firms more than US$31,000 (about RMB 221,000) because the research materials they submitted relied on erroneous AI-generated information. The judge made clear that he felt misled and stressed the need for strong disciplinary mechanisms to deter similar incidents.
Experts predict that the number of cases in which lawyers cite fictitious case law will continue to rise as generative AI becomes widely used in the legal profession. According to one tracking database, the United States has recorded over 600 such incidents, about 52 of which occurred in California. This rising trend highlights the critical need for legal practitioners to be educated and vigilant, especially in building the capacity to identify and verify AI-generated information.
Legal observers stressed that raising awareness of AI technology is essential, as many lawyers are not fully aware of the risk of AI "hallucination", i.e., making up false information. Some suggested that lawyers who misuse AI should be required to take mandatory training courses as a remedial measure to prevent future errors. Concerns have also been raised that fabricated case law appears not only in lawyers' filings but may even be cited by judges in their rulings, suggesting the problem's impact may extend far beyond the community of practicing lawyers.