On February 9, according to China Media Group, Voice of China reported that, with Spring Festival approaching, a different kind of "New Year greeting" has appeared on short-video platforms: familiar movie stars, sports stars, business celebrities, and even historical figures clasp their hands on screen and offer auspicious blessings. Digital economy scholar Liu Xingliang notes that the core of these videos is cutting-edge AI deep synthesis technology, which comprises two key components: visual synthesis and speech synthesis.

Visual synthesis is "AI face-swapping," also known as DeepFake (deep forgery, commonly shortened to "deepfake"). Using deep learning and computer-vision algorithms, a large model learns a real person's facial features, expressions, and mouth movements, then synthesizes them onto new footage so that the person on screen appears to be speaking. Speech synthesis imitates a celebrity's voice, timbre, and even intonation, making the AI-generated voice sound more realistic. Combining the two produces an "AI New Year greeting" video.
Zhang Zhang, Director of the Institute of Artificial Intelligence and Law at the China University of Political Science and Law, stated that directly using rights-holders' portraits and voices without authorization infringes their personality rights, including portrait rights and voice rights; voice rights are protected with reference to the rules on portrait rights. If the synthesized voice is used in a way that insults or defames others and damages their reputation, it may also constitute infringement of reputation rights. If the voice material used is protected by copyright, or someone's voice has been registered as a trademark, the use may further infringe copyright and trademark rights. Using others' voices without consent also violates the relevant provisions of administrative regulations.
In addition, the Provisions on the Administration of Deep Synthesis of Internet Information Services specify that providers of functions for editing biometric information such as faces and voices should prompt users to inform the person being edited in accordance with the law and obtain that person's separate consent.
Liu Xingliang notes that mainstream, large AI service platforms and some specialized video synthesis tools have begun adding mechanisms to prevent abuse. For example, by forcibly attaching an "AI-generated" label or watermark, a viewer can tell at a glance that the content is synthetic; user agreements explicitly prohibit unauthorized use of others' images and voices; and some platforms have set up backend content detection and filtering mechanisms, using AI itself to identify suspected deep-synthesis videos. In reality, however, many tools still offer insufficient protection; in particular, open-source models and small-scale applications, which operate without labeling requirements or regulatory constraints, may leave room for abuse.
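The forced-labeling idea described above can be illustrated with a minimal sketch: a helper that attaches an "AI-generated" provenance record to a media file as a JSON sidecar, bound to the file's content hash, and a checker that verifies the label later. The function names, sidecar format, and label text here are illustrative assumptions, not any platform's actual implementation.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical disclosure text a platform might require on synthetic media.
LABEL = "AI-generated"


def attach_ai_label(media_path: str, tool_name: str) -> dict:
    """Write a JSON sidecar marking the media file as synthetic.

    The label is bound to the file's content via a SHA-256 digest,
    so later copies of the file can be checked against the record.
    """
    data = Path(media_path).read_bytes()
    record = {
        "file": Path(media_path).name,
        "sha256": hashlib.sha256(data).hexdigest(),
        "label": LABEL,
        "generator": tool_name,
    }
    Path(media_path + ".label.json").write_text(json.dumps(record))
    return record


def verify_ai_label(media_path: str) -> bool:
    """Return True if a sidecar exists and still matches the file's digest."""
    sidecar = Path(media_path + ".label.json")
    if not sidecar.exists():
        return False
    record = json.loads(sidecar.read_text())
    digest = hashlib.sha256(Path(media_path).read_bytes()).hexdigest()
    return record.get("label") == LABEL and record.get("sha256") == digest
```

A detached sidecar like this is trivially strippable, which is why the article notes that compliant platforms lean on visible watermarks burned into the frames themselves (and, in practice, robust invisible watermarks) rather than metadata alone.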
Zhao, a researcher at the Intellectual Property Research Center of the China University of Political Science and Law, said that AI tool operators and platform enterprises should fulfill their primary responsibilities and establish a whole-chain governance mechanism: they should strictly implement the labeling obligations under the relevant regulations; tool providers should also offer prior notification, review, and traceability; upon notice from rights-holders, platforms should promptly take measures such as deletion, and if they fail to do so, they should bear joint and several liability for the enlarged portion of the damage. The public should also raise its legal awareness.