{"id":11433,"date":"2024-05-27T14:52:44","date_gmt":"2024-05-27T06:52:44","guid":{"rendered":"https:\/\/www.1ai.net\/?p=11433"},"modified":"2024-05-27T14:53:40","modified_gmt":"2024-05-27T06:53:40","slug":"%e8%83%bd%e8%87%aa%e5%8a%a8%e5%8c%96%e8%a7%86%e9%a2%91%e5%89%aa%e8%be%91%e7%9a%84%e5%bc%80%e6%ba%90%e5%b7%a5%e5%85%b7%ef%bc%8cfunclip%e6%9c%ac%e5%9c%b0%e9%83%a8%e7%bd%b2%e5%92%8c%e7%ba%bf%e4%b8%8a","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/11433.html","title":{"rendered":"An open source tool that automates video editing: FunClip local deployment and online experience"},"content":{"rendered":"<p data-pm-slice=\"0 0 []\">In recent years, <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e7%9f%ad%e8%a7%86%e9%a2%91\" title=\"[View articles tagged with [short video]]\" target=\"_blank\" >short videos<\/a> have become hugely popular, especially with the rise of platforms such as Douyin (TikTok). Many people post their daily life or work online, attract a lot of attention, and some have even earned their first fortune this way.<\/p>\n<p data-track=\"40\">Video editing, however, is very time-consuming: it often takes several hours to edit a single video.<\/p>\n<p data-track=\"41\">Today I recommend an <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e9%98%bf%e9%87%8c\" title=\"[View articles tagged with [Ali]]\" target=\"_blank\" >Alibaba<\/a> <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%bc%80%e6%ba%90\" title=\"[View articles tagged with [open source]]\" target=\"_blank\" >open source<\/a> automated <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%a7%86%e9%a2%91%e5%89%aa%e8%be%91%e5%b7%a5%e5%85%b7\" title=\"[Sees articles with tags]\" target=\"_blank\" >video editing tool<\/a>\u2014<a href=\"https:\/\/www.1ai.net\/en\/tag\/funclip\" title=\"_Other Organiser\" target=\"_blank\" >FunClip<\/a>, which can help everyone edit videos easily.<\/p>\n<p data-track=\"42\">FunClip is a completely open source automatic video editing tool that can be installed on our own 
computers and supports offline use. It uses the FunASR Paraformer series models open sourced by Alibaba Tongyi Lab to perform speech recognition on the video. You can then freely select text segments or speakers in the recognition result and click the crop button to get the video clip for the corresponding segment.<\/p>\n<p data-track=\"43\">Editing videos with <strong>FunClip<\/strong> is therefore very simple: unlike traditional video editing software, we don\u2019t need to split the video manually.<\/p>\n<p data-track=\"44\">On top of these basic functions, FunClip has the following features:<\/p>\n<ul>\n<li data-track=\"45\">FunClip integrates the calling mechanisms of several advanced large language models and exposes flexible prompt settings, aiming to explore new approaches to video editing with large language models.<\/li>\n<li data-track=\"46\">FunClip uses Alibaba&#039;s top open source industrial-grade speech recognition model, Paraformer-Large, which performs excellently among open source Chinese ASR models, has been downloaded more than 13 million times on ModelScope, and can accurately predict timestamps.<\/li>\n<li data-track=\"47\">In addition, FunClip integrates the hot word customization feature of SeACo-Paraformer, which lets you designate entity names, person names, etc. as hot words during speech recognition, significantly improving recognition accuracy.<\/li>\n<li data-track=\"48\">FunClip is also equipped with the CAM++ speaker recognition model. Users can take the automatically identified speaker IDs as the basis for editing and easily crop out the parts spoken by a specific speaker.<\/li>\n<li data-track=\"49\">All of the above functions are available through a Gradio interactive interface. The installation process is simple and easy to operate. 
It also supports deployment on a server and operation through a web page.<\/li>\n<li data-track=\"50\">FunClip also supports editing multiple videos freely, and can automatically generate a complete SRT subtitle file for the whole video as well as SRT subtitles for the target clips, simplifying the entire editing process.<\/li>\n<\/ul>\n<p data-track=\"51\">In addition, FunClip has added intelligent cropping powered by large language models, integrating models such as the Qwen series and GPT series. It provides a default prompt, and we can also customize our own prompts. It also integrates the SeACo-Paraformer model open sourced by FunASR to further enable hot word customization in video editing.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-11434\" title=\"get-656\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/05\/get-656.jpg\" alt=\"get-656\" width=\"1988\" height=\"1658\" \/><\/div>\n<h1 class=\"pgc-h-arrow-right\" spellcheck=\"false\" data-track=\"52\">Installation \ud83d\udd28<\/h1>\n<p data-track=\"53\"><strong>Install the Python environment<\/strong><\/p>\n<p data-track=\"54\">FunClip only depends on a Python environment, so all we need to do is set one up.<\/p>\n<pre><code># Clone the FunClip repository\ngit clone https:\/\/github.com\/alibaba-damo-academy\/FunClip.git\ncd FunClip\n# Install the Python dependencies\npip install -r .\/requirements.txt<\/code><\/pre>\n<p data-track=\"55\"><strong>Install ImageMagick<\/strong><\/p>\n<p data-track=\"56\">If you want to use the video cropping function that automatically generates subtitles, you also need to install ImageMagick:<\/p>\n<ul>\n<li data-track=\"57\">Ubuntu<\/li>\n<\/ul>\n<pre><code>apt-get -y update &amp;&amp; apt-get -y install ffmpeg imagemagick\nsed -i &#039;s\/none\/read,write\/g&#039; \/etc\/ImageMagick-6\/policy.xml\r\n<\/code><\/pre>\n<ul>\n<li data-track=\"58\">MacOS<\/li>\n<\/ul>\n<pre><code>brew install imagemagick\nsed -i &#039;s\/none\/read,write\/g&#039; 
\/usr\/local\/Cellar\/imagemagick\/7.1.1-8_1\/etc\/ImageMagick-7\/policy.xml<\/code><\/pre>\n<ul>\n<li data-track=\"60\">Windows<\/li>\n<\/ul>\n<p data-track=\"61\">You need to download and install ImageMagick from the following address:<\/p>\n<pre><code>https:\/\/imagemagick.org\/script\/download.php#windows<\/code><\/pre>\n<p data-track=\"62\">To use FunClip, first start it with the following command:<\/p>\n<pre><code>python funclip\/launch.py<\/code><\/pre>\n<p data-track=\"64\">Then visit localhost:7860 in your browser to open the homepage.<\/p>\n<p data-track=\"65\">Then follow the steps below to edit the video:<\/p>\n<ul>\n<li data-track=\"66\">Upload your video (or use the example video provided)<\/li>\n<li data-track=\"67\">Set hot words and the file output path (for saving recognition results, videos, etc.)<\/li>\n<li data-track=\"68\">Click the Recognize button to get the recognition result, or click Recognize + Distinguish Speakers to also identify speaker IDs on top of the speech recognition.<\/li>\n<li data-track=\"69\">Copy the text segments you want to keep from the recognition result into the corresponding box, or enter the speaker IDs into the corresponding box.<\/li>\n<li data-track=\"70\">Configure clip parameters, offsets, subtitle settings, etc.<\/li>\n<li data-track=\"71\">Click the &quot;Crop&quot; or &quot;Crop+Subtitles&quot; button<\/li>\n<\/ul>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-11437\" title=\"get-659\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/05\/get-659.jpg\" alt=\"get-659\" width=\"2690\" height=\"1754\" \/><\/div>\n<p class=\"pgc-p\">Please refer to the following tutorial for clipping with large language models:<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-11435\" title=\"get-657\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/05\/get-657.jpg\" alt=\"get-657\" width=\"1334\" height=\"1502\" \/><\/div>\n<p 
data-track=\"74\">FunClip also has an online service deployed in the ModelScope community, which can be experienced at the following address:<\/p>\n<p data-track=\"75\">https:\/\/modelscope.cn\/studios\/iic\/funasr_app_clipvideo\/summary<\/p>\n<p data-track=\"76\">We can upload our own video or audio, or use the demo provided by FunClip:<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-11436\" title=\"get-658\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/05\/get-658.jpg\" alt=\"get-658\" width=\"1078\" height=\"642\" \/><\/div>\n<p data-track=\"77\">For more information, please visit GitHub:<\/p>\n<p data-track=\"78\">https:\/\/github.com\/alibaba-damo-academy\/FunClip<\/p>\n<p data-track=\"79\">In general, FunClip is a completely open source automated video editing tool that can be deployed locally, helping us edit videos more easily and record the beautiful moments of life.<\/p>","protected":false},"excerpt":{"rendered":"<p>Short videos have been very popular in recent years, especially with the rise of short video platforms such as Douyin (TikTok). Many people post their daily life or work online, attracting a lot of attention, and some have even earned their first fortune as a result. Video editing, however, is very labor-intensive: it often takes several hours to cut a single video. Today we recommend an Alibaba open source automated video editing tool - FunClip, which can help you edit videos easily. FunClip is a completely open source automated video editing tool that can be installed on our own computer and supports offline use. 
It can also be used for speech recognition of the video by calling the open source FunASR Paraformer series of models from Alibaba Tongyi Labs, and then<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[149,144],"tags":[2632,219,1007,2176,1759],"collection":[],"class_list":["post-11433","post","type-post","status-publish","format-standard","hentry","category-jiaocheng","category-baike","tag-funclip","tag-219","tag-1007","tag-2176","tag-1759"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/11433","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=11433"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/11433\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=11433"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=11433"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=11433"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=11433"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}