# AI one-click outfit changing: deploying the IDM-VTON virtual try-on software on your local computer

*2024-05-13 · https://www.1ai.net/en/10221.html*

With this software you can change the clothes in a photo with a single click.

A previous post, [Use AI for virtual fitting, change clothes online with just a photo](https://www.1ai.net/en/9963.html/), briefly demonstrated the online version. Today, let's walk through installing it on your local computer.

Local setup requires some technical background; those comfortable with it can follow along. If you don't know how to do it, just scroll to the end and move your fingers: I'll send you a ready-made **offline package**.

Without further ado, let's get started.

Prerequisites: a local graphics card (I use a 3090), Windows, and Python or Conda, Git, and similar tools already installed.

**1. Get the source code**

This is an open-source project, so you can fetch the source directly with git:

```shell
git clone https://github.com/yisol/IDM-VTON.git
```

**2. Create a virtual environment**

If you only use Python occasionally, installing Python 3.10 directly is enough, and this step is optional. If you often play with Python AI projects, you should know how to use Conda to create a virtual environment that isolates projects from each other.

The usual practice is to use the environment file shipped with the project to create the environment and install all dependencies in one go:

```shell
conda env create -f environment.yaml
```

However, this method has a high failure rate; in fact, it failed for me too. So I switched to creating the virtual environment first:

```shell
conda create -n idm python=3.10
conda activate idm
```

After creating the virtual environment, remember to activate it.

**3. Install dependencies**

Once the virtual environment is created and activated, you can install dependencies with pip. I usually install torch separately first:

```shell
cd IDM-VTON
pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118
```

Then batch-install the remaining dependencies:

```shell
pip install -r req.txt
```

req.txt is a file I created myself; its contents are:

```text
accelerate==0.25.0
torchmetrics==1.2.1
tqdm==4.66.1
transformers==4.36.2
diffusers==0.25.0
einops==0.7.0
bitsandbytes==0.39.0
scipy==1.11.1
opencv-python
gradio==4.24.0
fvcore
cloudpickle
omegaconf
pycocotools
basicsr
av
onnxruntime==1.16.2
```

After this finishes, all dependencies are installed. In my local test, every dependency at its pinned version installed cleanly, with no conflicts and no errors.

**4. Get the models**

With the environment configured, the next step is to obtain the models. I divide them roughly into two categories: the base model, and the checkpoints exclusive to this project.

Let's start with the project-exclusive checkpoints. Note the following directory structure:

```text
ckpt
|-- densepose
|   |-- model_final_162be9.pkl
|-- humanparsing
|   |-- parsing_atr.onnx
|   |-- parsing_lip.onnx
|-- openpose
|   |-- ckpts
|       |-- body_pose_model.pth
```

The project author has already created these directories and files for you.
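Once you have downloaded the real checkpoints (next step), a quick file-size check helps confirm that none of them is still an empty stub. A minimal sketch: the file names come from the ckpt tree above, but the 1 MB "looks like a real checkpoint" threshold is my own assumption.

```python
import os

# Checkpoints from the ckpt tree above; the 1 MB threshold for telling a
# real checkpoint from an empty placeholder is my own guess.
EXPECTED = [
    "densepose/model_final_162be9.pkl",
    "humanparsing/parsing_atr.onnx",
    "humanparsing/parsing_lip.onnx",
    "openpose/ckpts/body_pose_model.pth",
]

def unreplaced(root="ckpt", min_bytes=1_000_000):
    """Return the relative paths that are missing or suspiciously small."""
    bad = []
    for rel in EXPECTED:
        path = os.path.join(root, rel)
        if not os.path.isfile(path) or os.path.getsize(path) < min_bytes:
            bad.append(rel)
    return bad

if __name__ == "__main__":
    missing = unreplaced()
    if missing:
        print("Still placeholders or missing:", missing)
    else:
        print("All project-exclusive checkpoints look real.")
```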
However, these shipped model files are just placeholders with no actual content, so you need to download the real ones and replace them yourself.

The base model is downloaded automatically on first launch; it is about 28 GB.

All of these models are available at this address:

```text
https://huggingface.co/yisol/IDM-VTON/tree/main
```

For the exclusive checkpoints, find each model in the corresponding folder and download it to the matching local path. For the rest, I let the following commands download them automatically:

```shell
set HF_HOME=.\cache
python gradio_demo/app.py
```

Note that the first line is very important: it keeps your C drive from filling up!

The second line actually starts the web UI. Since the models are loaded first during startup, this doubles as a way to download them. The models are large, so even on a fast connection this will take a while.

**5. Run the web UI**

Once the models are loaded, the web UI starts automatically. To start it manually later, just run the same commands:

```shell
set HF_HOME=.\cache
python gradio_demo/app.py
```

After entering the commands, some output will appear.
If you see the following line, it has basically started successfully:

```text
Running on local URL: http://127.0.0.1:7860
```

Copy that address, open it in your browser, and have fun.

The interface is exactly the same as the web version introduced previously, so the usage is the same; for readability, I'll briefly recap it.

![get-198](https://www.1ai.net/wp-content/uploads/2024/05/get-198.jpg)

The general steps are: ① upload a model photo, ② upload the clothes, then click to try them on.

For the clothing photo, the cleanest flat-lay shot is naturally best. For the model photo, anything where the clothes are easy to distinguish from the rest of the image will do.

**6. Run offline**

In theory, once the local configuration is complete and the models are cached locally, the software should be able to run offline.
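To see what has actually landed in the local cache, you can walk the cache directory set via `HF_HOME` above. A minimal sketch, assuming the standard huggingface_hub cache layout (`<HF_HOME>/hub/models--{org}--{name}/...`); the layout is the library's convention, not something this project defines.

```python
import os

def cached_repos(hf_home="cache"):
    """Map repo id -> rough on-disk size for an HF cache directory.

    Assumes the standard huggingface_hub layout:
    <HF_HOME>/hub/models--{org}--{name}/...
    Symlinked entries are skipped so blobs are not counted twice.
    """
    hub = os.path.join(hf_home, "hub")
    repos = {}
    if not os.path.isdir(hub):
        return repos
    for entry in os.listdir(hub):
        if not entry.startswith("models--"):
            continue
        total = 0
        for dirpath, _, files in os.walk(os.path.join(hub, entry)):
            for name in files:
                path = os.path.join(dirpath, name)
                if os.path.isfile(path) and not os.path.islink(path):
                    total += os.path.getsize(path)
        # models--yisol--IDM-VTON  ->  yisol/IDM-VTON
        repos[entry[len("models--"):].replace("--", "/")] = total
    return repos

if __name__ == "__main__":
    for repo, size in cached_repos().items():
        print(f"{repo}: {size / 1e9:.1f} GB")
```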
However, since the code loads the models through huggingface, it still tries to go online even when a local cache exists.

To solve this, I first tried the offline environment variables:

```shell
set HF_HUB_OFFLINE=1
set TRANSFORMERS_OFFLINE=1
```

But for reasons I never figured out, this kept erroring.

Later I thought of another solution: copy the downloaded model cache files directly into a separate folder. A small discovery here: if you copy the soft links in the cache while following them, you get the actual files they point to.

Then modify the code in app.py:

```python
base_path = r'cache\IDM-VTON'
```

After this modification, the local models load completely offline.

**7. Offline one-click run package**

If you are good at configuration, your network is smooth, and your hard disk reads and writes quickly, the installation above covers everything: just copy the commands and the setup is quickly done. But for most people, things may not go so smoothly.

![get-197](https://www.1ai.net/wp-content/uploads/2024/05/get-197.jpg)

For example, on my computer it took **2 hours**, and after packing up 20 GB it took a long time again to upload it to the network disk! The earlier configuration and trial-and-error certainly took even more time.

So a ready-made, packaged software bundle is well worth making.
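When assembling such a package, the cache-copying trick from the previous section can be sketched in Python: `shutil.copytree` follows soft links by default (`symlinks=False`), which is exactly the "copy the soft link, get the real file" behaviour. The snapshot path below is illustrative only; the actual hash directory name differs on every machine.

```python
import os
import shutil

def materialize_snapshot(src, dst):
    """Copy a cached model folder to dst, replacing symlinks with real files.

    shutil.copytree with symlinks=False (the default) follows each soft
    link and copies the file it points to, so dst ends up self-contained.
    """
    shutil.copytree(src, dst, symlinks=False)
    return dst

if __name__ == "__main__":
    # Hypothetical snapshot path under the HF_HOME cache used above;
    # replace <hash> with the actual snapshot directory on your machine.
    src = os.path.join("cache", "hub", "models--yisol--IDM-VTON",
                       "snapshots", "<hash>")
    if os.path.isdir(src):
        materialize_snapshot(src, os.path.join("cache", "IDM-VTON"))
```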
It saves time for me in the future, and for everyone else.

Here is a brief guide to using the package. First obtain it; what you get is a **.7z** archive. Decompress it with your archiver; you will be asked for a password during extraction, which you can find in the instructions on the network disk.

![get-199](https://www.1ai.net/wp-content/uploads/2024/05/get-199.jpg)

Once decompression finishes, double-click "**Launch.exe**".

The startup process looks like this:

![get-200](https://www.1ai.net/wp-content/uploads/2024/05/get-200.jpg)

First a prompt message appears; wait until the URL information pops up, after which your default browser opens automatically. In the browser you will see the following interface:

![get-200](https://www.1ai.net/wp-content/uploads/2024/05/get-200.jpg)

Usage is exactly the same as before: upload the model photo and the clothes photo, then click to try it on.
Wait a while, and the final result appears in the upper-right corner.

The waiting time varies greatly with your computer's configuration. For example, on a computer with a 3060 card it took me something like a few hours to run a single picture... it felt like the model was spilling into shared memory, or simply did not have enough resources, so it was very slow. But on a computer with a 3090 graphics card it was quite fast: 54 seconds the first time, and **only 14 seconds the second time**. One image every 14 seconds is well within the acceptable range.

That's about it for today's article. If you run into any difficulties with installation and configuration, feel free to discuss. If you use the software package directly, there should be no problems, as long as your computer's configuration is strong enough.

Download link: https://pan.baidu.com/s/1hF8JLgRLFtD7L5_QE5xA_g?pwd=tony