<h1>OpenAI News: Improved fine-tuning API, expanded custom model plans</h1>
<p>Published 2024-04-05 · https://www.1ai.net/en/7220.html</p>
<p data-vmark="9b09"><a href="https://www.1ai.net/en/tag/openai" title="View articles tagged OpenAI" target="_blank">OpenAI</a> recently issued a press release <strong>announcing improvements to the fine-tuning <a href="https://www.1ai.net/en/tag/api" title="View articles tagged API" target="_blank">API</a> and a further expansion of its <a href="https://www.1ai.net/en/tag/%e5%ae%9a%e5%88%b6%e6%a8%a1%e5%9e%8b" title="View articles tagged Custom Models" target="_blank">Custom Models</a> program.</strong></p>
<p data-vmark="c174"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-7221" title="295291eb-720b-4898-a65b-8d6f4581d43c" src="https://www.1ai.net/wp-content/uploads/2024/04/295291eb-720b-4898-a65b-8d6f4581d43c.jpg" alt="295291eb-720b-4898-a65b-8d6f4581d43c" width="1024" height="763" /></p>
<p data-vmark="4f5f">The press release lists the following improvements to the fine-tuning API:</p>
<h3 data-vmark="e0ef">Epoch-based checkpoint creation</h3>
<p data-vmark="ac97">A complete checkpoint of the fine-tuned model is now generated automatically at the end of each training epoch (one full pass over every example in the training dataset). Keeping these snapshots reduces the need for subsequent retraining, especially when later epochs overfit.</p>
<h3 data-vmark="eecf">Comparative Playground</h3>
<p data-vmark="65e5">A new side-by-side Playground UI for comparing model quality and performance, allowing human evaluation of the outputs of multiple models or fine-tuning snapshots against a single prompt.</p>
<h3 data-vmark="d451">Third-party integrations</h3>
<p data-vmark="edaa">Support for integration with third-party platforms (starting this week with Weights and Biases) lets developers share detailed fine-tuning data with the rest of their stack.</p>
<h3 data-vmark="a5bd">More comprehensive validation metrics</h3>
<p data-vmark="e4d2">Metrics such as loss and accuracy can now be computed over the entire validation dataset, rather than over sampled batches, giving better insight into model quality.</p>
<h3 data-vmark="fd33">Hyperparameter configuration</h3>
<p data-vmark="b35a">Available hyperparameters can now be configured from the dashboard, rather than only through the API or SDK.</p>
<h3 data-vmark="af5a">Improved fine-tuning dashboard</h3>
<p data-vmark="2ed6">The dashboard now supports configuring hyperparameters, viewing more detailed training metrics, and rerunning jobs from previous configurations.</p>
<h2 data-vmark="0b6c">Expanding the Custom Model Program</h2>
<p data-vmark="dde2">To further expand the Custom Models program, OpenAI has also launched an assisted fine-tuning service. Developers can work with members of OpenAI's technical team to train and optimize models for specific domains, with access to additional hyperparameters and various parameter-efficient fine-tuning (PEFT) methods.</p>
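The main practical benefit of per-epoch checkpoints is that you can keep the snapshot with the lowest validation loss instead of the final epoch, which is exactly where overfitting shows up. A minimal sketch in plain Python (the checkpoint records and field names here are illustrative, not the fine-tuning API's actual schema):

```python
def best_checkpoint(checkpoints):
    """Pick the checkpoint with the lowest validation loss.

    Each record mimics one epoch-end snapshot; a rising valid_loss
    after an early minimum is the classic overfitting signature.
    """
    return min(checkpoints, key=lambda c: c["valid_loss"])


# Illustrative epoch-end metrics: training loss keeps falling,
# but validation loss bottoms out at epoch 2 and then climbs.
checkpoints = [
    {"epoch": 1, "train_loss": 0.91, "valid_loss": 0.88},
    {"epoch": 2, "train_loss": 0.54, "valid_loss": 0.61},
    {"epoch": 3, "train_loss": 0.32, "valid_loss": 0.67},
    {"epoch": 4, "train_loss": 0.18, "valid_loss": 0.74},
]

print(best_checkpoint(checkpoints)["epoch"])  # the snapshot worth keeping
```

Without per-epoch snapshots, recovering the epoch-2 model above would require rerunning the whole job with fewer epochs; with them, it is just a matter of selecting the right checkpoint.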
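To see why full-dataset validation metrics give better insight than sampled batches, compare the two computations directly. A small sketch of the concept (the per-example numbers are made up, and this is not the API's actual reporting code):

```python
import random


def full_dataset_metrics(losses, correct):
    """Mean loss and accuracy over the entire validation set."""
    n = len(losses)
    return sum(losses) / n, sum(correct) / n


def sampled_batch_metrics(losses, correct, batch_size, seed=0):
    """Mean loss and accuracy over one randomly sampled batch.

    A small batch is a noisy estimate: which examples land in it
    can swing the reported numbers noticeably.
    """
    rng = random.Random(seed)
    idx = rng.sample(range(len(losses)), batch_size)
    return (sum(losses[i] for i in idx) / batch_size,
            sum(correct[i] for i in idx) / batch_size)


# Illustrative per-example validation results (loss, and 1/0 for
# whether the prediction was correct).
losses = [0.2, 0.9, 0.4, 1.5, 0.3, 0.8, 0.1, 1.1]
correct = [1, 0, 1, 0, 1, 1, 1, 0]

print(full_dataset_metrics(losses, correct))
print(sampled_batch_metrics(losses, correct, batch_size=3))
```

The full-dataset numbers are exact for the validation set, while the sampled-batch numbers vary with the seed and batch size; on large validation sets the trade-off is simply compute cost versus estimate quality.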