{"id":8172,"date":"2024-04-17T08:50:34","date_gmt":"2024-04-17T00:50:34","guid":{"rendered":"https:\/\/www.1ai.net\/?p=8172"},"modified":"2024-04-17T08:50:34","modified_gmt":"2024-04-17T00:50:34","slug":"openai-%e6%8e%a8%e5%87%ba-batch-%e6%89%b9%e5%a4%84%e7%90%86-api%ef%bc%9a%e5%8d%8a%e4%bb%b7%e6%8a%98%e6%89%a3%ef%bc%8c24-%e5%b0%8f%e6%97%b6%e5%86%85%e8%be%93%e5%87%ba%e7%bb%93%e6%9e%9c","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/8172.html","title":{"rendered":"OpenAI launches Batch API: 50% discount, results within 24 hours"},"content":{"rendered":"<p data-vmark=\"b9ab\">In the early hours of this morning, <a href=\"https:\/\/www.1ai.net\/en\/tag\/openai\" title=\"View articles tagged with OpenAI\" target=\"_blank\" >OpenAI<\/a> launched, for <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%bc%80%e5%8f%91%e8%80%85\" title=\"View articles tagged with Developers\" target=\"_blank\" >Developers<\/a>, a new Batch <a href=\"https:\/\/www.1ai.net\/en\/tag\/api\" title=\"View articles tagged with API\" target=\"_blank\" >API<\/a>.<span class=\"accentTextColor\">Results are delivered within 24 hours at a 50% discount on standard API pricing.<\/span><\/p>\n<p data-vmark=\"ba76\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-8173\" title=\"04a0a5f3-d7db-4774-85d6-63be2444cc11\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/04\/04a0a5f3-d7db-4774-85d6-63be2444cc11.jpg\" alt=\"04a0a5f3-d7db-4774-85d6-63be2444cc11\" width=\"1440\" height=\"810\" \/><\/p>\n<p data-vmark=\"77c5\">The new Batch API is designed for asynchronous workloads: when developers need to process large volumes of text, images, or summaries, they can submit the jobs through this API and OpenAI will return the results within 24 hours. 
This allows OpenAI to schedule the work during off-peak hours, conserving server resources,<span class=\"accentTextColor\">and in return developers receive a 50% discount and higher rate limits.<\/span><\/p>\n<p data-vmark=\"8ff2\">The new Batch API supports the following models:<\/p>\n<ul class=\"small-size list-paddingleft-2\">\n<li>\n<p data-vmark=\"a3d3\">gpt-3.5-turbo<\/p>\n<\/li>\n<li>\n<p data-vmark=\"2c8a\">gpt-3.5-turbo-16k<\/p>\n<\/li>\n<li>\n<p data-vmark=\"e2c3\">gpt-4<\/p>\n<\/li>\n<li>\n<p data-vmark=\"c5fe\">gpt-4-32k<\/p>\n<\/li>\n<li>\n<p data-vmark=\"4004\">gpt-4-turbo-preview<\/p>\n<\/li>\n<li>\n<p data-vmark=\"ef2a\">gpt-4-turbo<\/p>\n<\/li>\n<li>\n<p data-vmark=\"6576\">gpt-3.5-turbo-0301<\/p>\n<\/li>\n<li>\n<p data-vmark=\"a571\">gpt-3.5-turbo-16k-0613<\/p>\n<\/li>\n<li>\n<p data-vmark=\"2933\">gpt-3.5-turbo-1106<\/p>\n<\/li>\n<li>\n<p data-vmark=\"0c74\">gpt-3.5-turbo-0613<\/p>\n<\/li>\n<li>\n<p data-vmark=\"af26\">gpt-4-0314<\/p>\n<\/li>\n<li>\n<p data-vmark=\"e9ec\">gpt-4-turbo-2024-04-09<\/p>\n<\/li>\n<li>\n<p data-vmark=\"4134\">gpt-4-32k-0314<\/p>\n<\/li>\n<li>\n<p data-vmark=\"d78b\">gpt-4-32k-0613<\/p>\n<\/li>\n<\/ul>\n<p data-vmark=\"dfed\">OpenAI's API documentation already explains how to use the Batch API,<span class=\"accentTextColor\">including how to create, retrieve, and cancel batches.<\/span> Developers who need it can start with the official resources below:<\/p>\n<p data-vmark=\"252b\">&#8211;\u00a0<a title=\"Batch API Examples - OpenAI API Documentation\" href=\"https:\/\/platform.openai.com\/docs\/api-reference\/batch\" target=\"_blank\" rel=\"noopener\">Batch API Examples \u2013 OpenAI API Documentation<\/a><\/p>\n<p data-vmark=\"091b\">&#8211;\u00a0<a title=\"Batch API FAQs | OpenAI Help Center\" href=\"https:\/\/help.openai.com\/en\/articles\/9197833-batch-api-faq\" target=\"_blank\" rel=\"noopener\">Batch API FAQs | OpenAI Help Center<\/a><\/p>","protected":false},"excerpt":{"rendered":"<p>OpenAI launched the Batch 
processing API for developers early this morning, delivering results within 24 hours at a 50% discount. The new Batch API is designed for asynchronous workloads: when developers need to process large volumes of text, images, or summaries, they can submit the jobs through the API and OpenAI will return the results within 24 hours. This allows OpenAI to schedule the work during off-peak hours, conserving server resources while giving developers a 50% discount and higher rate limits. The new Batch API supports the following models: gpt-3.5-turbo gpt-3.5-turbo-16k gpt-4 gpt-4-32k<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[1033,190,1903],"collection":[],"class_list":["post-8172","post","type-post","status-publish","format-standard","hentry","category-news","tag-api","tag-openai","tag-1903"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/8172","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=8172"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/8172\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=8172"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=8172"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=8172"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=8172"
}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
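The article above describes submitting asynchronous jobs (create, retrieve, cancel) to the Batch API, which, per the linked OpenAI documentation, accepts a JSONL file where each line is one self-contained request. As a minimal sketch of how such an input file could be assembled (the prompts, `custom_id` scheme, and helper name here are hypothetical, not from the article):

```python
import json

def build_batch_lines(prompts, model="gpt-3.5-turbo"):
    """Build the JSONL body for a Batch API input file: one request per line."""
    lines = []
    for i, prompt in enumerate(prompts):
        request = {
            # custom_id lets you match results back to inputs when the
            # batch completes (within the 24-hour window the article cites).
            "custom_id": f"task-{i}",
            "method": "POST",
            "url": "/v1/chat/completions",
            "body": {
                "model": model,  # one of the supported models listed above
                "messages": [{"role": "user", "content": prompt}],
            },
        }
        lines.append(json.dumps(request))
    return "\n".join(lines)

jsonl = build_batch_lines(["Summarize document A", "Summarize document B"])
print(jsonl.count("\n") + 1)  # prints 2: one request per line
```

Per the OpenAI API reference linked in the article, the resulting file would then be uploaded and passed to the batch-creation endpoint (in the official Python SDK, `client.batches.create(input_file_id=..., endpoint="/v1/chat/completions", completion_window="24h")`), with `client.batches.retrieve(...)` and `client.batches.cancel(...)` covering the retrieve and cancel operations the article mentions; consult the linked documentation for the authoritative signatures.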