{"id":21826,"date":"2024-10-23T09:51:04","date_gmt":"2024-10-23T01:51:04","guid":{"rendered":"https:\/\/www.1ai.net\/?p=21826"},"modified":"2024-10-23T09:51:04","modified_gmt":"2024-10-23T01:51:04","slug":"jetbrains-%e4%b8%ba%e5%bc%80%e5%8f%91%e8%80%85%e6%89%93%e9%80%a0%e6%9c%80%e5%bc%ba-ai-%e5%8a%a9%e6%89%8b-mellum%ef%bc%9a%e4%b8%ba%e7%bc%96%e7%a8%8b%e8%80%8c%e7%94%9f%ef%bc%8c%e5%bb%b6%e8%bf%9f","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/21826.html","title":{"rendered":"JetBrains Builds Mellum, Its Most Powerful AI Assistant for Developers: Built for Programming, with Low Latency, Fast Completion, and High Accuracy"},"content":{"rendered":"<p><a href=\"https:\/\/www.1ai.net\/en\/tag\/jetbrains\" title=\"[See articles with [Jetbrains] label]\" target=\"_blank\" >JetBrains<\/a> published a blog post yesterday (October 22) announcing the design and launch of its new <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%a4%a7%e8%af%ad%e8%a8%80%e6%a8%a1%e5%9e%8b\" title=\"[View articles tagged with [large language model]]\" target=\"_blank\" >large language model<\/a>, <a href=\"https:\/\/www.1ai.net\/en\/tag\/mellum\" title=\"[See article with [Mellum] label]\" target=\"_blank\" >Mellum<\/a>, <strong>built to provide <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%bd%af%e4%bb%b6%e5%bc%80%e5%8f%91%e8%80%85\" title=\"_Other Organiser\" target=\"_blank\" >software developers<\/a> with faster, smarter, and more context-aware code completion.<\/strong><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-21827\" title=\"076aaa3cj00slsd42005wd000gg00gzp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/10\/076aaa3cj00slsd42005wd000gg00gzp.jpg\" alt=\"076aaa3cj00slsd42005wd000gg00gzp\" width=\"592\" height=\"611\" \/><\/p>\n<p>According to JetBrains, Mellum's biggest highlight compared with other large language models is that it is designed specifically for programming, combining low latency, strong performance, and comprehensive 
functionality that delivers relevant suggestions to developers in the shortest possible time.<\/p>\n<p>Mellum already supports popular programming languages such as Java, Kotlin, Python, Go, and PHP, and users can gain access to additional language support by joining the Early Access Program.<\/p>\n<p>JetBrains reports that Mellum's code completion latency is one-third of previous levels, dramatically speeding up task completion, and that roughly 40% of its completion suggestions are accepted, a reliable benchmark for the industry.<\/p>\n<p>Mellum not only excels in speed and accuracy; it is also deeply integrated with JetBrains IDEs, allowing it to provide contextual code suggestions tailored to the needs of your project.<\/p>\n<p>JetBrains is committed to protecting user privacy by sourcing Mellum's training data only from publicly available code under permissive licenses.<\/p>\n<p>According to publicly available information, JetBrains is a leading provider of intelligent software development tools. The company develops a wide range of integrated development environments (IDEs), including IntelliJ IDEA and PyCharm, as well as the Kotlin programming language, and its tools are trusted and used by more than 11.4 million professionals and 88 of the Fortune Global 100 companies worldwide.<\/p>","protected":false},"excerpt":{"rendered":"<p>JetBrains released a blog post yesterday (October 22) announcing the launch of a new large language model, Mellum, built to provide software developers with faster, smarter, and more context-aware code completion. According to JetBrains, Mellum's biggest highlight compared with other large language models is that it is designed specifically for programming, combining low latency, high performance, and comprehensive functionality that delivers relevant suggestions to developers in the shortest possible time. 
Mellum already supports popular programming languages such as Java, Kotlin, Python, Go, and PHP, and users can get support for more languages by joining the Early Access Program. JetBrains says Mellum's code completion latency is one-third of what it was before.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[597,4701,706,4702],"collection":[],"class_list":["post-21826","post","type-post","status-publish","format-standard","hentry","category-news","tag-jetbrains","tag-mellum","tag-706","tag-4702"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/21826","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=21826"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/21826\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=21826"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=21826"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=21826"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=21826"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}