{"id":1792,"date":"2023-12-08T11:28:28","date_gmt":"2023-12-08T03:28:28","guid":{"rendered":"https:\/\/www.1ai.net\/?p=1792"},"modified":"2023-12-08T11:28:28","modified_gmt":"2023-12-08T03:28:28","slug":"jetbrains-%e6%8e%a8%e5%87%ba%e6%96%b0-ai-%e7%bc%96%e7%a0%81%e5%8a%a9%e6%89%8b%ef%bc%8c%e7%bb%93%e5%90%88%e5%a4%9a%e4%b8%aa%e5%a4%a7%e5%9e%8b%e8%af%ad%e8%a8%80%e6%a8%a1%e5%9e%8b%e4%bb%a5%e5%ae%9e","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/1792.html","title":{"rendered":"JetBrains launches new AI coding assistant that combines multiple large language models to achieve vendor neutrality"},"content":{"rendered":"<p><a href=\"https:\/\/www.1ai.net\/en\/tag\/jetbrains\" title=\"[See articles with [Jetbrains] label]\" target=\"_blank\" >JetBrains<\/a>\u00a0Released on Wednesday local time<strong>A new <a href=\"https:\/\/www.1ai.net\/en\/tag\/ai\" title=\"[View articles tagged with [AI]]\" target=\"_blank\" >AI<\/a> Coding Assistant<\/strong>, the assistant is able to take information from the developer&#039;s integrated development environment (IDE) and feed it back to the AI software to provide coding suggestions, code refactoring and documentation support. The development tool company claims that<strong>Its AI assistant is<span class=\"spamTxt\">First<\/span>The first vendor-neutral product of its kind because it uses multiple large language models rather than relying on a single AI platform<\/strong>.<\/p>\n<p class=\"article-content__img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-1793\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2023\/12\/6383762764300484127844662.jpg\" alt=\"\" width=\"1000\" height=\"928\" \/><\/p>\n<p>\u201cWe call ourselves neutral because you\u2019re not locked into a large language model,\u201d said Jodie Burchell, data science developer advocate at JetBrains. 
\u201cIt also means you\u2019re not locked into a model that might become outdated and might not be able to keep up with the latest developments.\u201d<\/p>\n<p><strong>The foundation of the platform includes LLMs from OpenAI and Google, as well as some smaller models created by JetBrains<\/strong>. Different models are used for different tasks. Code generation, for example, might be straightforward and therefore easier to solve than questions like, \u201cWhy did this bug happen?\u201d That requires the more nuanced language understanding that larger models provide, Burchell said. <strong>JetBrains\u2019 AI service automatically decides which model gets which query. By using this service architecture, JetBrains can connect new models without requiring users to change AI providers.<\/strong><\/p>\n<p>Burchell said: \u201c<strong>If you have a simpler task, you can use a smaller [model], maybe one of our internal large language models. If you need to do something really complex, you\u2019re going to have to use GPT.<\/strong>\u201d<\/p>\n<p>The third-party LLM companies don\u2019t store the data, although the information is shared with them to create prompts for the underlying large language models, she said.<\/p>\n<p>Burchell said: \u201c<strong>Our agreement with the vendors is that they cannot store the data. 
They cannot use it for anything other than generating model outputs and reviewing them<\/strong>, so it\u2019s basically a security check.\u201d She also said: \u201cWe would never work with third-party vendors without these kinds of agreements, where they could use the data for anything other than that.\u201d<\/p>\n<p>Developers aren\u2019t particularly keen on handing over all their coding to AI assistants, a recent JetBrains survey found. <strong>Only 17% of respondents said they would be willing to delegate code creation to an AI assistant.<\/strong> However, 56% of respondents said they would let an AI assistant write code comments and documentation if given the chance.<\/p>\n<p>So far, the service is only available to paying customers because of the high cost of using large language models under the hood, Burchell said. The plan is to roll it out to other products, but for now, <strong>the AI Assistant is available as a subscription add-on for most IDEs, including IntelliJ IDEA, PyCharm, PhpStorm, and ReSharper<\/strong>. Exceptions are community-licensed IDEs and users with educational, startup, or open source licenses.<\/p>\n<p>One thing that JetBrains and other AI assistant vendors cannot currently offer is an on-premises version of such a solution, relying on internal models and running on a company\u2019s local servers. 
Some companies are interested in this approach because of its security advantages.<\/p>\n<p>\u201cObviously, that\u2019s going to be the most you can get in terms of security,\u201d Burchell said. \u201cIt\u2019s a trade-off, but it\u2019s a possibility that we are actively exploring.\u201d<\/p>\n<p>Serving local models is difficult simply because the AI runs on very large neural networks.<\/p>\n<p>\u201cWhat that means is that building a model is obviously one thing \u2014 [it] takes a long time \u2014 but actually running the model and getting answers in real time is a major engineering problem,\u201d she explained.<\/p>\n<p>\u201cThe model has to take in a lot of context, such as a piece of code that&#039;s run through the model to predict what comes next,\u201d she explained. \u201cIt then runs the new prediction along with the previous information through the model to predict the next part of the sequence. It does this continuously, in real time, while doing all the network and security checks.\u201d<\/p>\n<p>\u201cBehind the scenes, what needs to happen from an engineering perspective is pretty amazing,\u201d Burchell said. \u201cIt requires GPUs, and it requires using GPUs in a cost-effective way. So if you have enough compute power to solve the problem, it\u2019s not necessarily an insurmountable problem.\u201d <strong>But being able to do this in a way that is not overly expensive is difficult.<\/strong><\/p>\n<p>An interesting side effect of this process is distillation: as the model evolves, the neural network needed to run it gets smaller over time, because it can provide the same performance with fewer parameters, she said.<\/p>\n<p>Vendors are actively working to address the challenges of LLMs, but that&#039;s one reason JetBrains is working with third-party vendors: doing so helps keep costs down and enables JetBrains to offer an affordable product, Burchell said. 
The company is also exploring open source alternatives.<\/p>\n<p>Some people have expressed concerns about bundling AI systems into their IDEs, Burchell said. <strong>To accommodate these developers, JetBrains has introduced the ability to disable the AI features.<\/strong><\/p>","protected":false},"excerpt":{"rendered":"<p>JetBrains released a new AI coding assistant on Wednesday local time that takes information from a developer's integrated development environment (IDE) and feeds it back to the AI software to provide coding advice, code refactoring, and documentation support. The development tools company claims that its AI assistant is the first vendor-neutral product of its kind because it uses multiple large language models rather than relying on a single AI platform. \"We call ourselves vendor-neutral because you're not locked into one large language model. That also means you're not locked into a model that might be outdated, that might not be able to keep up with the latest developments,\" said Jodie Burchell, a data science developer advocate at 
JetBrains.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[411,598,597],"collection":[],"class_list":{"0":"post-1792","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"hentry","6":"category-news","7":"tag-ai","9":"tag-jetbrains"},"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/1792","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=1792"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/1792\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=1792"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=1792"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=1792"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=1792"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}