ChatGPT integrates with development apps on macOS
ChatGPT is introducing a new capability that lets the assistant interact with certain coding apps on macOS, improving the workflow for developers. The update, which relies on Apple's accessibility APIs, allows ChatGPT to read code in environments such as VS Code, Xcode, and Terminal, offering smoother, more contextual support.
Key points:
- ChatGPT now interacts with development apps on macOS such as Xcode and VS Code.
- The feature is built on Apple's accessibility APIs and lets ChatGPT read code without the need to copy and paste it.
- For now, ChatGPT does not write code directly in developers' apps; it provides suggestions and completions.
- OpenAI plans to expand the feature to other apps and operating systems, with a focus on writing tools and AI agents.
With its latest update, OpenAI takes a significant step forward in integrating ChatGPT with desktop applications, in particular those dedicated to programming on macOS. The new feature, known as "Work with Apps", allows ChatGPT to read code directly from development environments such as Xcode, Visual Studio Code, TextEdit, Terminal, and iTerm2, sparing developers the copy-paste routine. Once enabled, the feature lets ChatGPT automatically pick up the most recent lines of code the user is working on and use them as context to answer questions or suggest changes, with no manual interaction required. This is an improvement over previous versions, where developers had to paste their work into the chatbot interface to get suggestions or corrections.
Despite the progress, the feature stops short of letting ChatGPT write code directly in developers' apps. ChatGPT analyzes the user's requests and responds with useful suggestions, such as code completions or explanations, but manual intervention is still needed to integrate the solution into the project. This approach differs from competing solutions such as GitHub Copilot or Cursor, which can already write code directly inside text editors. According to OpenAI, however, the true innovation of "Work with Apps" lies in having a chatbot that understands and contextualizes the work being done in real time.
To enable this fluid interaction, OpenAI relies mainly on the macOS accessibility API, the same system VoiceOver uses to read and relay text from other applications. Although this method is reliable in most cases, some apps may need dedicated extensions before ChatGPT can access their content, as with VS Code, which requires the installation of a specific plugin. The technology also has limits, since it works exclusively by reading text: ChatGPT cannot understand visual content such as images or graphics, nor can it interpret the position of objects on the screen.
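For the curious, here is a minimal sketch of the kind of accessibility call involved: reading the focused text element of another app through macOS's AXUIElement interface. The function name and flow below are illustrative assumptions, not OpenAI's actual implementation, and the calling process must be granted the Accessibility permission.

```swift
import AppKit
import ApplicationServices

// Hypothetical sketch: reading the focused text element of another app
// through the macOS Accessibility API (AXUIElement), the mechanism the
// article describes. Requires the Accessibility permission in
// System Settings > Privacy & Security.
func focusedText(inAppWithPID pid: pid_t) -> String? {
    // Bail out early if the Accessibility permission has not been granted.
    guard AXIsProcessTrusted() else { return nil }

    let appElement = AXUIElementCreateApplication(pid)

    // Ask the target app for its currently focused UI element.
    var focusedRef: CFTypeRef?
    guard AXUIElementCopyAttributeValue(appElement,
                                        kAXFocusedUIElementAttribute as CFString,
                                        &focusedRef) == .success,
          let focused = focusedRef,
          CFGetTypeID(focused) == AXUIElementGetTypeID() else { return nil }
    let element = focused as! AXUIElement

    // Read the element's text value (works for text areas and fields).
    var valueRef: CFTypeRef?
    guard AXUIElementCopyAttributeValue(element,
                                        kAXValueAttribute as CFString,
                                        &valueRef) == .success else { return nil }
    return valueRef as? String
}

// Example: print the focused text of the frontmost application.
if let app = NSWorkspace.shared.frontmostApplication,
   let text = focusedText(inAppWithPID: app.processIdentifier) {
    print(text)
}
```

Note that this text-only channel is exactly why the feature cannot see images or on-screen layout: the accessibility tree exposes strings and element attributes, not rendered pixels.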
The feature initially focuses on coding environments, a sector that has seen rapid adoption of AI assistants, but OpenAI has already indicated that the scope of "Work with Apps" could extend to other categories of applications, in particular those dedicated to writing, opening the way to a further evolution of ChatGPT's assistance capabilities. In a demonstration for TechCrunch, an OpenAI employee showed how ChatGPT could help complete a simple Xcode project, suggesting the line of code needed to add Earth to a model of the solar system. Although the chatbot's contribution was useful, integrating the code into the development environment still requires the user's action, an aspect that distinguishes this feature from other AI solutions.
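Purely for illustration, a suggestion like the one the demo describes might resemble the following SceneKit snippet; the scene, node name, and values here are invented for the example, not taken from the demonstration.

```swift
import SceneKit

// Illustrative only: adding an "Earth" node to a hypothetical SceneKit
// solar-system scene. Radius and position are invented values.
let scene = SCNScene()
let earth = SCNNode(geometry: SCNSphere(radius: 1.0))
earth.name = "Earth"
earth.position = SCNVector3(10, 0, 0) // offset from a "sun" at the origin
scene.rootNode.addChildNode(earth)
```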
Another important aspect of this update is its orientation toward the creation of an "AI agent", that is, an AI capable of understanding and interacting with all the applications on a device. This goal, one of OpenAI's main focuses for the coming years, aims at systems that do not merely respond to commands but can navigate the user's entire ecosystem, taking into account information visible anywhere on the screen. Although that capability is still some way off, it represents a fundamental step in the evolution of digital assistants toward a more complex and dynamic understanding of the contexts in which they operate.
For now, "Work with Apps" is available to Plus and Team users, with a promise to extend it to the Enterprise and Education plans in the coming weeks. Compatibility with other platforms, such as Windows, is not yet clear, but the update is expected to reach that operating system in the future, especially given the imminent launch of OpenAI's collaboration with Apple.
This development marks another step in the integration of AI with everyday work tools, set to further simplify the workflow of programming professionals and, in the future, of those who work in writing as well.