OpenAI has introduced a feature for its ChatGPT platform called Deep Research, powered by the latest o3 reasoning model. This new tool is designed to tackle complex, multi-step research tasks quickly and efficiently, offering users the ability to complete what would typically take hours of work in just minutes.
According to OpenAI, the Deep Research agent can analyze and synthesize vast amounts of information in a fraction of the time it would take a human to do the same. Users can input complex queries, and the system returns results in the form of bullet points and tables, and can incorporate user-provided data. The model not only processes these queries but also cross-references multiple internet sources to verify the information, making the results more reliable.
This feature, currently available on ChatGPT's paid plans, is particularly useful for professionals in fields where in-depth research is often required. After a query is entered, the Deep Research agent can take anywhere from five to 30 minutes, depending on the complexity of the task, to deliver a comprehensive, well-structured output.
In addition to Deep Research, OpenAI has launched the o3-mini model, which is optimized for tasks like web browsing and data analysis. This model promises faster response times and enhanced performance compared to its predecessor, the o1-mini. While the o3-mini excels in reasoning, science, math, and coding, it does not support visual reasoning, meaning developers will still need to use the o1 model for image-related tasks.
For developers, OpenAI has made the o3-mini API available, offering three reasoning effort levels (low, medium, and high) so requests can be tuned for different use cases. Notably, Plus and Team users will now be able to send up to 150 messages per day with o3-mini, compared to just 50 with the older o1-mini.
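As a rough illustration of how a developer might select an effort level, here is a minimal sketch using the OpenAI Python SDK; it assumes an API key is set in the environment and that the effort level is passed via the reasoning_effort parameter, with the prompt text being purely illustrative.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask o3-mini a question, requesting medium reasoning effort
# ("low" trades depth for speed, "high" does the opposite).
response = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="medium",
    messages=[
        {
            "role": "user",
            "content": "Summarize the main trade-offs between SQL and NoSQL databases.",
        }
    ],
)

print(response.choices[0].message.content)

In practice, the effort level is the main knob: lower settings return answers faster and cost less, while higher settings let the model spend more reasoning steps on harder problems.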
With these new advancements, OpenAI continues to push the boundaries of AI-powered research and problem-solving, empowering users to achieve more in less time.