
OpenAI Brings Early Christmas Gifts for Developers at Mini Dev Day


On the ninth day of the ‘12 Days of OpenAI,’ the startup launched the o1 model in the API, upgraded with function calling, structured outputs, reasoning effort controls, developer messages and vision inputs. 

The o1 model builds on the foundation of o1-preview, offering superior reasoning abilities, reduced latency, and improved performance.
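Among the capabilities now exposed in the API is function calling. The snippet below is a minimal sketch using OpenAI's official Python SDK; the weather tool, its schema, and the model string "o1" are illustrative assumptions to verify against OpenAI's documentation.

```python
# Sketch: function calling with o1 via the openai Python SDK.
# Assumes OPENAI_API_KEY is set; the get_weather tool is a hypothetical example.
from openai import OpenAI

client = OpenAI()

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool name
            "description": "Return the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="o1",  # model name as announced; check the models list for the exact identifier
    messages=[{"role": "user", "content": "What's the weather in Bengaluru right now?"}],
    tools=tools,
)

# If the model decides to call the tool, the call arrives as structured arguments.
print(response.choices[0].message.tool_calls)
```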

The reasoning effort parameter lets users control the balance between response quality and speed based on task requirements. It optimises the computational effort the model uses, saving time and costs for simpler tasks while allocating more resources for more complex ones.
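As a rough illustration, this is how the reasoning effort control and the new developer messages might be passed through the Python SDK; the accepted values ("low", "medium", "high") and the "developer" role follow OpenAI's documentation, but treat the exact names as something to confirm.

```python
# Sketch: dialing reasoning effort down for a simple task.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1",
    reasoning_effort="low",  # "low" | "medium" | "high"; fewer reasoning tokens means faster, cheaper answers
    messages=[
        # o1 accepts "developer" messages in place of system prompts
        {"role": "developer", "content": "Answer in one short sentence."},
        {"role": "user", "content": "In which year did the first Moon landing take place?"},
    ],
)

print(response.choices[0].message.content)
```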

Developers can now leverage vision inputs within the API, allowing models to analyse and understand images, a feature expected to be useful in areas like science, manufacturing, or coding.
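A minimal sketch of an image request through the Python SDK follows; the image URL is a placeholder, and base64 data URLs work the same way.

```python
# Sketch: sending an image to o1 for analysis.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "List any defects visible on this circuit board."},
                {"type": "image_url", "image_url": {"url": "https://example.com/board.jpg"}},  # placeholder URL
            ],
        }
    ],
)

print(response.choices[0].message.content)
```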

The live demo from OpenAI showcased the model identifying errors in a scanned form, calculating corrections, and outputting structured results in JSON format. “Structured outputs are super useful when you want the model to automatically extract data and follow a specific format,” said Brian Zhang, engineer at OpenAI.
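A sketch in the same spirit as that demo is below, using the SDK's beta parse helper with a Pydantic schema. The FormReview and FormCorrection models are made-up examples, and the helper's availability depends on a recent version of the openai Python package.

```python
# Sketch: structured outputs, extracting form corrections as typed JSON.
from openai import OpenAI
from pydantic import BaseModel

class FormCorrection(BaseModel):
    field_name: str
    original_value: str
    corrected_value: str

class FormReview(BaseModel):
    corrections: list[FormCorrection]

client = OpenAI()

completion = client.beta.chat.completions.parse(
    model="o1",
    messages=[
        {"role": "developer", "content": "Find arithmetic errors in the form and correct them."},
        {"role": "user", "content": "Subtotal: 120, Tax (10%): 15, Total: 135"},
    ],
    response_format=FormReview,  # the SDK turns this into a JSON schema the model must follow
)

print(completion.choices[0].message.parsed)
```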

While o1 Pro for the API is not yet available, OpenAI assured developers it is actively under development.

Realtime API Updates 

OpenAI has also revamped its Realtime API to make AI-powered interactions faster and more affordable. The updates include WebRTC support for low-latency voice integration across platforms, a 60% price cut for GPT-4o audio usage, and new features that allow developers to manage concurrent responses, control input contexts, and extend session times to 30 minutes. 

WebRTC is an open standard that simplifies the development and scaling of real-time voice products across platforms, including browser-based apps, mobile clients, IoT devices, and server-to-server connections.
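Browser and device clients connecting over WebRTC typically use a short-lived key minted by the developer's own server. The sketch below assumes the POST /v1/realtime/sessions REST endpoint described in OpenAI's Realtime documentation; the model name and voice are placeholders to verify.

```python
# Sketch: server-side step that mints an ephemeral Realtime key for a WebRTC client.
import os
import requests

def create_realtime_session() -> dict:
    resp = requests.post(
        "https://api.openai.com/v1/realtime/sessions",
        headers={
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "Content-Type": "application/json",
        },
        json={
            "model": "gpt-4o-realtime-preview",  # placeholder; the GPT-4o mini realtime model can be used instead
            "voice": "verse",
        },
        timeout=30,
    )
    resp.raise_for_status()
    # The response includes a client_secret that the browser uses to open
    # its WebRTC peer connection directly with OpenAI.
    return resp.json()

if __name__ == "__main__":
    session = create_realtime_session()
    print(session.get("client_secret"))
```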

Moreover, OpenAI has introduced GPT-4o Mini to the Realtime API beta, providing ultra-low costs and optimised performance for simpler tasks.

Preference Fine-Tuning 

OpenAI also unveiled a new fine-tuning method, preference fine-tuning, to improve model alignment with user preferences. This feature, utilising direct preference optimisation, allows developers to tailor models based on user feedback, improving performance in specific use cases.

Developers can now fine-tune models based on pairwise preferences, enabling optimisation of subjective tasks such as creative writing and summarisation. Preference fine-tuning is currently available for GPT-4o, with pricing aligned to OpenAI’s supervised fine-tuning costs. 
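A rough sketch of what launching such a job might look like with the Python SDK is below. The JSONL field names and the "method" argument reflect OpenAI's preference fine-tuning documentation as understood here; treat them as assumptions to check, and the sample record is fabricated purely for illustration.

```python
# Sketch: starting a preference fine-tuning (DPO) job.
import json
from openai import OpenAI

client = OpenAI()

# Each record pairs one prompt with a preferred and a non-preferred completion.
record = {
    "input": {"messages": [{"role": "user", "content": "Summarise this report in two lines."}]},
    "preferred_output": [{"role": "assistant", "content": "A tight two-line summary..."}],
    "non_preferred_output": [{"role": "assistant", "content": "A rambling five-paragraph recap..."}],
}
with open("preferences.jsonl", "w") as f:
    f.write(json.dumps(record) + "\n")

training_file = client.files.create(file=open("preferences.jsonl", "rb"), purpose="fine-tune")

job = client.fine_tuning.jobs.create(
    model="gpt-4o-2024-08-06",  # placeholder snapshot name; confirm the eligible model version
    training_file=training_file.id,
    method={"type": "dpo", "dpo": {"hyperparameters": {"beta": 0.1}}},
)
print(job.id)
```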

To expand its developer ecosystem, OpenAI has released beta versions of official SDKs for Go and Java. This update simplifies integration for backend systems and complements OpenAI’s existing SDKs for Python, Node.js, and .NET, making the development process more efficient.

Bonus: OpenAI has released content from its recent Dev Day conferences, now available on YouTube. The startup also announced an Ask Me Anything (AMA) session, where developers can interact with the OpenAI team and ask questions on the Developer Forum.


