OpenAI releases lower-cost models to rival Meta, Mistral and DeepSeek

OpenAI on Tuesday released two open-weight models, its first since it launched GPT-2 in 2019.

The text-only models, called gpt-oss-120b and gpt-oss-20b, are designed to serve as lower-cost options that developers, researchers and companies can easily run and customize.

An artificial intelligence model is considered open weight when its parameters, the values learned during training that shape its outputs and predictions, are made public. Open-weight models can offer transparency and control, but they differ from open-source models, which also make the full source code available for users to use and modify.

Several other technology companies, including Meta, Microsoft, Mistral AI and Chinese startup DeepSeek, have also published open-weight models in recent years.

"It's been exciting to see an ecosystem develop, and we're excited to contribute, really push the limits and then see what happens from there," OpenAI President Greg Brockman told reporters during a briefing.

OpenAI partnered with Nvidia, Advanced Micro Devices, Cerebras and Groq to ensure the models run well on a variety of chips.

"OpenAI has shown the world what can be built on Nvidia AI, and now they're advancing innovation in open-source software," Nvidia CEO Jensen Huang said in a statement.

OpenAI's open-weight models were highly anticipated, as the company had repeatedly delayed their launch.

In a post on X in July, OpenAI CEO Sam Altman said the company needed more time to "run additional safety tests and review high-risk areas." In a separate post weeks earlier, Altman said the models would not be released in June.

OpenAI said Tuesday that it had carried out extensive safety training and testing on its open-weight models.

During pre-training, the company filtered out harmful chemical, biological, radiological and nuclear data, and it simulated how bad actors might try to fine-tune the models for malicious purposes. Through that testing, OpenAI said, the maliciously fine-tuned models failed to reach the "high capability" threshold under its Preparedness Framework, which sets out safeguards against severe harms.

OpenAI said it worked with three independent experts who provided feedback on the malicious fine-tuning assessment.

OpenAI said the gpt-oss-120b and gpt-oss-20b weights are freely available under the Apache 2.0 license on platforms such as Hugging Face and GitHub. The models will run on PCs through programs such as LM Studio and Ollama. Cloud providers including Amazon, Baseten and Microsoft are also making the models available.

Both models are designed to handle advanced reasoning, tool use and chain-of-thought processing, and to run everywhere from consumer hardware to the cloud to on-device applications.

Users can, for example, run gpt-oss-20b on a laptop and use it as a personal assistant that can search through files and help with writing.
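As a rough sketch of what running the smaller model locally might look like, using the Ollama tool mentioned above (the model tag "gpt-oss:20b" and the prompt below are illustrative assumptions, not details from OpenAI's announcement):

```shell
# Download the 20B-parameter model locally.
# NOTE: the tag "gpt-oss:20b" is an assumed name for how
# Ollama might package the model; check Ollama's model library.
ollama pull gpt-oss:20b

# Chat with it as a lightweight local assistant, with no
# cloud API required once the weights are downloaded.
ollama run gpt-oss:20b "Summarize the key points of the notes I paste next."
```

Because the weights are Apache 2.0 licensed, such a local setup can be customized or fine-tuned without a usage agreement with OpenAI.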

"We're excited to make this model, the result of billions of dollars of research, available to the world to get AI into the hands of the most people possible," Altman said in a statement Tuesday.

Jordan Novet from CNBC contributed to this report
