GPT-5 Training Delayed by OpenAI as Safety Concerns Are Addressed With GPT-4

Sam Altman, the CEO of OpenAI, has confirmed that the company is not currently training a GPT-5 language model. Speaking at a virtual session at MIT, Altman said OpenAI is instead focused on updating and improving its existing GPT-4 model. He added that GPT-4 had undergone extensive training before its public release and that the company is working to address any remaining safety concerns with the model.

Altman also addressed concerns raised in an open letter, reportedly signed by Elon Musk, calling for a pause on the development of large-scale language models. According to Altman, the letter lacked technical nuance and did not specify where such a pause should apply. He stressed the value of moving carefully and of increasing safety rigor as model capabilities become more serious.

While OpenAI says it intends to remain transparent, the company has not released details about GPT-4's architecture, design, or other internal workings. It attributed this decision to the competitive landscape and to concerns about the safety implications of large-scale language models like GPT-4.

Overall, Altman’s remarks suggest that OpenAI is taking a cautious approach to developing language models and is addressing safety issues as it goes. Although GPT-5 is not in training, the company remains committed to improving and updating its current models to increase their performance and safety.

ChatGPT Bots are Spamming Reddit

Moderators are struggling to keep up with the growing volume of ChatGPT-generated comments uploaded by bots to Reddit forums. These spam bots commonly advertise pornographic content, illegal drugs, and dubious goods. Reddit’s automated anti-spam systems have offered little help: by the time a bot account is flagged, it has usually already achieved its purpose. Although Reddit has reportedly banned several hundred bot accounts, their number keeps rising, publishing AI-generated content and overwhelming the automated systems meant to catch suspicious posts.