Give the rumors about GPT-4 a miss. Engage with OpenAI's ChatGPT for content and coding tasks instead.
When is GPT-4 coming out? While tech enthusiasts are occupied with rumors about GPT-4, OpenAI is still iterating on GPT-3. Earlier this week it released ChatGPT, a prototype chatbot that can handle complex conversations. It is part of the GPT-3.5 series, which was trained using a reinforcement-learning paradigm. OpenAI, presenting ChatGPT as an attempt to make AI systems safe and useful, says the model has been refined to perform better against its benchmarks and safety classifiers. “Many lessons from the deployment of earlier models like GPT-3 and Codex have informed the safety mitigations in place for this release, including substantial reductions in harmful and untruthful outputs achieved by the use of reinforcement learning from human feedback (RLHF)”, reads the blog. ChatGPT can engage with a range of tasks, including writing code, drafting scripts, and generating research papers.
GPT-3.5, according to OpenAI, learned its skills from text and code published up to 2021. It learned relationships between sentences, words, and parts of words from thousands of Wikipedia entries, social media posts, and news articles. But a report published in The Verge is more skeptical: “AI chatbot ChatGPT has been trained to provide conversational answers to users’ queries. It’s fantastically talented but still prone to producing cogent waffle and misinformation.”, says the report. One explanation is that the models derive their outputs purely from statistical regularities in the training data. They have not achieved a human-like understanding of the nuances and complexities of real-life circumstances; critics call them mere “stochastic parrots”.
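The “statistical regularities” critique can be illustrated with a toy next-word predictor. This is a deliberately simplistic sketch (a bigram counter over a made-up three-sentence corpus), not how GPT-3.5 actually works, but it shows how a model can emit fluent continuations with no notion of truth:

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus; a real model trains on billions of tokens.
corpus = [
    "the model predicts the next word",
    "the model predicts the next token",
    "the parrot repeats the next word",
]

# Count bigram frequencies: how often each word follows each other word.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def predict(prev_word):
    """Return the statistically most likely next word -- pure frequency,
    with no understanding of what the words mean."""
    counts = follows.get(prev_word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))
print(predict("predicts"))
```

The predictor always produces a plausible-looking continuation, whether or not it corresponds to anything true, which is the core of the “stochastic parrot” objection scaled down to a few lines.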
Explaining the development process, OpenAI says the bot was trained through a feedback mechanism, with human trainers rating an earlier version's responses to queries. The bot's web interface states that OpenAI's goal in putting the system online is to “get external feedback in order to improve our systems and make them safer.” Defending its product's capabilities, OpenAI says the system includes mechanisms to filter offensive or biased content. It will also decline questions about events after 2021, since its training data ends there. For now, programmers and coders are tinkering with OpenAI's large language model and, by most accounts, finding it worth coming back to.
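The rating step OpenAI describes, with human trainers scoring responses, is the reward-modeling part of RLHF. A minimal sketch of the idea, assuming toy pairwise preferences and a Bradley-Terry model (the response labels, data, and hyperparameters here are illustrative, not OpenAI's actual pipeline):

```python
import math

# Hypothetical trainer judgments: each pair is (preferred, rejected).
# In real RLHF these would be full model outputs, not letter labels.
comparisons = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B")]

# Bradley-Terry model: P(i preferred over j) = sigmoid(score_i - score_j).
scores = {r: 0.0 for pair in comparisons for r in pair}

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Fit scores by gradient ascent on the log-likelihood of the comparisons.
lr = 0.1
for _ in range(1000):
    for winner, loser in comparisons:
        p = sigmoid(scores[winner] - scores[loser])
        grad = 1.0 - p  # gradient of log P w.r.t. the score difference
        scores[winner] += lr * grad
        scores[loser] -= lr * grad

ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)  # responses ordered by learned reward
```

The learned scores act as a reward signal; in the full RLHF recipe, the language model is then fine-tuned with reinforcement learning to produce responses that this reward model rates highly.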
The post Are the Rumors About GPT 4 Fake? OpenAI Seems Confused appeared first on Analytics Insight.