
What Is GPT-3 and What Are Its Uses?

OpenAI recently released GPT-3, the largest natural language model to date, built along the same lines as its predecessors. You may be wondering: what is GPT-3? It differs from earlier versions chiefly in scale: it is roughly ten times larger than the previous largest model and was trained on a much bigger dataset.

A quantitative difference of this magnitude allows GPT-3 to achieve qualitative improvements over its predecessors. Unlike earlier versions, the trained GPT-3 model can perform many tasks without being specifically trained for them. This has met with much acclaim in both the technology world and the press. Several studies address its myriad uses as well as some key limitations; although GPT-3 marks significant progress, those limitations will be discussed later in this article.

Where did the GPT-3 story begin?

On May 28, OpenAI published a paper entitled “Language Models Are Few-Shot Learners,” introducing GPT-3 as the largest language model ever built. The 73-page paper demonstrates how GPT-3 builds on state-of-the-art advances in language modeling, achieving promising, competitive results on natural language processing benchmarks.

GPT-3 demonstrates the performance gains that come from scaling up both the model and the training data, which mirrors the broader trend in NLP. But the paper’s main message is less about benchmark scores than about a discovery: thanks to its scale, GPT-3 can perform NLP tasks it has never been explicitly trained on, solving them after seeing just one or a few examples. This is in sharp contrast to the standard approach today, in which a model must be trained on large amounts of task-specific data before it can handle a new task.

This image shows GPT-3’s few-shot approach (left) and the traditional fine-tuning approach (right), in which the model’s internal representations are updated for each new task through gradient updates.
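To make this concrete, a few-shot prompt simply places a task description and a handful of worked examples in the model’s input and lets it complete the pattern. The example below is adapted from the paper:

Translate English to French:
sea otter => loutre de mer
peppermint => menthe poivrée
cheese =>

The model is expected to continue with “fromage.” No weights change; the “learning” happens entirely within a single forward pass over this context.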

Last year, OpenAI released GPT-2, which could produce long, coherent texts that were difficult to distinguish from human writing. OpenAI states that GPT-3 reuses the GPT-2 model and architecture; the difference lies in the size of the network and of its training data, both of which are much larger than in its predecessors. GPT-3 has 175 billion parameters compared to GPT-2’s 1.5 billion, and while GPT-2 was trained on over 40 gigabytes of text, GPT-3 was trained on about 570 gigabytes. Scaling up is not in itself an innovation, however. GPT-3 matters because of the few-shot learning abilities it demonstrates across a range of natural language tasks, with examples below.
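To get a feel for what 175 billion parameters means in practice, here is a rough back-of-the-envelope estimate in Python (an illustration of scale, not a figure from the paper):

params = 175e9                 # 175 billion parameters
print(params * 2 / 1e9, "GB")  # ~350 GB to store the weights as 16-bit floats
print(params * 4 / 1e9, "GB")  # ~700 GB as 32-bit floats

The weights alone are far too large for any single GPU, which helps explain why GPT-3 is offered as a hosted service rather than a downloadable model.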


Examples of GPT-3 answering questions

Following the paper’s release, OpenAI announced on June 11 that GPT-3 would be available to third-party developers through its API, the company’s first commercial product, currently in beta testing. Access to GPT-3 is by invitation only, and pricing has not yet been announced. Once the API launched, the striking demonstrations of GPT-3’s potential, from writing short articles and blog posts to producing creative fiction, sparked much debate within the AI community and beyond. One of the best examples of this potential is generating working JavaScript from a simple description in English.
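For developers who do receive access, a request to the API takes only a few lines. The sketch below is a minimal illustration assuming the beta-era openai Python package; the engine name, prompt, and parameters are illustrative examples, not a definitive recipe:

import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"  # placeholder; access is currently invitation-only

# Ask the davinci engine (GPT-3) to complete a prompt.
response = openai.Completion.create(
    engine="davinci",
    prompt="Write a short opening paragraph for a blog post about electric cars.",
    max_tokens=100,
    temperature=0.7,  # some randomness suits creative text
)
print(response.choices[0].text)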

Using GPT-3, I built a layout generator that produces JSX code for you when you simply describe the layout you want.

Here you will find a GPT-3 demo that can create a mock website resembling the original, given a URL and a description.

After hours of thinking about how this works, I tested a great GPT-3 demo. I was amazed by the coherence and subtlety of its output. Let’s try some basic arithmetic with it.
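An arithmetic probe takes the same few-shot form: show the question-and-answer pattern, then leave the final answer blank. The format below follows the paper’s arithmetic tests:

Q: What is 48 plus 76?
A: 124
Q: What is 97 minus 39?
A:

A correct completion is “58.” The paper reports that the largest GPT-3 model handles two-digit addition and subtraction almost perfectly, with accuracy falling off as the numbers get longer.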

After receiving academic access, I wondered what GPT-3 knows beyond language itself. With this in mind, I came up with a new demo about object affordances: what can be done with a given object?

GPT-3 Feedback

The media, experts in the field, and the broader technology community hold differing views on GPT-3’s capabilities and its deployment at larger scale. Reactions range from optimism about greater human productivity to fear of job losses, along with careful assessments of the technology’s capabilities and limitations.

What is the media feedback on GPT-3?

Media coverage of GPT-3 has increased since the demos were released.

What is the feedback of artificial intelligence experts about GPT-3?

In contrast to the media coverage, the feedback from machine learning and natural language processing experts was driven more by curiosity, focusing on how GPT-3 can be used and on finding out how capable it really is of understanding human language.

Machine learning researcher Delip Rao responded by posting that the buzz online about emerging technologies can be misleading. GPT-3 and its successors are bringing few-shot learning, learning from only a handful of examples, from the research stage to the operational stage. But every technological leap generates a huge volume of conversation, and the debates on social media can distort our view of these technologies’ true capabilities.


What is the feedback of the cutting-edge technology industry about GPT-3?

Commentators in the technology industry took different approaches, and a number of them focused on what GPT-3 means for programming with artificial intelligence.

“Simple coding contexts are hard to come by,” said Brett Goldstein, entrepreneur and former Google product manager, in response to how GPT-3 can write code from human-given specifications. The same may hold for design; many companies will want to use GPT-3 rather than hire expensive machine learning engineers to train their own, less powerful models. “Data scientists, customer support agents, legal assistants, and many other occupations are at great risk.”

OpenAI CEO Sam Altman responded: “Although we have made great strides in artificial intelligence with this technology, there are still many areas of AI that humans have not yet mastered.

“There is a lot of hype surrounding GPT-3, and that is interesting. But it has its weaknesses, and sometimes it makes silly mistakes. Artificial intelligence is going to change the world, but GPT-3 is just an early glimpse. There is still a lot to figure out.”

In short, many experts shared interesting examples probing GPT-3’s handling of natural language. The media and tech communities both congratulated OpenAI on its progress, while warning that it could cause major technological upheaval in the future. OpenAI’s CEO, for his part, agrees with the researchers and critics: GPT-3 represents a huge leap forward in artificial intelligence, but it does not truly understand language, and there are significant problems with using the model in the real world, such as bias and the cost of training.

What are the limitations of GPT-3?

The few-shot learning capabilities GPT-3 has demonstrated mark real progress in artificial intelligence, and it is remarkable that such progress was achieved simply by scaling up existing systems.

But the extraordinary results it has shown have also caused a great deal of hype, and a few remarks are needed to temper it. In general, the breadth of GPT-3’s capabilities has raised fears that it will jeopardize related jobs, captured in claims like “AI may make coders, and even entire industries, obsolete.” Although GPT-3 represents a significant advance in language models, it lacks real intelligence and cannot fully replace human workers.

After all, GPT-3 is similar to its predecessors, only more advanced. Although scaling up training has yielded excellent results, GPT-3 still has limitations.

What is the impact of GPT-3 on future jobs?

While technologies like GPT-3 could change the nature of many jobs in the future, that does not necessarily mean those jobs will disappear. The adoption of new technologies is usually a long, slow process, and many AI technologies will assist humans along the way rather than replace them. Assistance is the more likely outcome, because AI models need human oversight to catch their potential flaws. In web development, for example, someone with technical knowledge and expertise still has to review and correct the code GPT-3 produces.

Computer vision, which made many of its leaps before NLP did, raised similar concerns about AI taking over jobs in healthcare. But instead of taking jobs and replacing doctors such as radiologists, AI has instead made their workflows easier. Stanford radiologist Curtis Langlotz says artificial intelligence will not replace radiologists, but radiologists who use artificial intelligence will replace those who do not. The same may be true of GPT-3; in the end, it is just a model, and models are not perfect.

Some believe GPT-3 is a big step toward artificial general intelligence, intelligence similar to what humans have. While it does show progress, there is an important point to keep in mind amid the excitement. Emily Bender, a computational linguist at the University of Washington, and Alexander Koller, of Saarland University, recently proposed the octopus test. In this thought experiment, two people stranded on separate remote islands communicate via a cable on the ocean floor.

An octopus listens in on their conversations; it acts as a stand-in for language models such as GPT-3. If the octopus can impersonate one of the two people without being detected, it passes the test. But the two researchers also gave examples of situations the octopus could not handle, such as giving instructions for building equipment or for self-defense. This is because such models deal only with text and know nothing about the real world, which plays a significant role in understanding language.

Improving GPT-3 and its successors is like training the octopus to do the best it can. As models like GPT-3 grow more sophisticated, they will show new strengths and weaknesses, but they remain models trained only on written text, however big the data. Real understanding comes from the interaction of language, minds, and the real world, something AIs like GPT-3 cannot experience.
