What Is Machine Learning and What Are Its Implications?

Machine Learning

Arthur Samuel, an American pioneer in computer games and artificial intelligence, coined the term “machine learning” in 1959 while working at IBM. Growing out of the study of pattern recognition and computational learning theory, machine learning explores the study and construction of algorithms that can learn from and make predictions on data. Rather than following strictly static program instructions, such algorithms make data-driven predictions or decisions by building a model from sample inputs.

Machine learning is employed in a range of computing tasks where designing and programming explicit algorithms with good performance is difficult or infeasible. Example applications include email filtering, detection of network intruders or malicious insiders working toward a data breach, optical character recognition (OCR), learning to rank, and computer vision.

Machine learning is a field of computer science that gives computers the ability to learn without being explicitly programmed.

Machine learning is closely related to (and often overlaps with) computational statistics, which also focuses on making predictions using computers, and it has strong ties to mathematical optimization, which delivers methods, theory, and application domains to the field. Machine learning is sometimes conflated with data mining, a subfield that focuses more on exploratory data analysis and is known as unsupervised learning. Machine learning can also be unsupervised and used to learn and establish baseline behavioral profiles for various entities, and then to find meaningful anomalies.

Within data analytics, machine learning is a method used to devise complex models and algorithms that lend themselves to prediction; in industry, this is known as predictive analytics. These analytical models allow researchers, data scientists, engineers, and analysts to “produce reliable, repeatable decisions and results” and to uncover “hidden insights” through learning from historical relationships and trends in the data.

According to Gartner’s 2016 hype cycle, machine learning was then at the “Peak of Inflated Expectations” stage. Effective machine learning is difficult because modeling is hard and, often, not enough training data is available; as a result, machine learning programs frequently fail to deliver.

A synopsis of Machine Learning

Tom M. Mitchell provided a widely quoted, more formal definition of the algorithms studied in the field of machine learning: “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E.” For example, a spam filter’s task T is classifying emails, its performance measure P is the fraction of emails classified correctly, and its experience E is a corpus of labeled emails. This definition of the tasks with which machine learning is concerned is fundamentally operational rather than cognitive. It follows Alan Turing’s proposal in his paper “Computing Machinery and Intelligence,” in which the question “Can machines think?” is replaced with the question “Can machines do what we (as thinking entities) can do?” Turing’s paper examines the various characteristics that a thinking machine could have and the consequences of building such a machine.
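
To make the definition concrete, here is a minimal sketch, assuming NumPy and scikit-learn are available; the two-cluster data and all names below are hypothetical illustrations, not from the original article. Accuracy (P) on a held-out set tends to improve as the labeled training set (E) for the classification task (T) grows:

    # Toy illustration of Mitchell's T/P/E definition (synthetic, hypothetical data).
    # Task T: binary classification; performance P: held-out accuracy;
    # experience E: a growing set of labeled training samples.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_data(n):
        # Two Gaussian clusters standing in for the two classes.
        X = np.concatenate([rng.normal(0.0, 1.0, (n, 2)), rng.normal(2.0, 1.0, (n, 2))])
        y = np.concatenate([np.zeros(n), np.ones(n)])
        return X, y

    X_test, y_test = make_data(500)
    for n in (5, 50, 500):
        X_train, y_train = make_data(n)
        model = LogisticRegression().fit(X_train, y_train)  # learn from experience E
        print(f"{2 * n:4d} examples -> accuracy P = {model.score(X_test, y_test):.2f}")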

Types and tasks of machine learning

Machine learning tasks are usually divided into several broad categories, depending on the nature of the learning “signal” or “feedback” available to the learning system (a minimal code sketch contrasting two of these settings follows the list):
  • Supervised learning: The computer is presented with example inputs and their desired outputs, given by a “teacher,” and the goal is to learn a general rule that maps inputs to outputs. In some cases, the input signal may be only partially available or restricted to special feedback.
  • Semi-supervised learning: The computer is given only an incomplete training signal: a training set in which some (often many) of the target outputs are missing.
  • Active learning: The computer can obtain training labels only for a limited set of instances (based on a budget) and must optimize its choice of the objects for which to acquire labels. When used interactively, these instances can be presented to a human user for labeling.
  • Reinforcement learning: Training data (in the form of rewards and punishments) is given only as feedback to the program’s actions in a dynamic environment, such as driving a vehicle or playing a game against an opponent.
  • Unsupervised learning: No labels are given to the learning algorithm, which must find structure in its input on its own. Unsupervised learning can be a goal in itself (discovering hidden patterns in data) or a means toward an end (feature learning).
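
As a minimal sketch of the first and last categories above, assuming scikit-learn is installed (the dataset and model choices are illustrative, not from the original article):

    # Supervised vs. unsupervised learning on the classic iris dataset.
    from sklearn.cluster import KMeans
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression

    X, y = load_iris(return_X_y=True)

    # Supervised: a "teacher" supplies the labels y, and the model learns
    # a general rule mapping inputs to outputs.
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print("supervised predictions:", clf.predict(X[:5]))

    # Unsupervised: no labels are given; the algorithm must find structure
    # (here, three clusters) in the inputs on its own.
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
    print("unsupervised cluster ids:", km.labels_[:5])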

History of machine learning and its relation to other fields

As a scientific field, machine learning grew out of the quest for artificial intelligence. In the early days of AI as an academic discipline, some researchers were interested in having machines learn from data. They attempted to approach the problem with various symbolic methods, as well as with what were then termed “neural networks”; these were mostly perceptrons and other models that later turned out to be reinventions of the generalized linear models of statistics. Probabilistic reasoning was also employed, especially in automated medical diagnosis.

However, a growing emphasis on logical, knowledge-based approaches caused a rift between AI and machine learning. Probabilistic systems were plagued by theoretical and practical problems of data acquisition and representation. By 1980, expert systems had come to dominate AI, and statistics was out of favor.

Work on symbolic, knowledge-based learning did continue within AI, leading to inductive logic programming, but the more statistical line of research now lay outside the field of AI proper, in pattern recognition and information retrieval. Neural network research had likewise been abandoned by AI and computer science (CS) around the same time. This line, too, was continued outside the AI/CS field as “connectionism” by researchers from other disciplines, including Hopfield, Rumelhart, and Hinton. Their main success came in the mid-1980s with the reinvention of backpropagation.

Machine learning, reorganized as a separate discipline, began to flourish in the 1990s.

The field shifted its goal from achieving artificial intelligence to tackling solvable problems of a practical nature. It moved its focus away from the symbolic approaches it had inherited from AI and toward methods and models borrowed from statistics and probability theory. It also benefited from the growing availability of digitized information and the possibility of distributing it via the Internet.

Machine learning and data mining often employ the same methods and overlap significantly. But while machine learning focuses on prediction based on known properties learned from the training data, data mining focuses on discovering previously unknown properties in the data (this is the analysis step of knowledge discovery in databases).

Data mining uses many machine learning methods, but with different goals; machine learning, in turn, employs data mining methods as “unsupervised learning” or as a preprocessing step to improve learner accuracy. Much of the confusion between the two research communities (which often have separate conferences and journals, ECML PKDD being a major exception) comes from their basic assumptions: in machine learning, performance is usually evaluated by the ability to reproduce known knowledge, while in knowledge discovery and data mining (KDD) the key task is discovering previously unknown knowledge. Evaluated against known knowledge, an uninformed (unsupervised) method is easily outperformed by supervised methods, while in a typical KDD task, supervised methods cannot be used because no training data are available.

The connection between machine learning and statistics

Machine learning and statistics are closely related fields. According to Michael I. Jordan, the ideas of machine learning, from methodological principles to theoretical tools, have a long prehistory in statistics. He has also suggested the term data science as a name for the overall field.

Leo Breiman distinguished two statistical modeling paradigms: the data model and the algorithmic model, where “algorithmic model” means more or less machine learning algorithms such as random forests.

Some statisticians have adopted methods from machine learning, leading to a combined field they call statistical learning.

Machine learning theory

A core objective of a learning machine is to generalize from its experience. Generalization in this context refers to the ability of a learning machine to perform accurately on new, unseen examples and tasks after having experienced a training data set. The training examples come from some generally unknown probability distribution (considered representative of the space of occurrences), and the learner has to build a general model of this space that enables it to produce sufficiently accurate predictions in new cases.

The computational analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory. Because training sets are finite and the future is uncertain, learning theory usually does not guarantee the performance of algorithms; instead, probabilistic bounds on performance are quite common. The bias-variance decomposition is one way to quantify generalization error.
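
For reference, the standard form of that decomposition for squared-error loss (a textbook identity, stated here as an aid and not spelled out in the original) is, in LaTeX notation:

    \mathbb{E}\big[(y - \hat{f}(x))^2\big]
        = \big(\operatorname{Bias}[\hat{f}(x)]\big)^2
        + \operatorname{Var}[\hat{f}(x)]
        + \sigma^2,
    \qquad
    \operatorname{Bias}[\hat{f}(x)] = \mathbb{E}[\hat{f}(x)] - f(x),

where f is the true function, \hat{f} is the learned model, and \sigma^2 is the irreducible noise in the observations.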

Last word

For the best generalization performance, the complexity of the hypothesis should match the complexity of the function underlying the data. If the hypothesis is less complex than that function, the model underfits the data. If the complexity of the model is then increased in response, the training error decreases. But if the hypothesis is too complex, the model is prone to overfitting, and generalization suffers. The sketch below illustrates both failure modes.
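
The following minimal sketch assumes only NumPy; the true function, noise level, and polynomial degrees are illustrative choices, not from the original article. A degree-1 hypothesis underfits, a moderate degree fits well, and a high degree drives training error down while test error rises:

    # Under- and overfitting with polynomial hypotheses of increasing complexity.
    import numpy as np

    rng = np.random.default_rng(1)
    true_f = lambda x: np.sin(2 * np.pi * x)  # the function underlying the data
    x_train = np.linspace(0.0, 1.0, 15)
    y_train = true_f(x_train) + rng.normal(0.0, 0.2, x_train.size)  # noisy samples
    x_test = np.linspace(0.0, 1.0, 200)

    for degree in (1, 3, 12):
        coeffs = np.polyfit(x_train, y_train, degree)  # hypothesis of this complexity
        train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_mse = np.mean((np.polyval(coeffs, x_test) - true_f(x_test)) ** 2)
        print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")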

In addition to performance bounds, computational learning theorists study the time complexity and feasibility of learning. In computational learning theory, a computation is considered feasible if it can be done in polynomial time. There are two kinds of time-complexity results: positive results show that a certain class of functions can be learned in polynomial time, while negative results show that certain classes cannot be learned in polynomial time.

Source: https://mediasoft.ir/%db%8c%d8%a7%d8%af%da%af%db%8c%d8%b1%db%8c-%d9%85%d8%a7%d8%b4%db%8c%d9%86-machine-learning-%da%86%db%8c%d8%b3%d8%aa-%da%86%d9%87-%d9%85%d9%81%d8%a7%d9%87%db%8c%d9%85%db%8c-%d8%af/