Introducing the Kinds of Machine Learning Methods

In this article, we will examine the main kinds of machine learning methods, so let's learn about each of them.

  • Decision tree learning
  • Association rule learning
  • Artificial Neural Networks
  • Deep learning
  • Inductive logic programming
  • Clustering
  • Bayesian networks
  • Representation learning
  • Sparse dictionary learning
  • Rule-based machine learning

Decision tree learning

The decision tree learning method uses a decision tree as a predictive model that maps observations about an item to conclusions about that item's target value.
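
As a minimal sketch of this idea, the snippet below fits a small decision tree with scikit-learn; the toy data, feature names, and threshold choices are invented purely for illustration and are not part of the original article.

```python
# Minimal sketch: fitting a decision tree classifier with scikit-learn.
# The toy data and feature names are made up for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text

# Observations about items (e.g. [age, income]) and a target value for each.
X = [[25, 30_000], [40, 60_000], [35, 45_000], [50, 80_000]]
y = [0, 1, 0, 1]  # 0 = "will not buy", 1 = "will buy"

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["age", "income"]))  # the learned rules
print(tree.predict([[30, 70_000]]))  # conclusion for a new observation
```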

Association rule learning

Association rule learning is a method for discovering interesting relationships between variables in large databases.
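
To make "interesting relationships" concrete, here is a small, assumption-laden sketch in plain Python that scores one candidate rule ({bread} -> {butter}) by its support and confidence over a handful of made-up transactions; real association rule miners such as Apriori search over many such rules automatically.

```python
# Illustrative sketch: measuring one candidate association rule
# {bread} -> {butter} by its support and confidence over toy transactions.
transactions = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"bread", "jam"},
    {"milk", "butter"},
]

antecedent, consequent = {"bread"}, {"butter"}
n = len(transactions)
n_antecedent = sum(antecedent <= t for t in transactions)
n_both = sum((antecedent | consequent) <= t for t in transactions)

support = n_both / n                 # how often the rule applies at all
confidence = n_both / n_antecedent   # how often the consequent follows
print(f"support={support:.2f}, confidence={confidence:.2f}")
```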

Artificial Neural Networks

An artificial neural network (ANN) algorithm, commonly referred to as a "neural network" (NN), is an algorithm inspired by the structure and functional aspects of biological neural networks. Computations are structured in groups of interconnected artificial neurons that process information collectively. Modern neural networks are tools for nonlinear modeling of statistical data. These networks are commonly used to model complex relationships between inputs and outputs, to find patterns in data, or to capture the statistical structure of an unknown joint probability distribution over observed variables.
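
The sketch below shows the core mechanic in NumPy: one forward pass through a tiny network with a single hidden layer. The weights are random and untrained; it only illustrates how an input flows through layers of artificial neurons with a nonlinear activation.

```python
# Minimal sketch (NumPy): one forward pass through a tiny feed-forward
# neural network with one hidden layer. Weights are random, purely to
# show how inputs flow through groups of artificial neurons.
import numpy as np

rng = np.random.default_rng(0)
x = np.array([0.5, -1.2, 3.0])                  # one input vector (3 features)

W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # hidden layer: 4 neurons
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)   # output layer: 1 neuron

hidden = np.tanh(W1 @ x + b1)                   # nonlinear activation
output = W2 @ hidden + b2                       # the network's prediction
print(output)
```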

Deep learning

In recent years, falling hardware costs and the availability of GPUs for personal use have helped popularize deep learning, which uses several hidden layers in an artificial neural network. This approach tries to model the way the human brain processes light and sound into sight and hearing. Some successful applications of deep learning include machine vision and speech recognition.
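
As a rough sketch of the "several hidden layers" idea, the example below stacks three hidden layers with scikit-learn's MLPClassifier on the built-in digits dataset. Serious deep learning work typically uses GPU frameworks such as TensorFlow or PyTorch; this only illustrates the layered structure.

```python
# Minimal sketch: a network with several hidden layers via scikit-learn's
# MLPClassifier, trained on the small built-in digits dataset.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(128, 64, 32),  # three hidden layers
                      max_iter=300, random_state=0).fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```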

Inductive logic programming

Inductive logic programming (ILP) is an approach to rule learning that uses logic programming as a uniform representation of the input examples, the background knowledge, and the hypotheses. Given an encoding of the known background knowledge and a set of examples represented as a logical database of facts, an ILP system derives a logic program that entails all the positive examples and none of the negative examples. Inductive programming is a related field that considers any programming language for representing hypotheses (not only logic programming), such as functional programs.
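
The toy sketch below conveys the flavor of this in plain Python rather than a real logic programming system: given positive and negative examples plus background facts, it keeps the candidate hypothesis that covers every positive example and none of the negatives. The facts, rules, and predicate names are invented stand-ins, not output of an actual ILP system.

```python
# Toy sketch of the ILP idea: accept the candidate rule that entails all
# positive examples of parent(X, Y) and none of the negative ones.
positives = [("alice", "bob"), ("bob", "carol")]      # parent(X, Y) holds
negatives = [("carol", "alice"), ("bob", "bob")]      # parent(X, Y) does not hold

facts_older = {("alice", "bob"), ("bob", "carol"), ("carol", "dave")}  # background

candidates = {
    "parent(X,Y) :- older(X,Y)": lambda x, y: (x, y) in facts_older,
    "parent(X,Y) :- X == Y":     lambda x, y: x == y,
}

for name, rule in candidates.items():
    covers_pos = all(rule(*e) for e in positives)
    covers_neg = any(rule(*e) for e in negatives)
    if covers_pos and not covers_neg:
        print("accepted hypothesis:", name)
```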

Clustering

Cluster analysis is the assignment of a set of observations into subsets (called clusters) so that observations within the same cluster are similar according to one or more predetermined criteria, while observations drawn from different clusters are dissimilar. Different clustering techniques make different assumptions about the structure of the data, often defined by a similarity metric and evaluated, for example, by internal compactness (similarity between members of the same cluster) and separation between different clusters. Other methods are based on estimated density and graph connectivity. Clustering is an unsupervised learning method and a standard technique for statistical data analysis.
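
Here is a minimal clustering sketch using k-means from scikit-learn; the 2-D points are invented so that two groups are easy to see, and no labels are used, which is what makes the procedure unsupervised.

```python
# Minimal sketch: k-means clustering with scikit-learn on toy 2-D points.
import numpy as np
from sklearn.cluster import KMeans

points = np.array([[1.0, 1.1], [0.9, 1.0], [1.2, 0.8],    # one dense group
                   [8.0, 8.2], [7.9, 8.1], [8.3, 7.9]])   # another group

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print("cluster labels: ", kmeans.labels_)
print("cluster centres:", kmeans.cluster_centers_)
```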

Bayesian networks

A Bayesian network, belief network, or directed acyclic graphical model is a probabilistic graphical model that represents a set of random variables and their conditional independencies via a directed acyclic graph (DAG). For example, a Bayesian network can represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can compute the probabilities of various diseases. Efficient algorithms exist that perform inference and learning.
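
The hand-worked sketch below illustrates the disease/symptom example as the smallest possible network, Disease -> Symptom, with made-up probabilities, and computes P(disease | symptom) by Bayes' rule. Dedicated Bayesian network libraries automate this kind of inference over much larger DAGs.

```python
# Two-node Bayesian network sketch: Disease -> Symptom, with invented numbers.
p_disease = 0.01                  # prior P(D)
p_symptom_given_disease = 0.9     # P(S | D)
p_symptom_given_healthy = 0.05    # P(S | not D)

# Total probability of observing the symptom, then Bayes' rule.
p_symptom = (p_symptom_given_disease * p_disease
             + p_symptom_given_healthy * (1 - p_disease))
p_disease_given_symptom = p_symptom_given_disease * p_disease / p_symptom
print(f"P(disease | symptom) = {p_disease_given_symptom:.3f}")
```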

Representation learning

Some learning algorithms, mainly unsupervised ones, aim to discover better representations of the inputs provided during training. Classic examples in this area are principal component analysis and cluster analysis. Representation learning algorithms often try to preserve the information in the input while transforming it into a more useful form, often as a preprocessing step before classification or prediction. This allows the reconstruction of inputs coming from an unknown data-generating distribution, while not necessarily being faithful to configurations that are implausible under that distribution.
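
The sketch below shows principal component analysis, one of the classic examples named above, with scikit-learn on random data: the transformed samples are a lower-dimensional representation that can be approximately inverted back to the original space.

```python
# Minimal sketch: PCA as a representation-learning example.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))          # 100 samples, 10 original features

pca = PCA(n_components=3).fit(X)
codes = pca.transform(X)                # new 3-dimensional representation
X_back = pca.inverse_transform(codes)   # approximate reconstruction
print(codes.shape, "reconstruction MSE:", np.mean((X - X_back) ** 2))
```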

Manifold learning algorithms try to do the same under the constraint that the learned representation is low-dimensional. Sparse coding algorithms try to do the same under the constraint that the learned representation is sparse (that is, it has many zeros).

Multilinear subspace learning algorithms learn low-dimensional representations directly from tensor representations of multidimensional data, without reshaping them into (high-dimensional) vectors. Deep learning algorithms discover multiple levels of representation, or a hierarchy of features, in which higher-level, more abstract features are defined in terms of (or generated from) lower-level features. It has been argued that an intelligent machine is one that learns a representation that disentangles the underlying factors of variation that explain the observed data.

Sparse dictionary learning

In this method, a datum is represented as a linear combination of basis functions, and the coefficients of this combination are assumed to be sparse. Suppose x is a d-dimensional datum and D is a d × n matrix, where each column of D represents a basis function and r is the coefficient vector that represents x using D. Mathematically, sparse dictionary learning means solving x ≈ Dr, where r is sparse. In general, n is assumed to be larger than d to allow freedom for a sparse representation.
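
The sketch below illustrates x ≈ Dr with scikit-learn's DictionaryLearning on random data: the dictionary has n components (more than the data dimension d), and the fitted codes r are encouraged to contain mostly zeros. The data and parameter values here are arbitrary and only demonstrate the shapes involved.

```python
# Illustrative sketch of sparse dictionary learning: x ≈ D r with sparse r.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))                    # 200 samples, d = 20

dico = DictionaryLearning(n_components=30,        # n > d: overcomplete dictionary
                          transform_algorithm="lasso_lars",
                          transform_alpha=1.0,
                          max_iter=100,
                          random_state=0)
R = dico.fit_transform(X)                         # sparse codes r, one row per sample
print("non-zero coefficients per sample:", np.count_nonzero(R, axis=1).mean())
```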

Learning a dictionary together with sparse representations is strongly NP-hard and also difficult to approximate. A popular heuristic method for sparse dictionary learning is K-SVD.

Sparse dictionary learning has been applied in several contexts. In classification, the problem is to determine the class to which a previously unseen datum belongs. Suppose a dictionary has already been built for each class. A new datum is then associated with the class whose dictionary gives the best sparse representation of that datum. Sparse dictionary learning has also been applied to image denoising. The key idea is that a clean image patch can be sparsely represented by an image dictionary, but the noise cannot.

Rule-based machine learning

Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves "rules" to store, manipulate, or apply knowledge. The defining characteristic of a rule-based learner is the identification and use of a set of rules that collectively represent the knowledge captured by the system. This contrasts with other machine learners, which typically learn a single model that is applied to every instance in order to make a prediction. Rule-based machine learning approaches include learning classifier systems, association rule learning, and artificial immune systems.
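
As a toy sketch of the idea that the "model" is a collection of rules rather than a single function, the snippet below classifies records by applying the first matching if-then rule; the rules and records are invented for illustration and would normally be learned rather than hand-written.

```python
# Toy sketch of rule-based prediction: the model is just an ordered rule list.
rules = [
    (lambda r: r["temp"] > 38.0 and r["cough"], "flu"),
    (lambda r: r["cough"],                      "cold"),
    (lambda r: True,                            "healthy"),   # default rule
]

def classify(record):
    # Apply the first rule whose condition matches the record.
    for condition, label in rules:
        if condition(record):
            return label

print(classify({"temp": 39.2, "cough": True}))   # -> flu
print(classify({"temp": 36.6, "cough": False}))  # -> healthy
```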

Machine learning applications

Applications of machine learning include the following:

  • Automated theorem proving
  • Adaptive websites
  • Affective computing (emotional artificial intelligence)
  • Bioinformatics
  • Brain–computer interfaces
  • Cheminformatics
  • Classification of DNA sequences
  • Computational anatomy
  • Machine vision, including object recognition
  • Credit card fraud detection
  • General game playing
  • Information retrieval
  • Internet fraud detection
  • Linguistics
  • Marketing
  • Machine learning control
  • Machine perception
  • Medical diagnosis
  • Economics
  • Insurance
  • Natural language processing
  • Natural language inference
  • Optimization and metaheuristic algorithms
  • Online advertising
  • Recommender systems
  • Robot locomotion
  • Search engines

Software

The following are some software packages that include a variety of machine learning algorithms.

Free and open-source software:

  • CNTK
  • Deeplearning4j
  • Dlib
  • GNU Octave
  • H2O
  • Mahout
  • Mallet
  • MEPX
  • mlpy
  • MLPACK
  • MOA (Massive Online Analysis)
  • MXNet
  • ND4J: ND arrays for Java
  • NuPIC
  • OpenAI Gym
  • OpenAI Universe
  • OpenNN
  • Orange
  • R

Conclusion

In this article, we explained the different kinds of machine learning methods and the essential points about each of them.

Source: https://mediasoft.ir/%db%8c%d8%a7%d8%af%da%af%db%8c%d8%b1%db%8c-%d9%85%d8%a7%d8%b4%db%8c%d9%86-machine-learning-%da%86%db%8c%d8%b3%d8%aa-%da%86%d9%87-%d9%85%d9%81%d8%a7%d9%87%db%8c%d9%85%db%8c-%d8%af/