One of the most important tasks of artificial intelligence and machine learning specialists is data collection and analysis.
It is needed both in defining model inputs and in obtaining evaluation criteria for experiments. In its simplest definition, data collection is the process of getting data about the phenomenon under study.
There are several ways to obtain data. In some cases, the data is already available in formatted documents, and the problem is finding and accessing it. In other cases, the data are obtained through questionnaires, field surveys, and physical tests.
In large-scale projects, such as urban or economic models, the required data can often be obtained from official documents. Common data sources for such models include census reports, statistical summaries released by government agencies, and similar publications. In business systems projects, one of the most valuable data sources is corporate accounting and engineering documentation.
These documents are sometimes helpful for obtaining estimates of product demand, production costs, and other related data. However, you should note that they are only a starting point.
Questionnaires and field surveys are also potential ways to obtain data related to industrial projects.
With the advent of online and continuous data collection systems, data acquisition has become a semi-continuous process, because the data can be accessed in a structured form from databases or electronic documents.
Physical experiments are typically costly and time-consuming because they require the measurement, recording, and editing of data. In addition, you should be very careful when planning these types of tests to make sure that the test conditions describe the actual needs and that the data are recorded correctly.
However, in some cases, data may not be available, or the existing budget or the nature of the system may preclude testing.
A clear example is a proposed layout plan for an assembly line. One way to obtain data in such scenarios is to use standard or previously available data; in this method, activity duration estimates are made using standard data tables. Another way is to use data obtained from similar or equivalent activities.
In both cases, collecting data to define model inputs and collecting data to evaluate system performance with the model, we face the problem of converting raw data into a usable form. This is why methods designed to summarize or describe the essential features of a data set are necessary. These methods condense the data at the cost of discarding some information.
Data grouping
Data grouping refers to converting data into a form that is easier to process. The data are grouped into smaller batches or classes and then summarized in a table that specifies each group and the number of observations it contains. This type of table is called a frequency distribution table, and it provides a concise description of the information.
There are several types of distribution tables useful for displaying grouped data. One of them is the cumulative frequency table, obtained by successively adding the frequencies in the frequency table.
Another tool available to artificial intelligence professionals is the relative frequency distribution, obtained by dividing each group's frequency by the total number of observations. Relative frequency tables are helpful when comparing two or more statistical distributions. Frequency and cumulative distributions are often plotted to make the data easier to interpret.
The most common type of graphical representation is the histogram, which shows the frequency of each group as a rectangle whose height indicates the frequency of that group.
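As a rough illustration, the following Python sketch groups a synthetic data set into equal-width classes and derives the frequency, relative frequency, and cumulative frequency tables described above, then draws the corresponding histogram. The data, group count, and variable names are assumptions made for the example.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical sample: 200 observed processing times (minutes), assumed for illustration
rng = np.random.default_rng(0)
data = rng.exponential(scale=5.0, size=200)

# Group the data into equal-width classes and count the frequency of each group
num_groups = 10
counts, edges = np.histogram(data, bins=num_groups)

# Relative frequency: each group's count divided by the total number of observations
relative = counts / counts.sum()

# Cumulative frequency: successive sums of the group frequencies
cumulative = np.cumsum(counts)

# Print a simple frequency distribution table
print(f"{'Group':<20}{'Freq':>6}{'Rel.':>8}{'Cum.':>6}")
for i in range(num_groups):
    label = f"[{edges[i]:.1f}, {edges[i+1]:.1f})"
    print(f"{label:<20}{counts[i]:>6}{relative[i]:>8.3f}{cumulative[i]:>6}")

# Histogram: one rectangle per group, with height equal to the group's frequency
plt.hist(data, bins=edges)
plt.xlabel("Value")
plt.ylabel("Frequency")
plt.title("Histogram of grouped data")
plt.show()
```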
Important points when preparing a frequency distribution table
First, you need to specify the number of groups and each group’s upper and lower limits. These choices depend on the nature and end-use of the data.
Keep the group widths as equal as possible, although exceptions are sometimes needed.
The group intervals should not overlap, and each data point should belong to exactly one group.
Usually, use a minimum of 5 and a maximum of 20 groups; one way to choose the number within this range is shown in the sketch below.
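These guidelines can be turned into a small helper. The sketch below uses Sturges' rule, a common heuristic that is not prescribed by the text, to pick the number of equal-width groups and then clamps it to the 5 to 20 range mentioned above; the function names and sample size are assumptions.

```python
import math

def sturges_num_groups(n: int) -> int:
    """Sturges' rule: a common heuristic for the number of groups (bins)."""
    return int(math.ceil(1 + math.log2(n)))

def clamp_groups(k: int, low: int = 5, high: int = 20) -> int:
    """Keep the group count within the usual 5 to 20 range."""
    return max(low, min(high, k))

n_observations = 200   # assumed sample size for the example
k = clamp_groups(sturges_num_groups(n_observations))
print(k)               # 9 groups for 200 observations
```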
Parameter estimation
If a set of data contains all possible observations of a random variable, it is called a population, and if it has only a part of the observations, it is called a sample. Another way to summarize a data set is to consider the data as a sample used to estimate the relevant population parameters.
The most commonly used population parameters are the mean, a measure of central tendency, and the variance, a measure of dispersion. There are two essential points to consider when estimating population parameters from sample data.
The first is that, for some variables, we record only the value of each observation regardless of when it occurs; statistics computed from such time-independent records are known as observational statistics. The second point concerns variables whose values are defined with respect to time; for these, the length of time each value persists must be taken into account.
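A minimal Python sketch of both cases, using made-up numbers: an observational sample summarized by its sample mean and variance, and a time-dependent variable summarized by a time-weighted mean. The data and variable names are assumptions for illustration.

```python
import numpy as np

# Hypothetical observational sample (values recorded regardless of time)
sample = np.array([4.2, 5.1, 3.8, 6.0, 4.9, 5.5])

# Sample mean (central tendency) and sample variance (dispersion)
mean = sample.mean()
variance = sample.var(ddof=1)   # ddof=1 gives the unbiased sample variance

# Hypothetical time-dependent variable: values and the durations they persisted
values    = np.array([2.0, 5.0, 3.0])
durations = np.array([10.0, 4.0, 6.0])   # time spent at each value

# Time-weighted mean: each value weighted by how long it was observed
time_weighted_mean = np.average(values, weights=durations)

print(mean, variance, time_weighted_mean)
```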
Distribution estimates
While the properties of hypothesized distributions help the modeler select an appropriate theoretical distribution, it is best to check this choice with one or more goodness-of-fit tests. The chi-square and Kolmogorov-Smirnov tests are well-known tests in this field.
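As a rough sketch of how such tests might be applied in Python with scipy, the example below fits an exponential distribution to synthetic data and runs both tests against it. The data, candidate distribution, and binning are assumptions for illustration, not a prescribed procedure.

```python
import numpy as np
from scipy import stats

# Hypothetical data assumed to follow an exponential distribution
rng = np.random.default_rng(1)
data = rng.exponential(scale=5.0, size=200)

# Fit the candidate distribution's parameters from the sample
loc, scale = stats.expon.fit(data, floc=0)

# Kolmogorov-Smirnov test against the fitted exponential distribution
ks_stat, ks_p = stats.kstest(data, "expon", args=(loc, scale))

# Chi-square test: compare observed group counts with the counts the fitted
# distribution would predict in the same groups
counts, edges = np.histogram(data, bins=10)
cdf_vals = stats.expon.cdf(edges, loc=loc, scale=scale)
cdf_vals[0], cdf_vals[-1] = 0.0, 1.0          # cover the full support of the distribution
expected = np.diff(cdf_vals) * counts.sum()
chi2_stat, chi2_p = stats.chisquare(counts, f_exp=expected, ddof=1)  # ddof=1 for the fitted scale

print(f"KS: stat={ks_stat:.3f}, p={ks_p:.3f}")
print(f"Chi-square: stat={chi2_stat:.3f}, p={chi2_p:.3f}")
```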
Simulation model
To prepare a simulation model, the modeler must choose a mental framework for describing the system. This framework or perspective describes the general approach in which the functional relationships of the system can be observed and described. Systems models can be divided into two groups: discrete-change models and continuous-change models. These terms refer to the model and not to the actual system.
Each system can be modeled with either discrete or continuous changes. Time is the most important independent variable in most simulations, and the other simulation variables are functions of time, that is, dependent variables.
In simulation, the terms discrete and continuous describe the behavior of the dependent variables. In a discrete simulation, the dependent variables change only at specific points in time called event times.
In such models, the time variable itself may be continuous or discrete, depending on whether events can occur at any point in time or only at specific points in time.
In a continuous simulation, the dependent variables of the model may change continuously over the simulation time.
If the values of the system variables are only available at certain time intervals, the simulation is discrete in time. If the values of the system variables are defined at every point in time, the simulation is continuous in time.
Of course, there is a third mode, called combined simulation. In this method, the dependent variables of the model may change discretely, continuously, or continuously with discrete jumps, and the time variable may be continuous or discrete.
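As a rough illustration of the discrete-change view, the sketch below simulates a single-server queue: the dependent variable (the queue length) changes only at event times, while simulated time itself is continuous. The queue structure, rates, and function names are assumptions made for the example, not something prescribed by the text.

```python
import heapq
import random

def simulate(sim_time=20.0, arrival_rate=0.5, service_rate=0.7, seed=0):
    """Minimal discrete-event sketch of a single-server queue (assumed parameters)."""
    rng = random.Random(seed)
    events = [(rng.expovariate(arrival_rate), "arrival")]   # heap of (event_time, kind)
    queue_length, busy = 0, False

    while events:
        now, kind = heapq.heappop(events)
        if now > sim_time:
            break
        if kind == "arrival":
            # schedule the next arrival
            heapq.heappush(events, (now + rng.expovariate(arrival_rate), "arrival"))
            if busy:
                queue_length += 1    # the dependent variable jumps at an event time
            else:
                busy = True
                heapq.heappush(events, (now + rng.expovariate(service_rate), "departure"))
        else:  # departure
            if queue_length > 0:
                queue_length -= 1
                heapq.heappush(events, (now + rng.expovariate(service_rate), "departure"))
            else:
                busy = False
        print(f"t={now:6.2f}  {kind:9s}  queue={queue_length}")

simulate()
```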