Machine Vision: Robot Bionic Vision Technologies
Machine vision is a branch of technology that lets computers and robots see images of the surrounding world and analyze them. You have probably seen the animated film Wall-E: a cute little robot who lives alone (if you don’t count his little cockroach) in the wasteland of planet Earth and builds towers out of leftover garbage. Before his exciting exploratory journey begins, Wall-E goes out to the garbage every day on schedule, observes and examines it carefully, keeps anything that catches his attention, and compacts the rest into cubes of garbage.
But the story gets interesting when, on one of these repetitive days, our little robot encounters an unfamiliar but stunning phenomenon: a plant. And Wall-E’s exciting, adventurous journey begins. But have you ever wondered why Wall-E didn’t destroy the plant like the rest of the trash? The answer lies in machine vision technology.
What is Machine Vision?
The easiest way to understand machine vision is to think of this technology as the “eye” of the machine. Machine vision is an advanced technology that enables intelligent systems to automatically observe and analyze their surroundings. With its help, automatic inspection of the environment, process control, and robot guidance become possible through image processing.
It is important to know that machine vision leaves its traces across many branches of technology: software and hardware products, integrated systems, processes, methods, and diverse areas of expertise. Machine vision gives computer systems a new capability that, combined with other technologies, helps us find new ways to solve problems.
Of course, this concept should not be confused with computer vision. Machine vision depends on a camera or similar device connected to the robot that captures images of the surrounding world. To put it simply, machine vision works as the machine’s eye, while computer vision is the brain that processes the images received from that eye. Consequently, without computer vision there is no machine vision at all! So far, then, we know that machine vision allows robots to see their surroundings.
Back to Wall-E’s story: Wall-E sees his surroundings and the objects in them with the help of machine vision technology. Using this ability, he keeps the things he likes and destroys the rest. Let’s be a little more precise: if we take a quick look at Wall-E’s collection, we find that his favorite things are ones he cannot find every day, things that are somehow “different” and “rare”! But how does a robot understand that?! To answer this question, we must first see how machine vision works!
How does machine vision work?
Since we humans designed machine vision, its process is largely similar to human vision. Don’t worry if you still don’t understand how machine vision works; instead, let’s learn a little about how our own vision works! To see anything, light is needed first. Light hits an object and bounces back to our eyes; up to this point, the process of seeing is the same for humans and machines (where the camera plays the role of the eyes).
In the human eye, receptor cells take the light received from the object, convert it into an electrical signal, and send it to the brain (yes, our brain also runs on electricity!). From this point on, the brain takes over processing the received image, comparing it with other available information and recognizing the object’s identity.
With a little exaggeration, we can claim that the same process happens in machine vision. The camera receives the reflected light, converts it into a digital signal, and sends it to the processing circuit. Here the image processing begins: the processor compares the image information with the data it already has, and if it finds a similar item, it recognizes and reports the object’s identity. But what if it has never seen such an object before?
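This compare-with-memory step can be sketched in a few lines of code. The sketch below is purely illustrative (the object names and feature vectors are made up, and real systems use far richer features): the system measures how far an incoming feature vector is from each stored one and reports the closest match, or nothing at all if no stored object is similar enough.

```python
import math

# Hypothetical reference "memory": object name -> stored feature vector.
# A real system would learn these features during a training phase.
KNOWN_OBJECTS = {
    "soda_can": (0.9, 0.1, 0.3),
    "paper":    (0.2, 0.8, 0.7),
}

def identify(features, max_distance=0.2):
    """Compare extracted features against stored ones; return the closest
    match, or None if nothing is similar enough (an unseen object)."""
    best = min(KNOWN_OBJECTS, key=lambda name: math.dist(features, KNOWN_OBJECTS[name]))
    if math.dist(features, KNOWN_OBJECTS[best]) <= max_distance:
        return best
    return None

print(identify((0.85, 0.15, 0.3)))  # soda_can: close to a stored object
print(identify((0.5, 0.5, 0.0)))    # None: nothing in memory is similar
```

The `None` case is exactly the situation the next paragraph deals with: an object the system has never seen before.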
Well, here machine learning technology comes into play! If the machine’s control software is equipped with machine learning, it will register this object as a new item and essentially “learn” its properties. So far we have briefly seen how machine vision works, but what we have covered is a drop in the ocean of this science! There is still a lot to learn about machine vision, so let’s take a closer look at how it works.
Machine vision imaging process
The first step in machine vision is imaging. A system equipped with machine vision technology (for example, a robotic arm) uses a device to take pictures of the environment. This device is often a type of camera that can be separate from the image processing unit or combined with it to create a smart camera or smart sensor. Depending on the intended application, various tools such as photoresistors (photocells), digital cameras, 3D cameras, thermal cameras, and smart cameras can be used.
Typically, a machine vision system uses conventional 2D imaging under standard lighting conditions. However, if special lighting or imaging is needed to detect the details of a part, multispectral imaging, hyperspectral imaging, infrared imaging, line-scan imaging, 3D imaging, or X-ray imaging can be used. The main difference is that images obtained under 2D lighting are often monochrome, while complex imaging captures information such as color, frame rate, and resolution. Complex imaging is also used to track moving objects.
Well, as for Wall-E, we can be almost sure our little robot has advanced cameras, but when it comes to color recognition, we’re not so sure! Why? The image below shows the world from Wall-E’s perspective, and as you can see, it is monochromatic!
So how did Wall-E tell the plant apart from the garbage (or, not to put too fine a point on it, know that it was special)? It is important to know that machine vision is more than taking pictures, processing images, and recognizing objects by color! Keep reading this article to solve the puzzle!
Once the picture is taken, it is sent to a CPU, GPU, FPGA, or a combination of these for processing. The type and accuracy of the machine vision tools and the system’s processor are determined by the size and complexity of the system. For example, the processor required to inspect 12 parts per day will differ from one suited to a more demanding job, such as inspecting 12 parts per minute.
In the second case, the amount of data grows enormously, so the processor must be more complex and accurate. If the machine vision system is to implement machine learning and deep learning, it will need an even more advanced and complex processor. Image processing is the second and one of the most important steps in the machine vision process; the information obtained here is used to produce the final result shown to the user.
A typical image processing pipeline starts with tools such as filters applied to the image to modify it. Then the properties of the objects in the image, such as their shape and details, are extracted. Data such as barcodes, sizes, zip codes, and other information included in the image are read, just as a product’s barcode is scanned in a store and its information registered in the system.
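As a taste of what such a filter looks like, here is a minimal sketch of one of the most common ones: a 3x3 mean (box blur) filter, often applied before feature extraction to suppress pixel noise. The tiny 3x3 "image" is invented for illustration; real images are of course far larger.

```python
import numpy as np

def box_blur(img):
    """Apply a 3x3 mean filter: every output pixel becomes the average
    of itself and its 8 neighbours (edges are replicated by padding)."""
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros((h, w))
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out += padded[dy:dy + h, dx:dx + w]
    return out / 9.0

# A single bright noise spike on a dark background...
noisy = np.array([[0.0, 0.0, 0.0],
                  [0.0, 9.0, 0.0],
                  [0.0, 0.0, 0.0]])
# ...gets smeared out over its neighbourhood by the blur.
print(box_blur(noisy))
```

After blurring, the lone spike of value 9 is spread into much smaller values, which is exactly why such filters help the later recognition stages ignore sensor noise.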
In the next step, this data is transferred to the processing unit, and the processor decides what to do with the part or object. Machine vision technology can apply a wide range of filters and image processing methods to images and extract various information from them. Which filter and method to use depends on the purpose and application of the system. The image processing process can be divided into the following stages:
Thresholding and pixel counting
In this step, parts of the image are cut away if needed. To do this, the system picks a base value for a shade between black and white (i.e., gray) and uses it to separate the black and white parts of the image. This operation, called thresholding, helps the system recognize objects in the image and separate them from other details. After thresholding, the pixel-counting step begins.
As you know, digital images are made up of very small colored or black-and-white squares called pixels, which together form the whole picture. In the pixel-counting stage, the white and black pixels are counted separately; this is usually done with pixel-counting sensors. Pixel counting is used in automatic packaging systems: the pixel-counting sensor recognizes bottle labels from their combination of black and white pixels and builds up an image of the entire bottle.
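The two steps above, thresholding against a base gray value and then counting pixels on each side of it, can be sketched in a few lines. The 4x4 grayscale "image" and the threshold of 128 are invented for illustration:

```python
import numpy as np

def threshold_and_count(gray, level=128):
    """Binarize a grayscale image against a base gray level, then count
    white and black pixels, as a pixel-counting sensor would."""
    binary = gray >= level          # True = white, False = black
    white = int(binary.sum())
    black = binary.size - white
    return binary, white, black

# Toy 4x4 grayscale "image" (0 = black, 255 = white).
gray = np.array([[ 10, 200,  40, 250],
                 [ 30, 220,  20, 240],
                 [  5, 190,  60, 230],
                 [ 15, 210,  50, 245]])
_, white, black = threshold_and_count(gray)
print(white, black)  # 8 8
```

The resulting black/white pattern is what a label-checking system compares against its expected pattern.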
Segmentation, edge detection, and color processing
In this part of the image processing pipeline, the digital image is divided into different regions so that, by simplifying or transforming the image, it can be analyzed more easily or more meaning can be extracted from it. Segmenting the image also lets the processing system categorize the objects in the photo more easily. With edge detection, the machine vision system can find the edges of any part or object in the photo and distinguish one item from another.
It may seem simple, but recognizing objects is not as easy for computers as it is for the human brain! In machine learning, edge detection helps the system learn how to detect the edges of different objects and categorize them more easily. If the system is equipped with color cameras or color detection sensors, distinguishing and categorizing parts and objects becomes much easier.
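One classical way to find edges is with Sobel filters, which estimate how sharply brightness changes at each pixel; large changes mark edges. The sketch below is a minimal, unoptimized version (real systems use convolution routines from libraries), and the 6x6 test image of a dark background with a bright square is invented for illustration:

```python
import numpy as np

# Sobel kernels: horizontal and vertical brightness-change detectors.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])
SOBEL_Y = SOBEL_X.T

def edge_magnitude(img):
    """Gradient magnitude at each interior pixel; big values = edges."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            patch = img[y:y + 3, x:x + 3]
            gx = (patch * SOBEL_X).sum()   # horizontal change
            gy = (patch * SOBEL_Y).sum()   # vertical change
            out[y, x] = np.hypot(gx, gy)
    return out

# A bright square on a dark background: edges light up at its border.
img = np.zeros((6, 6))
img[2:4, 2:4] = 1.0
edges = edge_magnitude(img)
```

Thresholding the resulting magnitude map gives a clean outline of the object, which is then used to separate it from its neighbors.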
Machine learning, deep learning, and neural networks
Here the image processing ends, and information processing begins, building on the basic information obtained from the image. With the help of three technologies, machine learning, deep learning, and neural networks, this information is processed with higher speed and accuracy. These technologies give the machine vision system more processing power and help it understand which data is valuable, which is a great help in cases involving large volumes and complex processes.
Pattern recognition and information reading
With pattern recognition, the machine vision system can find, recognize, and count certain patterns during the process. Distinguishing patterns from one another, or finding complex ones, can be taught to a machine through machine learning or deep learning. Examples include objects that are rotated, hidden behind other objects, or come in different sizes.
Information reading is a feature that allows the machine vision system to read the information on labels or objects through a data carrier such as a QR code, a barcode, or a radio-frequency identification (RFID) tag. You can see this feature at work in stores; for example, some clothes carry a barcode on their tag that can be scanned to access information such as country of manufacture, material, and washing instructions. The amount of information a machine vision system can read varies; for example, an RFID tag carries more comprehensive and complex information than a barcode.
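Once the bars of a barcode have been optically decoded into digits, the system still verifies them, because a misread stripe would register the wrong product. Retail barcodes (EAN-13) carry a check digit for exactly this: digits in odd positions weigh 1, digits in even positions weigh 3, and the check digit rounds the total up to a multiple of 10. A minimal validator (the sample number is a commonly used valid EAN-13 example):

```python
def ean13_check_digit(first12):
    """Check digit for the first 12 digits of an EAN-13 barcode."""
    digits = [int(c) for c in first12]
    total = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits))
    return (10 - total % 10) % 10

def is_valid_ean13(code):
    """True if the 13-digit code's last digit matches its check digit."""
    return ean13_check_digit(code[:12]) == int(code[12])

print(is_valid_ean13("4006381333931"))  # True
print(is_valid_ean13("4006381333930"))  # False: wrong check digit
```

If validation fails, the scanner simply rejects the read and tries again, which is why a checkout scanner almost never rings up the wrong item from a smudged code.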
Character recognition and measurement
Like information reading, character recognition allows the system to read text and numbers, such as product serial numbers. The more complex the text, the more important it becomes to improve the machine vision system’s capabilities by training it with machine learning or deep learning. The measurement capability lets the system measure the dimensions and size of the objects in the image.
With the measurement capability, the system can express an object’s dimensions in different units and modes, such as pixels, inches, or millimeters, as well as length, time, and weight.
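Converting between pixels and real-world units requires a calibration step: an object of known size is placed in the camera's view, and its pixel span fixes the scale. A minimal sketch, assuming a hypothetical 50 mm gauge block that spans 200 pixels in the image:

```python
def measure_mm(length_px, reference_px, reference_mm):
    """Convert a pixel measurement to millimetres using a calibration
    object of known physical size seen at the same distance."""
    mm_per_px = reference_mm / reference_px
    return length_px * mm_per_px

# Hypothetical calibration: a 50 mm gauge block spans 200 pixels,
# so an object spanning 340 pixels measures 85 mm.
print(measure_mm(340, 200, 50.0))  # 85.0
```

Real systems also correct for lens distortion and viewing angle, but the core idea is this single scale factor.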
Deciding on the outcome
At this stage, everything from the previous steps comes together! Using the information obtained so far, the system decides what to do with the part. For example, the part’s characteristics are measured against the required standards, and if it does not meet the required quality, it is sent to the defective parts bin; otherwise, it continues down the production line. Another example: sometimes a production line handles different parts, each of which must be directed to a specific destination. In this case, the machine vision system recognizes the identity of each piece and sends it where it belongs.
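The accept/reject decision described above often reduces to a tolerance check on a measured value. A minimal sketch (the nominal size, tolerance, and destination names are invented for illustration):

```python
def route_part(width_mm, nominal=85.0, tolerance=0.5):
    """Accept the part if its measured width is within tolerance of the
    nominal size; otherwise send it to the reject bin."""
    if abs(width_mm - nominal) <= tolerance:
        return "production_line"
    return "reject_bin"

print(route_part(85.2))  # within 0.5 mm of nominal -> production_line
print(route_part(83.9))  # 1.1 mm off nominal -> reject_bin
```

In practice this decision is what triggers an actuator, a pusher arm or an air jet, to physically divert the part.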
OK! Now that we understand how machine vision works in detail, it’s time to answer the Wall-E puzzle! Ready to find out? You’ve probably figured it out by now, but just to be sure, let’s go over the puzzle one more time. The question is: why did Wall-E separate the plant from the waste?
Knowing how a system equipped with machine vision (in this case, Wall-E) operates, we know that it examines the image of each item and recognizes its identity from the details. Poor Wall-E has been sorting garbage for years, so he is familiar with its properties! Any garbage that needs to be destroyed, such as soda cans, paper, or broken electronics, has been seen by Wall-E many times and registered in his memory as garbage.
So every time Wall-E comes across a familiar piece of garbage, he recognizes and destroys it using the garbage-recognition pattern he learned years ago (probably with the help of machine learning). Then, on one of those repetitive days, Wall-E encounters something he has never seen before: a plant. Wall-E captures its image and compares it to the patterns of debris and every other object he has seen so far, but the search comes up empty! There is no memory or data related to the plant in Wall-E’s memory!
As a result, the plant’s identity as garbage is not confirmed, so Wall-E sets it aside, and thanks to machine vision, the fate of planet Earth is changed! The Wall-E animation can be considered a complete lesson in robotics, machine vision, and machine learning; the film shows various applications of machine vision technology that may seem exaggerated but are rooted in reality. But in today’s world, what is machine vision actually used for?
Applications of machine vision
The primary applications of machine vision technology are image-based inspection, sorting, and guidance. Installed on a robot, machine vision helps it recognize where to pick up or place parts. With this technology, it is possible to create intelligent robotic lines that automatically check and analyze parts along the production line, remove them or place them elsewhere if necessary, and ultimately control and direct the entire product line.
Suppose we add a spectrometer camera to the system. The robotic line will then be able to recognize colors while measuring and checking parts, and use this information for better measurements. However, adding such details slows the system’s response, because the processor needs more time to handle the extra information. With the progress of software, control systems can now be designed and implemented for the specific needs of almost any industry, from food to automotive.
A system equipped with machine vision can measure and handle a wide range of objects and parts according to the target industry. Machine vision is used across many industries, including automotive, electronics and semiconductors, food and beverages, road traffic and intelligent transportation systems, medical imaging, packaging, labeling and printing, pharmaceuticals, research, and TV broadcasting.
This technology sits alongside other fields such as deep learning and machine learning, helping businesses better understand and process their data and increase the efficiency of their products. For example, BMW uses these artificial intelligence and machine learning technologies to improve the performance of its cars, and the same happens at Tesla, which focuses on producing self-driving cars.
Machine vision in robotics
Today, the use of robots has increased significantly, so applying machine vision technology to robots has become particularly important. This technology gives robots higher accuracy, better orientation, and an easier understanding of their tasks. As a result, they can inspect a part more accurately, place it in the right position faster, and solve more complex tasks in less time. It also allows operators to control robots more easily along two axes of movement.
In most cases, a single camera is installed on the robot, or a dual-camera system is used for higher accuracy in complex activities such as sorting products or pick-and-place with robotic arms. Laser scanning is another option: a stripe of light is projected onto the product to identify defects. Similarly, 3D cameras can create a 3D map of the product or part of it.
Traces of machine vision in everyday life
Despite the widespread use of robots in industry, they have not yet entered our daily lives. But that day is not far off. Even as you read this article, technologies that use computer vision are all around you. For example, Google Translate can easily translate a photo of handwritten text; smartphones unlock with facial recognition; and smartphone health apps all use computer vision technology.
An example of machine vision in everyday life is a small, charming robot called Cozmo. Made mostly for entertainment, it can recognize its position, move, play, right itself if it falls over, and connect to a mobile phone and personal voice assistant. Interestingly, Cozmo’s design was inspired by one of the robots in the movie Wall-E (remember that obsessive little guy?). Cozmo can connect to your smart home system and control environmental features like lighting and temperature.
If you’re into science fiction movies, chances are you’ve come across characters with machine vision technology many times by now. The entire computer world of The Matrix (1999), the villainous HAL 9000 computer of 2001: A Space Odyssey (1968), the robotic boy David of A.I. Artificial Intelligence (2001), and many other characters all benefit from machine vision. With that in mind, here’s an interesting fact (ready for a surprise?): you were already familiar with machine vision before reading this article!
The astonishing advancement of technology can be both frightening and promising. Maybe a day will come when the scary world of The Matrix becomes reality and humans are defeated by machines, or maybe the opposite: one day a small robot like Wall-E could make humans join hands again and unite to restore life on Earth. What do you think? What future do you see for the world of machines? Share your thoughts at the bottom of this page. We are waiting for your comments!