While robot football has helped to coordinate and focus research in some specialized skills, research involving broader abilities is fragmented. Sensors—sonar and laser rangefinders, cameras, and special light sources—are used with algorithms that model images or spaces by using various geometric shapes and that attempt to deduce what a robot’s position is, where and what other things are nearby, and how different tasks can be accomplished. Faster microprocessors developed in the 1990s have enabled new, broadly effective techniques. For example, by statistically weighing large quantities of sensor measurements, computers can mitigate individually confusing readings caused by reflections, blockages, bad illumination, or other complications. Another technique employs “automatic” learning to classify sensor inputs—for instance, into objects or situations—or to translate sensor states directly into desired behaviour. Connectionist neural networks containing thousands of adjustable-strength connections are the most famous learners, but smaller, more-specialized frameworks usually learn faster and better. In some, a program that does the right thing as nearly as can be prearranged also has “adjustment knobs” to fine-tune the behaviour. Another kind of learning remembers a large number of input instances and their correct responses and interpolates between them to deal with new inputs. Such techniques are already in broad use for computer software that converts speech into text.
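The last learning technique mentioned above, remembering input instances and interpolating between them, can be sketched in a few lines. The following is an illustrative toy example, not any particular research system; the class name, the inverse-distance weighting scheme, and the training data are all assumptions made for the sketch.

```python
# A minimal sketch of memory-based learning: store observed inputs with their
# correct responses, then interpolate over the nearest stored instances to
# answer a new input. All names and data here are illustrative.
import math

class InstanceMemory:
    def __init__(self, k=2):
        self.k = k           # number of neighbours to interpolate over
        self.instances = []  # list of (input_vector, response) pairs

    def remember(self, x, response):
        self.instances.append((x, response))

    def respond(self, x):
        """Interpolate a response for x by inverse-distance weighting
        over the k nearest remembered instances."""
        nearest = sorted(
            (math.dist(x, xi), yi) for xi, yi in self.instances
        )[: self.k]
        for d, y in nearest:
            if d == 0:           # exact match: return stored response
                return y
        weights = [1.0 / d for d, _ in nearest]
        return sum(w * y for w, (_, y) in zip(weights, nearest)) / sum(weights)

# Teach the memory a simple mapping (here, y = 2 * x), then query between samples.
mem = InstanceMemory(k=2)
for v in [0.0, 1.0, 2.0, 3.0]:
    mem.remember((v,), 2.0 * v)
print(mem.respond((1.5,)))  # interpolates between neighbours: 3.0
```

The same structure scales to higher-dimensional sensor vectors; only the distance computation and the stored instances change.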

In 1970 the Japanese roboticist Masahiro Mori introduced the concept of the “uncanny valley”: as an object’s design becomes more humanlike, people’s affinity for it increases, but when the resemblance grows close to, yet still short of, fully human, affinity drops sharply into feelings of eeriness or discomfort. Only when the design becomes indistinguishable from a real human does affinity climb again. The dip between these two peaks of affinity forms the “valley.”
This article explores the evolution of robots and robotics. For more details on industrial uses, refer to the article on automation.
Machines that exhibit flexible behavior and possess some human-like physical characteristics have been created for industrial applications, despite not resembling humans. The first stationary industrial robot, known as the programmable Unimate, was an electronically controlled hydraulic arm capable of executing complex sequences of movements. This innovation was introduced in 1954 by American engineer George Devol and further developed by Unimation Inc., a company established in 1956 by engineer Joseph Engelberger. In 1959, a prototype of the Unimate was showcased at a General Motors die-casting plant in Trenton, New Jersey. The following year, Condec Corp., which had acquired Unimation, delivered the first production-line robot to the GM factory, tasked with the challenging job of removing and stacking hot metal components from a die-casting machine. Unimate arms are still being developed and marketed by various licensees globally, with the automotive sector being the primary consumer.
In the late 1960s and 1970s, more sophisticated computer-controlled electric arms, guided by sensors, were developed at the Massachusetts Institute of Technology (MIT) and Stanford University. These arms were utilized alongside cameras in robotic hand-eye research. Victor Scheinman from Stanford, in collaboration with Unimation for General Motors, created the first industrial electric arm known as PUMA (Programmable Universal Machine for Assembly). Since its introduction in 1978, PUMA has been instrumental in assembling various automobile subcomponents, including dashboard panels and lights. Its design inspired numerous imitations, and its successors, both large and small, continue to play a vital role in light assembly tasks across electronics and other sectors. Since the 1990s, smaller electric arms have gained significance in molecular biology labs, where they meticulously manage test-tube arrays and perform precise pipetting of complex reagent sequences.

Mobile industrial robots made their debut in 1954 with the introduction of a driverless electric cart by Barrett Electronics Corporation, which began transporting loads in a grocery warehouse in South Carolina. These machines, known as Automatic Guided Vehicles (AGVs), typically navigate by following signal-emitting wires embedded in concrete floors. In the 1980s, AGVs were enhanced with microprocessor controllers, enabling them to perform more complex tasks than those managed by basic electronic controls. By the 1990s, a new navigation technique gained traction in warehouses: AGVs equipped with scanning lasers could determine their position by measuring reflections from fixed retro-reflectors, requiring at least three to be visible from any given location.
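The position fix from three retro-reflectors can be computed by trilateration: each measured range constrains the vehicle to a circle around a reflector, and subtracting one circle equation from the other two leaves a pair of linear equations. The sketch below illustrates that geometry with made-up reflector coordinates and ranges; it is not vendor AGV firmware.

```python
# Sketch: fix an AGV's position from laser ranges to three retro-reflectors
# at known locations. Coordinates and ranges below are illustrative.

def trilaterate(p1, r1, p2, r2, p3, r3):
    """Solve for (x, y) given distances r_i to known points p_i.

    Subtracting the circle equation for p1 from those for p2 and p3
    yields two linear equations in x and y, solved by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    if det == 0:
        raise ValueError("reflectors are collinear; position is ambiguous")
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Reflectors at three warehouse corners; the vehicle is actually at (3, 4).
pos = trilaterate((0, 0), 5.0, (10, 0), 65 ** 0.5, (0, 10), 45 ** 0.5)
print(pos)  # (3.0, 4.0)
```

The requirement that the three reflectors not lie on one line corresponds to the zero-determinant check: collinear landmarks leave the position ambiguous.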
Industrial robots originated in the United States, but the industry struggled to flourish there. Unimation was purchased by Westinghouse Electric Corporation in 1983 and closed down a few years later. Cincinnati Milacron, Inc., another prominent American manufacturer of hydraulic arms, sold its robotics division in 1990 to the Swedish-Swiss company Asea Brown Boveri Ltd. (ABB). Adept Technology, Inc., which descended from the Stanford and Unimation work on electric arms, is the only significant American manufacturer remaining in the field. Meanwhile, foreign licensees of Unimation, particularly in Japan and Sweden, continue to thrive, and during the 1980s various companies in Japan and Europe began to enter the robotics market in earnest. The prospect of an aging workforce prompted Japanese manufacturers to explore advanced automation even before it yielded significant returns, creating opportunities for robot makers. By the late 1980s Japan, led by the robotics divisions of Fanuc Ltd., Matsushita Electric Industrial Company, Ltd., Mitsubishi Group, and Honda Motor Company, Ltd., had emerged as the world leader in the manufacture and use of industrial robots. High labor costs in Europe similarly encouraged the adoption of robot substitutes, and industrial robot installations in the European Union exceeded those in Japan for the first time in 2001.

Unreliable task performance has hindered the market for industrial and service robots intended for office and home use. Toy robots, by contrast, can entertain without performing tasks reliably, and mechanical varieties, such as automata, have existed for centuries. In the 1980s microprocessor-controlled toys appeared that could speak or move in response to sounds or light, and by the 1990s more sophisticated models could recognize voices and words. A landmark development came in 1999, when Sony Corporation introduced AIBO, a robotic dog with numerous motors for its legs, head, and tail, along with two microphones and a color camera, all coordinated by a powerful microprocessor. More lifelike than any robot before it, AIBO chased colored balls, learned to recognize its owners, and adapted to its surroundings. Despite its initial price of $2,500, the first batch of 5,000 units sold out rapidly online.

The initial robotics vision programs, developed in the early 1970s, employed statistical methods to identify linear boundaries in images captured by robot cameras. They utilized sophisticated geometric reasoning to connect these lines into potential object boundaries, creating an internal representation of their environment. Additionally, geometric calculations linked object locations to the required joint angles for a robot arm to grasp them, as well as the steering and driving movements necessary for a mobile robot to navigate towards or around the object. However, this method was labor-intensive to program and often faltered when unexpected image complexities disrupted the initial processes. In the late 1970s, an effort to address these challenges by incorporating an expert system for visual analysis ultimately resulted in more cumbersome programs, replacing simpler failures with new, intricate confusions.
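The first stage of such a pipeline, picking out candidate boundary pixels before any geometric reasoning, can be illustrated with a simple gradient threshold. The tiny synthetic image and the threshold value below are assumptions for the sketch; real systems of the era used more elaborate statistical operators.

```python
# Rough sketch of an early vision first stage: mark pixels where the local
# intensity difference is large, yielding candidate object-boundary points.
# The 6x6 "image" below is synthetic; values stand in for pixel intensities.

image = [
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
]

def edge_pixels(img, threshold=4):
    """Return (row, col) pixels whose horizontal or vertical intensity
    difference exceeds the threshold."""
    edges = []
    for r in range(len(img) - 1):
        for c in range(len(img[0]) - 1):
            gx = img[r][c + 1] - img[r][c]   # horizontal gradient
            gy = img[r + 1][c] - img[r][c]   # vertical gradient
            if abs(gx) > threshold or abs(gy) > threshold:
                edges.append((r, c))
    return edges

# The dark/bright boundary between columns 2 and 3 is recovered as a
# vertical run of edge pixels at column 2.
print(edge_pixels(image))
```

Linking such pixel runs into straight lines, and lines into object outlines, was the labor-intensive geometric stage the paragraph describes.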
In the mid-1980s Rodney Brooks of the MIT AI lab launched a prominent movement that rejected the goal of having machines build internal representations of their environments. Instead, Brooks and his followers wrote computer programs composed of simple subprograms, each connecting sensor inputs to motor outputs and implementing a particular behavior, such as avoiding obstacles or moving toward a target. This approach mirrors the functioning of many insects and certain aspects of larger nervous systems. The outcome was a series of captivating insect-inspired robots; however, as with real insects, occasional sensor errors made their behavior unpredictable, and the approach proved less effective for larger robotic systems. It also offers no direct way to specify the long, intricate action sequences essential for industrial robot manipulators and, very likely, for future home robots. Nevertheless, in 2004 iRobot Corporation sold over one million robot vacuum cleaners exhibiting just such simple insect-like behaviors, a significant milestone for service robots.
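The behavior-based idea of simple subprograms linking sensors to motors, with a fixed priority ordering deciding which one acts, can be sketched as follows. The behavior names, sensor fields, and commands are invented for illustration and do not reproduce any specific robot's control code.

```python
# Minimal sketch of behaviour-based control: each behaviour maps sensor
# readings directly to a motor command (or defers by returning None), and
# a fixed priority order arbitrates. All names and values are illustrative.

def avoid_obstacle(sensors):
    """Highest priority: turn away whenever something is close ahead."""
    if sensors["front_range"] < 0.5:
        return "turn_left"
    return None  # defer to lower-priority behaviours

def seek_target(sensors):
    """Steer toward the target beacon when one is visible."""
    if sensors["beacon_bearing"] is not None:
        return "turn_right" if sensors["beacon_bearing"] > 0 else "turn_left"
    return None

def wander(sensors):
    """Lowest priority: default forward motion."""
    return "forward"

BEHAVIOURS = [avoid_obstacle, seek_target, wander]  # highest priority first

def motor_command(sensors):
    for behaviour in BEHAVIOURS:
        command = behaviour(sensors)
        if command is not None:
            return command

print(motor_command({"front_range": 0.2, "beacon_bearing": 0.8}))   # turn_left
print(motor_command({"front_range": 2.0, "beacon_bearing": 0.8}))   # turn_right
print(motor_command({"front_range": 2.0, "beacon_bearing": None}))  # forward
```

Note that no behavior builds a map or model of the world; each reacts only to the current sensor readings, which is both the strength and the limitation discussed above.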
Researchers are actively exploring various methods to enhance robots’ ability to perceive their environment and monitor their movements. A notable application of this research is in semiautonomous mobile robots designed for exploring the Martian surface. Due to the significant delays in signal transmission, these rovers must be capable of navigating short distances independently between commands from Earth.

An intriguing area for advancing fully autonomous mobile robot technology is in football (soccer). In 1993, a global consortium of researchers initiated a long-term project aimed at creating robots that could play this sport, with their progress evaluated through annual machine tournaments. The inaugural RoboCup games took place in 1997 in Nagoya, Japan, featuring teams competing in three categories: computer simulation, small robots, and midsize robots. Achieving the ability to locate and push the ball was a significant milestone, but the event fostered collaboration among participants, leading to substantial improvements in gameplay in the following years. In 1998, Sony began supplying researchers with programmable AIBOs for a new competition category, providing teams with a consistent and reliable hardware platform for software development.