How AI and Robotics Are Changing Our World

Adwaitkelkar
7 min read · Dec 29, 2021

Artificial Intelligence is making new strides, and there is talk of a new evolution that could fundamentally change life on our planet. Artificial Intelligence and robotics have the potential to revolutionize every aspect of daily life: work, mobility, medicine, the economy, and communication. Let's take a look at some remarkable real-life AI and robotics developments and the technology behind them.

1. Amazon Go

A new type of supermarket opened its doors in the US a year ago: Amazon Go. All you need here is an app. Hold your phone up to the scanner at the entrance and you're in. Intelligent image recognition captures your every move: what you take off the shelf, what you put back, and what you take with you. Best of all, there are no checkout lines; you simply grab your groceries and walk out, and the bill is sent to the app. Let's look at how it works.

Computer Vision and Deep learning

The moment you scan in, your account is associated with your physical presence, and cameras begin tracking your every move.

The cameras use computer vision, the practice of letting machines "see" what is in front of them and decide what an object is, to recognize when an item has been taken from a shelf, and by whom. If an item is returned to the shelf, the system deletes it from the customer's virtual basket. By employing a network of cameras, Amazon can follow customers throughout the store, guaranteeing that the right items are billed to the right shopper as they walk out, without needing to use facial recognition.
Deep learning is at the heart of computer vision. At their most basic, these systems are sophisticated pattern-recognition algorithms that let machines draw inferences from massive datasets.
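Amazon has not published how Go's tracking system works internally, but the "virtual basket" behavior described above can be sketched in a few lines. The event names and structure here are illustrative assumptions, not Amazon's actual implementation:

```python
class VirtualBasket:
    """Toy sketch of a per-shopper virtual basket driven by vision events."""

    def __init__(self):
        self.items = {}  # product_id -> quantity

    def on_take(self, product_id):
        """The vision system saw the shopper take an item off a shelf."""
        self.items[product_id] = self.items.get(product_id, 0) + 1

    def on_return(self, product_id):
        """The item was put back on the shelf: remove it from the basket."""
        if self.items.get(product_id, 0) > 0:
            self.items[product_id] -= 1
            if self.items[product_id] == 0:
                del self.items[product_id]

    def bill(self, prices):
        """Total charged when the shopper walks out of the store."""
        return sum(prices[p] * q for p, q in self.items.items())


basket = VirtualBasket()
basket.on_take("milk")
basket.on_take("bread")
basket.on_return("bread")  # put the bread back; only the milk is billed
print(basket.bill({"milk": 2.50, "bread": 1.80}))  # 2.5
```

The hard part in the real system is, of course, producing those take/return events reliably from camera footage; that is where the deep-learning models do their work.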

There are some human employees working behind the scenes at the Go store to help train the algorithms and confirm when they have correctly identified a product. Humans also restock shelves, help with product placement, and work as fresh-food chefs. However, the majority of the data collected by the cameras is analyzed in the firm's large data centers, in the same way the Amazon Echo recognizes voices.

2. Self Driving Cars

Driverless automobiles sound like a thing of the future, but owing to advances in autonomous-vehicle technology, that future may be just around the corner. Car manufacturers have been introducing driverless features to conventional vehicles for some time now, much like the incremental adoption of electric cars, with hybrid versions reaching the road first. Semi-autonomous vehicles are already on the road today: cars and trucks with adaptive cruise control, brake assist, and self-parking technology. And industry experts expect fully autonomous vehicles to be available within a few years.

How do self-driving cars see?

To be safe, self-driving cars must be able to recognize nearby objects, other vehicles, pedestrians, road contours, and everything else that human drivers consider when deciding how to steer, accelerate, and brake. But how do machines mimic human vision? They rely primarily on three sensing and imaging technologies: radar, lidar, and cameras.

Radar

RADAR (Radio Detection and Ranging) technology has been used in automobiles for many years and is the technology that most commonly powers adaptive cruise control and automatic emergency braking. It works by emitting radio waves that bounce off distant surfaces and return; the round-trip time of each echo gives the distance to the object.
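The ranging arithmetic is simple: the radio wave makes a round trip, so the distance is the echo delay times the speed of light, divided by two. A minimal sketch:

```python
# Radar ranging: distance = (speed of light * round-trip time) / 2,
# because the radio wave travels to the object and back.
C = 299_792_458  # speed of light in m/s

def radar_distance(round_trip_seconds):
    """Distance to the reflecting object, in meters."""
    return C * round_trip_seconds / 2

# An echo that returns after 1 microsecond came from roughly 150 m away.
print(radar_distance(1e-6))  # ~149.9
```

Radar units also measure relative speed directly from the Doppler shift of the returned wave, which is why it suits adaptive cruise control so well.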

Radar’s advantages:

Radar can detect things hundreds of yards away and identify their size and speed.

Radar’s limitations:

Radar technology cannot "see" detail; it captures images in very low resolution, which means it cannot identify what objects are.

Lidar

In contrast to radar, Light Detection and Ranging (LIDAR) scans the surroundings using laser-light pulses rather than radio waves. Lidar works by firing millions of laser pulses each second; the pulses reflect off object surfaces back to a receiver, which builds a three-dimensional representation of the car's surroundings.
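Each lidar return pairs a measured range (from the pulse's time of flight) with the laser's pointing angles at that instant; converting those to Cartesian coordinates is how the 3-D point cloud described above is built up. A minimal sketch of that conversion, using a standard spherical-to-Cartesian formula (real lidar drivers also correct for sensor mounting and motion):

```python
import math

def lidar_point(range_m, azimuth_deg, elevation_deg):
    """Convert one lidar return (range + beam angles) to an (x, y, z) point."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)  # forward
    y = range_m * math.cos(el) * math.sin(az)  # left
    z = range_m * math.sin(el)                 # up
    return (x, y, z)

# A return 10 m straight ahead of the sensor:
print(lidar_point(10.0, 0.0, 0.0))  # (10.0, 0.0, 0.0)
```

Millions of such points per second, accumulated across a full sweep, yield the dense 3-D scene the car's software reasons about.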

Lidar’s advantages:

Lidar can see in more detail than radar. It can tell whether a person is facing forward or backward, or whether a two-wheeled vehicle is a bicycle or a motorbike, allowing the car's computer to better predict how any given object will behave.

Lidar’s limitations:

At the moment, lidar is the most costly sensor choice for automakers, and because the sensors frequently rotate, it needs more moving parts, which means more opportunities for things to go wrong. Weather also limits lidar: it does not work in fog or dust, so cars equipped with lidar technology need a secondary sensor.

Cameras:

To see in high resolution, self-driving cars employ camera technology. Cameras are used to read road signs and markings. An array of lenses surrounds the self-driving car, offering wide-angle views of close-up surroundings as well as longer, narrower views of what's ahead.

Cameras’ advantages:

Cameras provide the most detailed picture of a vehicle's surroundings, and the highest-resolution images of the three sensor types.

Camera limitations:

Cameras do not perform well in all weather conditions, and unlike radar and lidar, which return numerical distance data directly, camera technology requires the computer to infer measurements from an image to determine how far away an object is.

How do self-driving cars function?

1. Creation of a map

To comprehend its surroundings, an autonomous car must build a map of its environment and locate itself within it. Typically, lidar and camera data are used to scan the surroundings, and the car's computer then fuses the sensor, GPS, and IMU inputs to generate the map.
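The fusion step can be hinted at with a toy complementary filter: GPS fixes are noisy but do not drift, while IMU dead reckoning is smooth but drifts over time, so the two are blended. This 1-D sketch uses an illustrative gain and made-up readings; real vehicles use full Kalman-style filters over position, velocity, and orientation:

```python
# Toy complementary filter: blend an IMU-predicted position (smooth,
# drifting) with a GPS fix (noisy, drift-free). ALPHA is illustrative.
ALPHA = 0.9  # trust placed in the IMU prediction

def fuse(imu_predicted, gps_measured):
    """Blend the two 1-D position estimates."""
    return ALPHA * imu_predicted + (1 - ALPHA) * gps_measured

position = 0.0
# Each step: the IMU says we moved 1.0 m; GPS reports a noisy absolute fix.
for imu_step, gps_fix in [(1.0, 1.2), (1.0, 2.1), (1.0, 2.9)]:
    position = fuse(position + imu_step, gps_fix)
print(round(position, 3))  # 3.015
```

The estimate tracks the IMU's smooth motion while the small GPS corrections keep it from drifting away from the true trajectory.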

2. Map out your route

Path planning is the process of determining the safest and shortest path to a destination. A driverless automobile must consider not just navigation but also static and moving obstacles, as well as maneuvers like changing lanes and passing other cars. Path planning starts with a long-term plan, similar to the directions we get today when we enter a destination into a map application. Then, as the car moves, short-term plans are produced and continually refined.
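The core search behind such planners can be sketched with A* on a grid. This is a deliberately minimal version (real planners work in continuous space and account for vehicle dynamics and moving obstacles):

```python
import heapq

def astar(grid, start, goal):
    """Shortest 4-connected path on a grid of 0 (free) / 1 (obstacle)."""
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan-distance heuristic (admissible on a 4-grid)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # (priority, cost, pos, path)
    seen = set()
    while frontier:
        _, cost, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new = (cost + 1 + h((nr, nc)), cost + 1, (nr, nc), path + [(nr, nc)])
                heapq.heappush(frontier, new)
    return None  # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],   # a wall of obstacles forces a detour
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```

In a car, the "grid" is replaced by a continuously updated map of drivable space, and the search is re-run many times per second as the short-term plan is refined.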

How Artificial Intelligence (AI) Will Transform the Robotics Industry in the Future

AI-powered robots will be able to take on difficult and risky tasks in the future. Although this technology may appear far-fetched, AI robots are going to transform how we live and work.

Managing Dangerous Tasks

Future robots will be capable of doing dangerous work such as handling radioactive materials or defusing explosives. Furthermore, AI robots can function in harsh conditions such as extremely loud workplaces, searing temperatures, and hazardous environments. As a result, AI robots will save many lives.

Humans and Robots in a Shared Workplace

In a shared workplace, humans and robots will interact. In fact, some businesses have already begun to set the tone for this transformation by employing robots as shopkeepers. Humans will engage more with autonomous robots that perform activities such as refilling workstations.

Robots will get smarter and more efficient over time, making them safer to work with in the same place. Advances in artificial intelligence for robotics will be key to the revolution required to enable robots to do some complicated cognitive functions. As a result, the trend will be widely adopted.

The Advantages and Disadvantages of AI Robotics in the Future

While AI has the potential to improve company operations and life in general, some futurists have analyzed the potential downsides of AI robots in the future. Simply put, what would happen if machines outperformed humans in intelligence? Would they be able to topple the present socioeconomic order?

AI robots are undeniably more cost efficient in the long term, but the technology’s intricacy necessitates large initial investments. As the need to build proprietary solutions decreases, this technology will become less expensive for businesses.

The margins of error will continue to shrink as more powerful AI solutions are implemented into robots, making it simpler to accomplish complicated tasks with more regularity and autonomy.

One disadvantage that has received a lot of attention is the possibility of job losses as a result of powerful AI systems in the workplace. In the robotics business, low-skill professions may be assigned entirely to robots, leaving only creative roles.

Pros

Long-term cost-effectiveness
Reduced errors
Risk mitigation

Cons

Lack of creativity
Possible job losses
High start-up costs

A Major Shift Is Underway

Both AI and machine learning are expected to have a significant impact on the robotics sector. Judging by existing applications, the technology is clearly still in its early stages.

Nonetheless, there are no signs that AI and machine learning will stop pushing the robotics industry's capabilities to new heights.


Adwaitkelkar

Passionate about machine learning, artificial intelligence, and finance; seeking opportunities to deepen and apply my knowledge in these fields.