Why robotics needs vision, now!
Current tech in mobile robotics is not enough anymore
Over recent years, the mobile robotics and autonomy industry has been booming. Fully autonomous operation of various ground vehicles is revolutionizing logistics, supply chain management, manufacturing, inspection and last-mile delivery. Various kinds of Automated Guided Vehicles (AGVs), Autonomous Mobile Robots (AMRs) and drones deliver improved performance and unmatched efficiency while considerably reducing operational costs.
This rapid transformation, relevant for all sectors of industry, would not be possible without technological advancements in sensing technology and autonomy algorithms. The former in particular has significantly accelerated the autonomy revolution by bringing affordable sensors to the market. Such sensors act as “the eyes” of the robots, telling them where they are and what is located in their surroundings. This information is key to making robots fully autonomous and safe.
The sensor technologies used in mobile robotics vary considerably, yet most mobile robots in industry and logistics rely on 2D laser rangefinders (LiDARs) to position themselves and to avoid obstacles. That solution works reliably in certain environments, but falls short in places with repetitive structures and no unique shapes, such as long corridors, big halls or simply outdoors.
2D LiDAR technology does not support non-flat paths, such as ramps, and often fails in dynamic environments (imagine a warehouse that is reshuffled daily) or with passers-by around the robot. As a consequence, the technology can be used reliably neither on robots driving in modern industrial settings, where autonomous machines are expected to share their operational space with human workers, nor on drones and other aerial robots. In the former case, because of the complexity and dynamics of those environments; in the latter, because a positioning system that works in 3D is required.
Furthermore, 2D LiDARs output very little information about the surroundings - it is enough to detect an obstacle at a certain location, but not to understand what exactly that obstacle is or how to react to it in a smart way.
Computer vision for even smarter robots
The shortcomings of the most common sensor types are addressed by cameras, a technology that is a rising star in the mobile robotics industry. The first successful deployments of Seegrid material handling AGVs and Starship delivery robots demonstrate that vision-based platforms can be reliable and safe. The Sevensense technology, as demonstrated in our cleaning machine and delivery projects, takes vision-based perception to yet another level. The camera images are interpreted using computer vision techniques that output not only basic range measurements, but also much richer information about the surroundings, such as object shapes, types and relative motion. Image data is also the common input to modern deep Artificial Intelligence (AI) algorithms that bring an unprecedented understanding of the environment to robotics and will shape the landscape of industry as well as research in the future.
The benefits of rich information coupled with artificial intelligence are hard to dismiss. They allow robots to precisely position themselves, interpret the environment and react accordingly. For example, they can detect even small, flat obstacles on the floor and avoid them reliably. It is also possible to detect passers-by, predict their intentions and take special safety measures around them. Finally, recognizing barcodes, tags and object types or reading gauges means robots can perform automated inventory and inspection checks simply while driving around.
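To make the last point concrete, here is a minimal, illustrative sketch of how fiducial tags could be detected in a single camera frame using OpenCV's ArUco module (assuming OpenCV 4.7 or newer). The image path and tag family are placeholders, and this is a generic example rather than part of any Sevensense software:

# Minimal sketch: detecting fiducial markers (AprilTag-family tags) in one camera
# frame with OpenCV's ArUco module. The file name and tag dictionary are
# illustrative assumptions, not part of any Sevensense API.
import cv2

image = cv2.imread("frame_from_front_camera.png", cv2.IMREAD_GRAYSCALE)

# AprilTag 36h11 is a common choice for inventory and inspection tags.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_APRILTAG_36h11)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

corners, ids, _rejected = detector.detectMarkers(image)
if ids is not None:
    for tag_id, tag_corners in zip(ids.flatten(), corners):
        print(f"Found tag {tag_id} at pixel corners {tag_corners.reshape(-1, 2)}")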
To calculate the position of the robot, the camera-based sensors are often paired with motion sensors (also known as inertial measurement units, IMUs), which provide additional information about the accelerations and rotations of the platform. They bring a supplementary degree of robustness to the robot’s navigation, particularly in low lighting conditions and during dynamic maneuvers. The combination of cameras and IMUs resembles human anatomy, with our eyes complemented by the inner ears that provide a sense of balance and help us stand upright and walk in darkness.
To profit the most from visual-inertial technology, the sensors integrated into the robots need to fulfill strict requirements in terms of quality, calibration, robustness and hardware implementation. Considerable design expertise is essential to make robots cope with harsh illumination conditions, non-flat environments and dynamic objects moving around them. The need for centimeter-level accuracy demands precise time synchronization and state-of-the-art sensors. Finally, modern vision-based robots should generally rely on multiple cameras mounted all around - to have a full 360° field of view, to stay robust against occlusions, textureless areas or direct light blinding the sensors and, last but not least, to guarantee the highest reliability by means of sensor redundancy. All of these requirements mark the challenges of the robotics of tomorrow - and all of them can be solved with state-of-the-art computer vision algorithms and high-quality sensing.
Robust sensing using Sevensense hardware
At Sevensense Robotics, we understand the challenges of building reliable visual-inertial sensing technology. The core expertise of Sevensense lies in using a combination of cameras and IMUs to provide autonomous navigation, build 3D models and perceive the environment around the robot. After several years of working with state-of-the-art algorithms, it was clear that the robotics market calls for a sensor that offers a combination of much-needed features: multi-camera support, synchronized visual-inertial data, high-sensitivity image sensors and robotics-specific exposure algorithms.
Alphasense Core is the result of our 3-year endeavour to build a cutting-edge visual-inertial sensor specifically tailored to the needs of mobile robots. It includes up to 8 precisely synchronized global-shutter cameras. All of them are triggered simultaneously and timestamped in the middle of their exposure time. The cameras use image sensors with high sensitivity and an extremely high dynamic range to guarantee the highest-quality imaging even in very dim conditions. Sevensense hardware also includes a synchronized IMU that augments the visual data and is useful for full 3D mapping and localization using visual-inertial SLAM algorithms. The data from the sensor can be streamed over a Gigabit Ethernet connection - a common interface supported by both experimental and industrial-grade robotic platforms.
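As a small illustration of the mid-exposure timestamping convention mentioned above, the sketch below shows how such a timestamp relates to the trigger instant and the exposure time. The numbers are made up for the example, and this is not the driver's actual implementation:

# Illustrative sketch of mid-exposure timestamping: the image timestamp is placed
# at the center of the exposure window so it aligns naturally with IMU samples.
# The trigger and exposure values below are hypothetical.
def mid_exposure_timestamp(trigger_time_s: float, exposure_time_s: float) -> float:
    """Return the timestamp at the middle of the exposure window."""
    return trigger_time_s + 0.5 * exposure_time_s

trigger = 12.000000       # seconds, hypothetical trigger instant
exposure = 0.008          # 8 ms exposure, e.g. in a dim scene
print(mid_exposure_timestamp(trigger, exposure))  # -> 12.004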
The Sevensense hardware sensor supports up to 8 camera modules equipped with 0.4 MPix or 1.6 MPix global-shutter image sensors, either color or grayscale. The cameras can be mounted on the robot in an arrangement that best suits the shape of the platform and its final application. The extended version of Sevensense hardware allows its cameras to be placed up to 4.5 meters away from the baseboard of the sensor. Optionally, the camera modules can be supplied in splash-proof and dust-proof cases.
However, calibrating the cameras on the vehicle requires a special toolkit, which usually slows down the deployment of R&D platforms. Therefore, we released a Development Kit that features a 5-camera Sevensense hardware setup mounted on a rigid frame. The unit is delivered pre-calibrated, both intrinsically and extrinsically, and is a perfect way to quickly start using the sensor. For convenient prototype integration and evaluation, a ROS driver is included, making it easy to deploy on research and development machines and to use hundreds of existing open-source software projects straight away.
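As a rough idea of what consuming the streamed data in ROS could look like, here is a minimal rospy sketch. The topic names are hypothetical placeholders; please refer to the driver documentation for the actual ones published by the sensor:

# Minimal ROS 1 (rospy) sketch: subscribing to camera and IMU streams.
# The topic names below are hypothetical placeholders, not the driver's
# documented topics.
import rospy
from sensor_msgs.msg import Image, Imu

def on_image(msg: Image) -> None:
    rospy.loginfo("Image %dx%d stamped at %.6f", msg.width, msg.height, msg.header.stamp.to_sec())

def on_imu(msg: Imu) -> None:
    rospy.loginfo("IMU sample at %.6f", msg.header.stamp.to_sec())

rospy.init_node("alphasense_listener")
rospy.Subscriber("/alphasense/cam0/image_raw", Image, on_image, queue_size=10)
rospy.Subscriber("/alphasense/imu", Imu, on_imu, queue_size=200)
rospy.spin()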
How can Sevensense hardware help robots?
The data output by Sevensense hardware provides a very rich representation of the robot’s surroundings that can be used to unlock the full autonomous capabilities of mobile robots and to provide understanding of the environment. Such applications of the sensor are well tested: we not only supply the Alphasense Core sensor, but also include it in the Sevensense products for robot positioning and autonomous navigation, called Alphasense Position and Alphasense Autonomy, respectively.
One of the fundamental capabilities needed to unlock autonomy is precise positioning, that is, knowing where the robot is and where it should go. Sevensense hardware perfectly fits this need and has been extensively tested with visual-inertial odometry (see the open-source frameworks maplab, ROVIO, okvis or VINS-Mono), SLAM and place recognition algorithms. Its accurately synchronized global-shutter cameras and low-noise IMU satisfy the stringent requirements of precise state estimation.
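For illustration, the sketch below shows one basic step that such visual-inertial pipelines typically perform: grouping the IMU samples that arrive between two consecutive image timestamps so they can be preintegrated. The timestamps are made up, and the exact interfaces differ between the frameworks listed above:

# Sketch: collecting the IMU samples between consecutive image timestamps, a
# typical input to a visual-inertial odometry front-end. The timestamps below
# (20 Hz camera, 200 Hz IMU, in microseconds) are hypothetical.
import numpy as np

image_stamps_us = np.array([0, 50_000, 100_000])
imu_stamps_us = np.arange(0, 100_001, 5_000)

for t_prev, t_curr in zip(image_stamps_us[:-1], image_stamps_us[1:]):
    # Half-open interval (t_prev, t_curr]: each IMU sample is used exactly once.
    mask = (imu_stamps_us > t_prev) & (imu_stamps_us <= t_curr)
    print(f"Frame at {t_curr} us: {mask.sum()} IMU samples to preintegrate")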
Another common application of Sevensense hardware is obstacle avoidance and local environment perception. Support for up to 8 cameras in arbitrary configurations makes it straightforward to arrange them all around the robot and estimate scene depth, either with stereo vision or with monocular depth estimation. The resulting volumetric map enables the algorithms to plan feasible paths and navigate smartly around the environment to reach the final destination.
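As an illustrative example of stereo depth estimation (a generic sketch, not the Sevensense perception stack), the following code computes a dense depth map from a rectified stereo pair with OpenCV's semi-global matching. The focal length and baseline are placeholder values that would normally come from the camera calibration:

# Sketch: dense depth from a rectified stereo pair using OpenCV's semi-global
# block matching. File names and calibration values are illustrative placeholders.
import cv2
import numpy as np

left = cv2.imread("cam_left_rectified.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("cam_right_rectified.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,        # must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,
    P2=32 * 5 * 5,
)

# StereoSGBM returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

fx_px = 700.0        # focal length in pixels (placeholder)
baseline_m = 0.12    # distance between the two cameras (placeholder)
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = fx_px * baseline_m / disparity[valid]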
The rich visual data from Sevensense hardware can also be used to unlock higher-level understanding of the environment and of specific objects around the robot. Common deep learning and AI algorithms enable semantic understanding or segmentation of the scene to support inspection or manipulation tasks.
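As a generic example of what such algorithms look like in practice, the sketch below runs an off-the-shelf pretrained segmentation model from torchvision (version 0.13 or newer assumed) on a single frame. It is not the Sevensense pipeline, and the image path is a placeholder:

# Sketch: off-the-shelf semantic segmentation on one camera frame with a
# pretrained torchvision model. Generic example; the file name is a placeholder.
import torch
from PIL import Image
from torchvision.models.segmentation import deeplabv3_resnet50, DeepLabV3_ResNet50_Weights

weights = DeepLabV3_ResNet50_Weights.DEFAULT
model = deeplabv3_resnet50(weights=weights).eval()
preprocess = weights.transforms()

frame = Image.open("frame_from_front_camera.png").convert("RGB")
batch = preprocess(frame).unsqueeze(0)

with torch.no_grad():
    output = model(batch)["out"]            # (1, num_classes, H, W) logits
labels = output.argmax(dim=1)[0]            # per-pixel class ids
print("Detected classes:", labels.unique().tolist())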
Finally, visual data is very natural for human operators to interpret, which facilitates introspection into the robot’s operation and surveillance, and also comes in useful for the increasingly common human-in-the-loop remote operations that can rely on the same sensor suite as the autonomy algorithms.
The gist
Sensing technologies and perception algorithms have made enormous progress in recent years and are changing the face of the mobile robotics industry and the landscape of robotics research. Robots are required to be smarter than ever and to actually understand their surroundings instead of merely avoiding collisions along a fixed path. Data-hungry AI algorithms call for rich visual data. None of this can be satisfied by common 2D LiDARs or marker-based systems, but it is delivered by modern visual-inertial technology.
The Sevensense hardware sensor developed by Sevensense Robotics meets the exact needs of the mobile robotics industry and equips robots with the power of sight, enabling them to profit from cutting-edge algorithms for mapping, localization, depth estimation, collision avoidance and semantic understanding.
Whether it’s a ground or an aerial platform, Sevensense hardware can augment any mobile robotics application with the highest-quality visual data, preparing it for the AI revolution and the transition to Industry 4.0.
You can find more information about Sevensense hardware on its product page.