"Created by nature, the human skin shows powerful sensing capabilities that have been pursued by scientists for a very long time. However, it is challenging for today's technologies to replicate the spatial arrangement of the complex 3D microstructure of human skin. A research team led by Professor Yihui Zhang from Tsinghua University has developed a three-dimensionally architected electronic skin that mimics human mechanosensation for fully-decoupled sensing of normal force, shear force and strain." #eskin #electronicskin #robotics
William (Bill) Kemp’s Post
More Relevant Posts
-
HMD & AR, strong bkg Integrated Optics & Nanostructures - Editorial Board J. of Future Robot Life - Independent Expert Evaluator EX2014D187986 - H2020 - teacher
"Emotional studies in dogs and cats and their estimation techniques: an engineering perspective", Advanced Robotics, June 13th 2024 Dogs and cats have exceptionally developed sensory systems and abilities to recognize human signals and emotional states. It makes them invaluable in roles such as working dogs and therapyanimals in human society. Understanding each other’s emotional state is essential to working with them effectively. However, the low accuracy of human emotional estimation in dogs and cats is a significant issue. Due to individual differences and cognitive biases affecting human subjective assessments, automatic emotional estimation is crucial. [...]
Emotional studies in dogs and cats and their estimation techniques: an engineering perspective | HMD hot spot
scoop.it
-
Chief Technology & Information Officer | Global Corporate Strategy and Governance | Digital Transformation & Program Management | Senior ICT Consulting | ICT Service Delivery & Operations
Metamaterials are artificial materials engineered to have properties not found in nature; they are usually designed at the microscale to manipulate sound, light, energy propagation, and other physical and mechanical phenomena. Metamaterials can exhibit properties like a negative refractive index, which allows them to bend light in unusual ways, leading to applications such as (almost) "invisible" cloaks and super-resolution imaging. In KEF loudspeakers and Dan Clark headphones, for example, metamaterials are used in selected products to absorb unwanted sound energy within the acoustic cavity, reducing distortion and improving sound quality, helping to create a more accurate and immersive listening experience. Now we have a new metamaterial that is the best shock absorber yet, setting a new efficiency world record of 75% energy absorption (vs. the previous record of 71%). The new material and structure could enable new vehicle bumper designs, new protective athletic gear (e.g., helmets), or new packing peanuts. The list of potential uses goes on; the challenge is to strike the delicate balance of a shape and structure that is not so hard that it damages whatever it is trying to protect, yet still strong enough to absorb the impacts that come its way. In general, metamaterials will have a wide range of potential applications in fields including telecommunications, medicine, aerospace, and energy. #dkantar #dkantarposts #metamaterials
Autonomous robot invents the world's best shock absorber
newatlas.com
-
Biomedical Engineering PhD Candidate at University College Dublin focusing on the design of tactile sensors. Skills in electronics design, multiple programming languages, and machine learning applications.
I am very excited to share my new publication on my PhD work. This paper introduces our LiVec finger and novel optical 6-axis distributed force and displacement sensor for dexterous robotic manipulation. The distributed sensor can precisely estimate the local XYZ force and displacement at ten distinct locations and provide the global XYZ force and torque measurements. Its compact size, comparable to that of a human thumb, and minimal thickness allow seamless integration onto existing robotic fingers, eliminating the need for complex modifications to the gripper. The design uses our novel optical transduction approach of light angle and intensity sensing to infer force and displacement from deformations of the individual sensing units that form the overall sensor, providing distributed six-axis sensing. I want to thank my co-authors, David Cordova Bulens and Stephen Redmond, for their supervision and work on this. Read the paper to find out more, and check out our video of the sensor! https://lnkd.in/girb3f2p, https://lnkd.in/gN43asCP - Video S1: Real-time demonstration of LiVec finger’s sensing capabilities. #Sensors #Distributed #3D #Optical #Robotics
Design, Fabrication, and Characterization of a Novel Optical Six-Axis Distributed Force and Displacement Tactile Sensor for Dexterous Robotic Manipulation
mdpi.com
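To illustrate what distributed-to-global aggregation can look like in principle, here is a minimal sketch that sums hypothetical local force readings into a global force and torque. The unit positions and readings below are made-up values, and this is not the LiVec calibration or inference pipeline:

```python
# Minimal sketch (not the authors' method): aggregating hypothetical local
# XYZ force readings from ten sensing units into a global force and torque.
import numpy as np

# Assumed positions of the ten sensing units in the finger frame (metres).
unit_positions = np.random.default_rng(0).uniform(-0.01, 0.01, size=(10, 3))

# Hypothetical local force estimates at each unit (newtons).
local_forces = np.random.default_rng(1).normal(0.0, 0.5, size=(10, 3))

# Global force is the sum of local forces; global torque is the sum of r x F.
global_force = local_forces.sum(axis=0)
global_torque = np.cross(unit_positions, local_forces).sum(axis=0)

print("Global force  [N]:", global_force)
print("Global torque [N*m]:", global_torque)
```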
-
The performance of 3D object detectors is often limited by the diversity and quantity of human annotations in real-world applications such as robotics or autonomous driving. If you want to boost the performance of 3D detectors without the cumbersome process of manual data annotation, make sure to check out the latest research from the #BCAI team in Sunnyvale (CA) in collaboration with Stanford University and the University of Southern California – featured at #NeurIPS23 – #3D Copy-Paste. 🌟 #3D Copy-Paste is an innovative data augmentation technique to automatically generate large-scale 3D annotations. It copies 3D objects from an existing large-scale dataset (e.g., #Objaverse) and pastes them into an indoor scene in a physically plausible manner, while addressing the challenges of collision handling, illumination, shading, and geometric consistency. 🔍 Methodology: The #3D Copy-Paste technique consists of three steps: 1) identify all suitable planes for 3D object insertion, 2) estimate the object's precise pose and location to prevent physical collisions, 3) calculate spatially varying illumination to ensure the coherent integration of the object's lighting and shadows within the scene. 📊 Performance Highlights: With the #3D Copy-Paste data augmentation method, the monocular 3D object detection model ImVoxelNet showed state-of-the-art performance on the SUN RGB-D dataset, achieving a mAP of 43.79%. To learn more about #3D Copy-Paste, read the full research paper here 📑➡ https://bit.ly/41OURiJ #NeurIPS23, #BCAI, #3D Copy-Paste, #BoschResearch Yunhao Ge, Hong-Xing Yu, Henry Cheng Zhao, Yuliang Guo, Xinyu H., Liu Ren, Laurent Itti, Jiajun Wu
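As an illustration of step 2 (collision-free placement), here is a toy sketch that samples a footprint for an inserted object on a single support plane using axis-aligned bounding boxes only. The plane, obstacle, and object dimensions are assumed placeholders, and this is not the released 3D Copy-Paste code:

```python
# Toy sketch of collision-free placement on a support plane using AABBs.
# Illustration of the idea only, not the BCAI 3D Copy-Paste implementation.
import random

def aabb_overlap(a, b):
    """True if two ground-plane AABBs (xmin, ymin, xmax, ymax) overlap."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def sample_placement(plane, obstacles, obj_size, tries=100, rng=random):
    """plane: (xmin, ymin, xmax, ymax); obstacles: list of AABBs; obj_size: (w, d)."""
    w, d = obj_size
    for _ in range(tries):
        x = rng.uniform(plane[0], plane[2] - w)
        y = rng.uniform(plane[1], plane[3] - d)
        candidate = (x, y, x + w, y + d)
        if not any(aabb_overlap(candidate, o) for o in obstacles):
            return candidate            # collision-free footprint for the pasted object
    return None                         # no valid placement found on this plane

plane = (0.0, 0.0, 4.0, 3.0)            # a 4 m x 3 m floor plane (assumed)
obstacles = [(1.0, 1.0, 2.0, 2.0)]      # footprint of existing furniture (assumed)
print(sample_placement(plane, obstacles, obj_size=(0.6, 0.6)))
```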
-
#snsinstitutions #snsdesignthinkers #designthinking
ARTICLE ON: KINEMATICS IN ROBOTICS
The study of kinematics in a robotic arm involves examining how each part of the arm moves and its position at any given time. It is like understanding the "anatomy of movement" for robots. By analyzing the kinematics, engineers can design robotic arms that perform tasks with precision and efficiency. This knowledge is crucial for programming the arm to move accurately and reach specific points in its workspace.
1. Kinematics of a robotic arm involves studying the movement and position of its components.
2. It combines mathematics, physics, and engineering principles.
3. Understanding kinematics helps in designing and controlling robotic arms for various applications.
4. Engineers analyze how each part of the arm moves to ensure precision.
5. Robotic arm kinematics is essential for programming accurate movements.
6. It is like understanding the "anatomy of movement" for robots.
7. Designing robotic arms with efficient movement requires in-depth kinematic knowledge.
8. The study of kinematics is crucial in developing robotic arms for tasks like manufacturing and healthcare.
9. Engineers use kinematics to ensure the arm reaches specific points in its workspace.
10. Precise movement control is achieved through a thorough understanding of kinematics.
11. Kinematics helps in optimizing the performance of robotic arms.
12. Analyzing kinematics allows for the creation of robotic arms with improved efficiency.
13. Robotic arm kinematics is a fascinating field that drives innovation in robotics.
14. Engineers use kinematic principles to enhance the capabilities of robotic arms.
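To make the idea concrete, here is a minimal forward-kinematics sketch for a planar two-link arm; the link lengths and joint angles are arbitrary illustrative values, not taken from the article:

```python
# Forward kinematics of a planar two-link arm: given joint angles, compute the
# end-effector position. Standard textbook relation, shown here for illustration.
import math

def forward_kinematics_2link(theta1, theta2, l1=0.3, l2=0.25):
    """Return (x, y) of the end effector; angles in radians, link lengths in metres."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

print(forward_kinematics_2link(math.radians(30), math.radians(45)))
```

The inverse problem (finding joint angles that reach a desired point) is what controllers solve when programming the arm to hit specific points in its workspace.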
-
Robotics Voice | Resource sharing | Emerging Technology | Advance Research | Automation | Robotics and AI
🤖🤖🤖 3D mapping by monocular camera 🤖🤖🤖 Monocular visual odometry approaches that rely purely on geometric cues are prone to scale drift and require sufficient motion parallax in successive frames for motion estimation and 3D reconstruction. Creating 3D maps with a monocular camera can reduce development costs and make robots more reliable. This video is a demonstration of Deep Virtual Stereo Odometry (DVSO). Deep Virtual Stereo Odometry (DVSO), from the Technical University of Munich, is an approach to creating 3D maps with a monocular camera. DVSO exceeds previous monocular and deep-learning-based methods in accuracy, and it achieves performance comparable to state-of-the-art stereo methods while relying on only a single camera. The authors propose leveraging deep monocular depth prediction (StackNet) to overcome the limitations of geometry-based monocular visual odometry. To this end, they incorporate deep depth predictions into Direct Sparse Odometry as direct virtual stereo measurements. For depth prediction, they design a novel deep network that refines the predicted depth from a single image in a two-stage process. They train their network in a semi-supervised way on photoconsistency in stereo images and on consistency with accurate sparse depth reconstructions from Stereo DSO. Paper & Supplementary Material: https://lnkd.in/gVQ5Df6a #robotics #ros #3dmapping #monocamera #opencv #technology #selfdrivingcars #autonomous
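For intuition, the sketch below shows the photoconsistency idea in its simplest form: warp the right stereo image into the left view using a predicted disparity map and penalise the intensity difference. It is a simplified nearest-pixel version for illustration only, not the actual DVSO training code:

```python
# Simplified photoconsistency loss: warp the right image into the left view
# with a predicted left-disparity map and compare intensities (L1 error).
import numpy as np

def photoconsistency_loss(left, right, disparity):
    """left, right: HxW grayscale images; disparity: HxW predicted left disparities (pixels)."""
    h, w = left.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    # For rectified stereo, the left pixel x corresponds to right pixel x - d.
    src_x = np.clip(np.round(xs - disparity).astype(int), 0, w - 1)
    warped = np.take_along_axis(right.astype(float), src_x, axis=1)
    return np.abs(left.astype(float) - warped).mean()
```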
-
🤖What is Camera Calibration?🤖
Camera calibration is the process of determining the intrinsic and extrinsic parameters of a camera to enable accurate mapping of 3D points to 2D image coordinates.
Intrinsic parameters: internal characteristics of the camera, e.g. focal length, principal point, lens distortion coefficients.
Extrinsic parameters: position and orientation of the camera in the 3D world, e.g. rotation matrix, translation vector.
Why is it important? It enhances accuracy and compensates for lens distortion.
Example: a robotic system uses a camera to navigate and interact with its environment. Outcome: accurate mapping of the environment for better navigation. Parameters: intrinsic: focal length (fx, fy), principal point (cx, cy), lens distortion coefficients (k1, k2, k3, p1, p2); extrinsic: rotation matrix (R), translation vector (t).
Where is it used? Computer vision: object detection, image stitching, augmented reality, robotics, and 3D reconstruction.
How to do it?
1. Capture calibration images: photograph a known calibration pattern from different angles and distances.
2. Image analysis: identify key points on the calibration pattern and extract their 2D coordinates.
3. Calibration algorithm: use a calibration algorithm to estimate the intrinsic and extrinsic parameters.
4. Optimisation: minimise the difference between observed and predicted image points.
5. Validation: evaluate calibration accuracy using additional images or validation patterns.
#robotics #cameracalibration #calibration
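A minimal calibration sketch with OpenCV, following the steps above; the image folder, chessboard size, and square size are assumed placeholders:

```python
# Chessboard-based camera calibration with OpenCV (steps 1-4 above).
import glob
import cv2
import numpy as np

pattern = (9, 6)                                   # inner corners of the chessboard (assumed)
square = 0.025                                     # square size in metres (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

objpoints, imgpoints = [], []
for path in glob.glob("calib_images/*.png"):       # placeholder folder of calibration shots
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        objpoints.append(objp)                     # known 3D pattern points
        imgpoints.append(corners)                  # detected 2D image points

# Estimate intrinsics (camera matrix, distortion) and per-view extrinsics (rvecs, tvecs).
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
print("Reprojection RMS error:", ret)
print("Camera matrix:\n", mtx)
```

The returned RMS reprojection error is the usual first validation check (step 5); evaluating on images not used for calibration gives a stronger one.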
-
An e-skin that can detect tactile information and produce tactile feedback. In recent years, materials scientists and engineers have introduced increasingly sophisticated materials for robotic and prosthetic applications. These include a wide range of electronic skins, or e-skins, designed to sense the surrounding environment and artificially reproduce the sense of touch. #TechTrends #TechInnovationsDaily #DigitalFrontiers #FutureTechInsights
March 7th 2024
techxplore.com
-
Adjunct Assistant Professor in Electrical and Computer Engineering (ECE) at Georgia Institute of Technology
An ultrawide field-of-view pinhole compound eye using hemispherical nanowire array for robot vision
Insects have compound eyes with a wide field of view and motion-tracking capabilities that have served as inspiration for roboticists. Previous attempts to transfer a microlens array onto a curved surface have suffered from complications during the transfer process. As an alternative approach, Zhou et al. developed a lens-free compound eye by combining a 3D-printed hemispherical pinhole structure with a perovskite nanowire photodetector array. The pinhole compound eye exhibited a wide field of view and could accurately locate targets owing to its good angular selectivity and wide spectral response in monocular and binocular configurations. To demonstrate dynamic motion tracking, the compound eye was incorporated into a drone and successfully tracked a moving quadruped robot in real time.
Drawing inspiration from biological compound eyes, artificial vision systems with a diverse range of visual functions have recently come to the fore. However, most of these artificial systems rely on transformable electronics, which suffer from the complexity and constrained geometry of global deformation, as well as potential mismatches between optical and detector units. Here, a unique pinhole compound eye that combines a three-dimensionally printed honeycomb optical structure with a hemispherical, all-solid-state, high-density perovskite nanowire photodetector array is presented. The lens-free pinhole structure can be designed and fabricated with an arbitrary layout to match the underlying image sensor. Optical simulations and imaging results matched well with each other and substantiated the key characteristics and capabilities of the system, which include an ultrawide field of view, accurate target positioning, and a motion-tracking function. The potential of the unique compound eye for advanced robotic vision was further demonstrated by successfully completing a moving-target tracking mission. https://lnkd.in/eFsa9xh8
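As a toy illustration of binocular target positioning (not the localisation method used in the paper), intersecting the bearing rays reported by two eyes with a known baseline gives the target position; all values below are hypothetical:

```python
# Toy binocular triangulation: two eyes at (0, 0) and (baseline, 0), each reporting
# a bearing angle to the target measured from the baseline; intersect the two rays.
import math

def triangulate(baseline, angle_left, angle_right):
    """angle_left: from the baseline at the left eye; angle_right: from the baseline
    at the right eye, measured back toward the left eye (both in radians)."""
    t_l = math.tan(angle_left)                 # ray from left eye:  y = t_l * x
    t_r = math.tan(math.pi - angle_right)      # ray from right eye: y = t_r * (x - baseline)
    x = t_r * baseline / (t_r - t_l)
    y = t_l * x
    return x, y

print(triangulate(baseline=0.1, angle_left=math.radians(60), angle_right=math.radians(70)))
```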
-
MIT CSAIL researchers enhance robotic precision with sophisticated tactile sensors in the palm and agile fingers, setting the stage for improvements in human-robot interaction and prosthetic technology. | Click below to read the full article on Sunalei
Robotic palm mimics human touch
https://sunalei.org
WNE4 Lab / International Funding (3mo): Interesting!