WO2022226529A1 - Distributed multi-vehicle localization for GPS-denied environments - Google Patents
- Publication number
- WO2022226529A1 (PCT application PCT/US2022/071860)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- map
- vehicle
- vehicles
- controller
- positioning system
- Prior art date
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/38—Electronic maps specially adapted for navigation; Updating thereof
- G01C21/3804—Creation or updating of map data
- G01C21/3833—Creation or updating of map data characterised by the source of data
- G01C21/3841—Data obtained from two or more sources, e.g. probe vehicles
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/02—Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
- G01B11/026—Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness by measuring distance between sensor and object
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/26—Measuring arrangements characterised by the use of optical techniques for measuring angles or tapers; for testing the alignment of axes
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/28—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Traffic Control Systems (AREA)
Abstract
A vehicle positioning system, method and software product for a vehicle are disclosed. The system includes a camera disposed on a vehicle for reading an optic label disposed on a landmark or fixed object and capturing an image of a shape also disposed on the landmark. A controller is configured to correct a local map of the vehicle at least partly based upon map information read from the optic label so that the corrected local map references a global coordinate system. The controller is further configured to selectively share the corrected local map with neighboring vehicles within a communication range to collaborate with the neighboring vehicles in constructing a global map, and to make the constructed global map available for subsequent reuse.
Description
DISTRIBUTED MULTI-VEHICLE LOCALIZATION FOR GPS-DENIED ENVIRONMENTS

TECHNICAL FIELD

[0001] The present disclosure relates to vehicle positioning systems, and more specifically to vehicle positioning systems for determining a position of each of a plurality of vehicles in the absence of external signals and information.

BACKGROUND

[0002] A vehicle's global positioning system utilizes external signals broadcast from a constellation of GPS satellites. The position of a vehicle is determined based on the received signals for navigation and, increasingly, for autonomous vehicle functions. In some instances, a reliable GPS signal may not be available. However, autonomous vehicle functions still require sufficiently precise positioning information.

[0003] The background description provided herein is for the purpose of generally presenting a context of this disclosure. Work of the presently named inventors, to the extent it is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure.

SUMMARY

[0004] A vehicle positioning system and method are disclosed. According to an example embodiment, the system includes a controller of at least one vehicle configured to receive image data from at least one monocular camera disposed on the at least one vehicle. The controller is further configured to generate a 3D map of an area of interest based on the image data. The 3D map includes global coordinates. The controller is also configured to provide the 3D map for use by other vehicles for localizing the other vehicles.

[0005] The controller is further configured to observe a landmark in the image and determine the global coordinates based on information associated with the landmark.

[0006] In one aspect, the at least one vehicle may include a plurality of vehicles, with the controller of each vehicle of the plurality of vehicles generating a 3D map and a covariance associated therewith. The 3D map provided for use by the other vehicles is based on a fusion of the 3D maps generated by the vehicles and the associated covariances.

[0007] In another aspect, the at least one vehicle comprises a plurality of vehicles, the controller of each vehicle of the plurality of vehicles generating a 3D map. The 3D map provided for use by the other vehicles is based on a fusion of the 3D maps generated by the vehicles and on a consensus algorithm applied to the 3D maps generated by the controllers of the plurality of vehicles.

[0008] The 3D map is a 3D point cloud, wherein each point in the 3D map is referenced to a global coordinate system and has a feature descriptor.

[0009] The controller is configured to provide the 3D map for use by the other vehicles by sending the 3D map to a static entity, the other vehicles accessing the 3D map via the static entity.

[0010] The vehicle localizing method includes receiving image data from at least one monocular camera disposed on the at least one vehicle. A 3D map of an area of interest is generated based on the image data. The 3D map includes global coordinates. The method further includes providing the 3D map for use by other vehicles for localizing the other vehicles.

[0011] The method may further include observing a landmark in the image and determining the global coordinates based on information associated with the landmark.

[0012] In one aspect of the method, the at least one vehicle may include a plurality of vehicles. The controller of each vehicle of the plurality of vehicles generates a 3D map and a covariance associated therewith. The 3D map provided for use by the other vehicles is based on a fusion of the 3D maps generated by the vehicles and the associated covariances.

[0013] In another aspect of the method, the at least one vehicle includes a plurality of vehicles, the controller of each vehicle of the plurality of vehicles generating a 3D map, wherein the 3D map provided for use by the other vehicles is based on a fusion of the 3D maps generated by the vehicles and on a consensus algorithm applied to the 3D maps generated by the controllers of the plurality of vehicles.
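As a rough structural illustration of how the responsibilities summarized above could be organized in software, the following minimal Python sketch outlines the per-vehicle flow from image data to a shared 3D map. All class and method names are hypothetical and are not taken from the disclosure; the map-building and landmark-handling steps are left as placeholders that are detailed later in the description.

```python
# Structural sketch only: hypothetical names, not the disclosed implementation.
from dataclasses import dataclass
import numpy as np

@dataclass
class LocalMap:
    points_xyz: np.ndarray        # (N, 3) map points, ideally in global coordinates
    descriptors: np.ndarray       # (N, D) per-point feature descriptors
    covariance: np.ndarray | None = None   # uncertainty of the observations

class VehicleController:
    """Outline of the controller responsibilities summarized above."""

    def __init__(self, camera, map_server):
        self.camera = camera           # monocular camera interface (assumed)
        self.map_server = map_server   # static entity that fuses and serves maps (assumed)

    def step(self):
        frame = self.camera.read()                   # receive image data
        local_map = self.generate_3d_map(frame)      # build a 3D map of the area of interest
        if self.observed_landmark(frame):
            local_map = self.to_global_coordinates(local_map, frame)
        self.map_server.upload(local_map)            # provide the map for use by other vehicles

    # Placeholders for the map-generation and map-reuse blocks described below.
    def generate_3d_map(self, frame) -> LocalMap:
        raise NotImplementedError
    def observed_landmark(self, frame) -> bool:
        raise NotImplementedError
    def to_global_coordinates(self, local_map: LocalMap, frame) -> LocalMap:
        raise NotImplementedError
```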
BRIEF DESCRIPTION OF THE DRAWINGS

[0014] Figure 1 is a schematic view of an example roadway and sign including position and dimension information embedded in a machine-readable optical label.

[0015] Figure 2 is a schematic representation of an example method of determining vehicle position according to an example embodiment.

[0016] Figure 3 is a schematic top view of a vehicle with a rear-facing camera relative to a sign.

[0017] Figure 4 is a schematic view of a sign with five points and embedded coordinates.

[0018] Figure 5 is an image showing corresponding image points after using a fisheye camera model.

[0019] Figure 6 depicts a flowchart illustrating an operation of a vehicle positioning determining system according to an example embodiment.

[0020] Figure 7 is a schematic block diagram of a vehicle positioning system according to an example embodiment.

[0021] Figure 8 is a depiction of a GPS-denied environment according to an example embodiment.

DETAILED DESCRIPTION

[0022] Referring to Figure 1, a vehicle 10 is shown schematically along a roadway. The vehicle 10 includes a vehicle positioning system 15 that reads information from a machine-readable optic label disposed on a fixed object. The optic label includes information regarding the coordinate position of the fixed object and the dimensions of a visible symbol or shape on the fixed object.

[0023] The vehicle 10 includes a controller 25 that uses the communicated dimensions to determine a position of the vehicle relative to the fixed object 14. The position of the fixed object is communicated by the coordinates provided within the optic label 16. The position of the vehicle 10 relative to the fixed object is determined based on a difference between the communicated actual dimensions of the visible symbol and the dimensions of an image of the visible symbol captured by a camera disposed on the vehicle.

[0024] Accordingly, the example disclosed vehicle positioning system 15 enables a determination of a precise vehicle position without an external signal. In cases where GPS radio signals are not accessible (urban settings, forests, tunnels and inside parking structures), there are limited ways to precisely identify an object's position. The disclosed system 15 and method provide an alternative means for determining a position of an object.

[0025] In the disclosed example, the vehicle 10 includes at least one camera 12 that communicates information to a controller 25. It should be understood that a device separate from the camera 12 may be utilized to read the optic label. Information from the camera 12 may be limited to capturing the image 22 of the polygonal shape 34. The example controller 25 may be a stand-alone controller for the example system and/or contained in software provided in a vehicle controller. The camera 12 is shown as one camera, but may be multiple cameras 12 disposed at different locations on the vehicle 10. The camera 12 gathers images of objects along a roadway.

[0026] The example roadway includes a fixed structure, such as, for example, a road sign 14. The example road sign 14 includes a machine-readable optic label 16 that contains information regarding the location of the road sign 14. The optic label 16 further includes information regarding the actual dimensions of a visible symbol 34. In this disclosed example, the visible symbol is a box 34 surrounding the optic label 16. The information regarding the box 34 includes height 20 and width 18. In this example, the visible symbol is a box 34 with a common height and width 20, 18. However, other polygon shapes with different dimensions could also be utilized and are within the contemplation of this disclosure.

[0027] The camera 12 captures an image 22 of the box 34 and communicates that captured image 22 to the controller 25. The size of the captured image 22 will differ from the actual size of the box 34 due to the distance, angle and proximity of the camera 12 relative to the sign 14. The differences between the captured image 22 and the actual size of the box 34 are due to the geometric perspective of the camera 12 relative to the box 34. The controller 25 uses the known dimensions 20, 18 of the box 34, the corresponding dimensions 24, 26, 28 and 30 of the captured images, and the camera's focal point 32 to determine the distance and orientation relative to the sign 14. The distance and orientation are utilized to precisely position the vehicle 10 relative to the sign 14 and thereby obtain a precise set of coordinates. The distance and orientation between the sign and the camera's focal point 32 are determined utilizing projective geometric transformations based on the dimensions of the captured image 22 as compared to the actual dimensions communicated by the optic label 16.

[0028] The captured image 22 is a perspective view of the actual box 34. The geometry that relates the dimensions of the captured image 22 to the orientation of the vehicle 10 relative to the actual box 34 is determinable by known and understood projective geometric transform methods. Accordingly, the example system 15 determines the distance and orientation of the focal point 32 relative to the sign 14 given the perspective view represented by the captured image 22 of the known box 34 geometry.

[0029] In this example, the optic label 16 is a QR code or two-dimensional bar code. It should be appreciated that the optic label 16 may be any type of machine-readable label, such as a bar code. Moreover, although the example system 15 is disclosed by way of example as part of a motor vehicle 10, the example system 15 may be adapted to other applications including other vehicles and handheld devices.

[0030] Accordingly, the example disclosed system and method of positioning and localization uses computer-readable labels and projective geometry to determine the distance between the camera's focal point and the sign, which is then utilized to determine a position of the vehicle. The computer-readable label is encoded with a position coordinate (e.g., GPS coordinates) and the actual physical dimensions of an accompanying polygon (e.g., a bounding box) inside an encoded computer-readable label (e.g., a QR code or bar code) on a sign or fixed surface. The viewing object is able to read and interpret the position coordinate and polygon dimensions and perform a projective geometric transformation using the perspective dimensions it observes of the polygon in conjunction with the known polygon dimensions.

[0031] Referring to Figures 3 and 4, an example method of localization of a vehicle 10 from a sign 14 with embedded coordinates 16 is schematically shown. The sign 14 includes multiple points with known physical dimensions and coordinates. These points are shown by way of example as p1, p2, …, pn.
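To make the projective-geometry step concrete, the sketch below estimates the camera pose relative to a sign from the known corner positions of the bounding box and their detected pixel locations, using OpenCV's PnP solver. This is only an illustration of the general technique described above, not the disclosed implementation; the box size, camera intrinsics and detected pixel coordinates are placeholder values.

```python
# Illustrative pose-from-known-points sketch (not the patented implementation).
# Assumes the optic label decodes to the true width/height of the visible box;
# all numeric values below are placeholders.
import cv2
import numpy as np

# Known box dimensions decoded from the optic label (assumed 0.30 m x 0.50 m).
w, h = 0.30, 0.50
# 3D corner coordinates in the sign's own frame (sign plane at z = 0).
object_points = np.array([
    [-w / 2, -h / 2, 0.0],
    [ w / 2, -h / 2, 0.0],
    [ w / 2,  h / 2, 0.0],
    [-w / 2,  h / 2, 0.0],
], dtype=np.float64)

# Pixel coordinates of the same corners in the captured image
# (placeholder values, e.g. from a QR-code / rectangle detector).
image_points = np.array([
    [612.0, 388.0],
    [668.0, 391.0],
    [665.0, 487.0],
    [609.0, 483.0],
], dtype=np.float64)

# Calibrated camera intrinsics (placeholder values).
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0,   0.0,   1.0]])
dist = np.zeros(5)   # assume an undistorted (or already rectified) image

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)

# tvec expresses the sign in the camera frame; inverting gives the camera
# (and, after applying the camera-to-vehicle extrinsics, the vehicle)
# expressed in the sign frame.
cam_in_sign_frame = -R.T @ tvec
print("camera position relative to sign [m]:", cam_in_sign_frame.ravel())
```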
[0032] The vehicle 10 has some unknown position and orientation with respect to the world and the sign 14. This can be represented with a position vector p and a rotation matrix R. This combination, (p, R), involves 6 unknown variables (3 position components and 3 Euler angles).

[0033] The vehicle 10 has a camera 12 which images the points on the sign 14. The points in the image have only 2 components. Let these image points be p̃1, p̃2, …, p̃n. The indices indicate corresponding sign points (3 components) and image points (2 components).

[0034] The camera 12 has some intrinsic and extrinsic parameters. If the camera 12 is calibrated, then these are all known. These will be included in the map P.

[0035] A set of equations can be written as shown in the below examples:
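The equations themselves appear in the published document only as figures and are not reproduced in this text. A plausible form, assuming P denotes the calibrated camera map from camera-frame coordinates to pixel coordinates and R maps camera-frame vectors into the world frame, is

$$\tilde{p}_i = P\!\left( R^{\top} \left( p_i - p \right) \right), \qquad i = 1, 2, \ldots, n,$$

where the p_i are the sign points in world coordinates and the p̃_i are the corresponding image points. Each correspondence contributes two scalar equations, one per image coordinate, consistent with the count of 2n equations in 6 unknowns discussed in the next paragraph.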
[0037] This yields a total of 2n equations in 6 unknowns (p, R). At least three sign points are needed to determine the vehicle position and orientation. In this disclosed specific embodiment, a fisheye camera is mounted on the rear of a truck. As appreciated, although a fisheye camera is disclosed by way of example, other camera configurations could be utilized and are within the contemplation of this disclosure. Moreover, although the example camera is mounted at the rear of the truck, other locations on the vehicle may also be utilized within the contemplation and scope of this disclosure.

[0038] The example truck 10 is located at the origin of a coordinate system, and the vehicle longitudinal axis is aligned with the x-axis. It should be appreciated that such an alignment is provided by way of example and would not necessarily be the typical case. Note that the example coordinates include latitude, longitude and height, and may be converted to a local Cartesian coordinate system. In this disclosed example, the conversion to a Cartesian coordinate system is performed.
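As one way to picture the latitude/longitude/height to local Cartesian conversion mentioned above, the sketch below converts a point's geodetic coordinates into an east-north-up frame centered on the vehicle's reference position. The formulas are the standard WGS-84/ENU conversion; the coordinate values are placeholders and are not data from the disclosure.

```python
# Geodetic (lat, lon, height) -> local east-north-up (ENU) conversion sketch.
# Standard WGS-84 formulas; the example coordinates are placeholders.
import numpy as np

A = 6378137.0                 # WGS-84 semi-major axis [m]
F = 1.0 / 298.257223563       # WGS-84 flattening
E2 = F * (2.0 - F)            # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    n = A / np.sqrt(1.0 - E2 * np.sin(lat) ** 2)
    x = (n + h) * np.cos(lat) * np.cos(lon)
    y = (n + h) * np.cos(lat) * np.sin(lon)
    z = (n * (1.0 - E2) + h) * np.sin(lat)
    return np.array([x, y, z])

def geodetic_to_enu(lat_deg, lon_deg, h, ref_lat_deg, ref_lon_deg, ref_h):
    """Express a geodetic point in a local Cartesian frame at the reference point."""
    d = geodetic_to_ecef(lat_deg, lon_deg, h) - geodetic_to_ecef(ref_lat_deg, ref_lon_deg, ref_h)
    lat, lon = np.radians(ref_lat_deg), np.radians(ref_lon_deg)
    rot = np.array([
        [-np.sin(lon),                np.cos(lon),               0.0],
        [-np.sin(lat) * np.cos(lon), -np.sin(lat) * np.sin(lon), np.cos(lat)],
        [ np.cos(lat) * np.cos(lon),  np.cos(lat) * np.sin(lon), np.sin(lat)],
    ])
    return rot @ d   # east, north, up in metres

# Placeholder example: a point roughly 10 m south of the vehicle's reference position.
print(geodetic_to_enu(48.99991, 9.0, 310.0, 49.0, 9.0, 310.0))
```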
[0039] In this disclosed example, the sign is 10 m behind the truck (world x = -10). In this example, the sign includes 5 points: the center of a rectangle and its 4 corners. The rectangle is 30 cm wide and 50 cm tall.

[0040] The setup is illustrated in Figure 3, where a vehicle “sees” the sign 14 with its rear-facing camera 12 (e.g. backup camera). Figure 4 shows the example sign 14 with the 5 points, which have their world coordinates embedded in the optic label 16. Figure 5 shows the resulting image points when using the setup described, along with a specific fisheye camera model with specific extrinsic and intrinsic parameters.

[0041] Table 1 shows some example data generated by a model of a vehicle with an attached rear camera. In this case, 5 points are included, although more or fewer could be used.
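For intuition, the sketch below reconstructs the geometry of this example (the rectangle center and four corners on a sign 10 m behind the vehicle) and projects the five points into an image. The published example uses a particular fisheye model whose parameters are not given in this text, so a generic pinhole model with placeholder intrinsics and an assumed camera mounting is used instead; the resulting pixel values will therefore differ from Figure 5 and Table 1.

```python
# Reproducing the example geometry with placeholder camera parameters
# (the published example uses a specific fisheye model instead).
import cv2
import numpy as np

# Sign plane 10 m behind the truck (world x = -10), rectangle 0.30 m x 0.50 m.
cx, cy, cz = -10.0, 0.0, 1.0           # assumed sign-center position [m]
w, h = 0.30, 0.50
sign_points = np.array([
    [cx, cy,         cz],              # center
    [cx, cy - w / 2, cz + h / 2],      # corners
    [cx, cy + w / 2, cz + h / 2],
    [cx, cy + w / 2, cz - h / 2],
    [cx, cy - w / 2, cz - h / 2],
], dtype=np.float64)

# Rear-facing camera assumed 1 m above the origin, looking along world -x.
R_wc = np.array([[ 0.0, 1.0,  0.0],    # world -> camera rotation
                 [ 0.0, 0.0, -1.0],
                 [-1.0, 0.0,  0.0]])
t_wc = -R_wc @ np.array([0.0, 0.0, 1.0])

# Placeholder pinhole intrinsics.
K = np.array([[500.0, 0.0, 640.0],
              [0.0, 500.0, 400.0],
              [0.0,   0.0,   1.0]])

rvec, _ = cv2.Rodrigues(R_wc)
image_points, _ = cv2.projectPoints(sign_points, rvec, t_wc.reshape(3, 1), K, np.zeros(5))
print(np.round(image_points.reshape(-1, 2), 1))
```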
[0042] Table 1: Example coordinates of 5 points and corresponding image coordinates.

[0043] Another disclosed example method of determining a vehicle position with the example system includes a one-shot approach. A one-shot approach enables a determination of the vehicle position/orientation from a single measurement of a sign with multiple known points. As shown in Figure 4, there are multiple points on the sign with known world coordinates. For example, the sign includes points p1, p2, …, pn.

[0044] The vehicle 10 has some unknown position and orientation with respect to the world and the sign. The vehicle position is represented with a position vector p and a rotation matrix R. The combination of the position vector and the rotation matrix, (p, R), provides 6 unknown variables (3 position components and 3 Euler angles).

[0045] The example vehicle has a camera which images the points on the sign. The points in the image have only 2 components. For example, the points are:

[0046] p̃1, p̃2, …, p̃n.
[0047] The indices indicate corresponding sign points (3 components) and image points (2 components). The camera 12 has some intrinsic and extrinsic parameters. The example camera 12 is calibrated and therefore the intrinsic and extrinsic parameters are all known. The intrinsic and extrinsic parameters are included in the map P. From the above known parameters, the following set of equations can be written:
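As with paragraph [0035], the equations referenced here appear only as figures in the published document. Under the same assumed projection model as before, they again take the form $\tilde{p}_i = P\!\left(R^{\top}(p_i - p)\right)$ for $i = 1, \ldots, n$, each correspondence contributing two scalar equations.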
[0048] The example method provides a total of 2n equations and 6 unknowns (p, R). Accordingly, at least 3 sign points are needed to determine the vehicle position and orientation. As appreciated, although 3 sign points are utilized in this disclosed example, more points may be utilized within the contemplation and scope of this disclosure.

[0049] Another disclosed example approach is to use one or more points of known locations and track those points over time as the vehicle moves. When points are tracked, it may be possible to utilize fewer than 3 points due to the use of a time history.

[0050] Vehicle relative motion is calculated based on measured wheel rotations, steering wheel angle, vehicle speed, vehicle yaw rate, and possibly other vehicle data (e.g. IMU). The vehicle information is combined with a vehicle model. By combining the motion of the point(s) in the image with the relative motion of the vehicle over time, the vehicle position and orientation can be determined. Once convergence to the correct position and orientation has occurred, the correct position and orientation can be maintained if the known points are still being tracked.

[0051] Another approach to solve this problem would be a Kalman filter or other nonlinear observer. The unknown states would be the vehicle position and orientation.

[0052] As mentioned earlier, a vehicle model could be used to predict future states from current states. The measurement would consist of the image coordinate(s) of the known point position(s) on the sign. Other methods also exist to solve this type of problem, such as nonlinear least squares or optimization methods.

[0053] The disclosed system enables a camera and computer vision system to derive a precise position by viewing a sign and determining an offset from the sign.
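The following is a deliberately simplified planar sketch of the filtering idea in paragraphs [0049] to [0052]: the state is reduced to (x, y, yaw), the prediction step uses wheel-speed and yaw-rate odometry as the vehicle model, and the measurement is the image column of a single known sign point seen by a rear-facing pinhole camera. The described method would estimate the full 6-DOF pose and use the complete (e.g. fisheye) camera model; all numeric values and the camera convention here are assumptions.

```python
# Simplified planar EKF sketch for tracking one known sign point over time
# (illustration only; not the full 6-DOF estimator described in the text).
import numpy as np

FX, CX = 800.0, 640.0                   # assumed focal length / principal point [px]
LANDMARK = np.array([-10.0, 0.0])       # known sign-point position in the world [m]

def project_landmark(state):
    """Image column of the landmark as seen by a rear-facing pinhole camera."""
    px, py, yaw = state
    dx, dy = LANDMARK[0] - px, LANDMARK[1] - py
    forward = np.cos(yaw) * dx + np.sin(yaw) * dy     # landmark in the vehicle frame
    lateral = -np.sin(yaw) * dx + np.cos(yaw) * dy
    xc, yc = -forward, -lateral                        # rear camera looks backwards
    return CX + FX * yc / xc

def predict(state, P, v, yaw_rate, dt, Q):
    """Propagate (x, y, yaw) with a unicycle model driven by wheel/yaw-rate odometry."""
    px, py, yaw = state
    state_pred = np.array([px + v * dt * np.cos(yaw),
                           py + v * dt * np.sin(yaw),
                           yaw + yaw_rate * dt])
    F = np.array([[1.0, 0.0, -v * dt * np.sin(yaw)],
                  [0.0, 1.0,  v * dt * np.cos(yaw)],
                  [0.0, 0.0, 1.0]])
    return state_pred, F @ P @ F.T + Q

def update(state, P, u_meas, r_meas):
    """Fuse one measured image column of the tracked sign point."""
    u_pred = project_landmark(state)
    H = np.zeros((1, 3))                # numerical Jacobian of the measurement
    for i in range(3):
        step = np.zeros(3)
        step[i] = 1e-6
        H[0, i] = (project_landmark(state + step) - u_pred) / 1e-6
    S = H @ P @ H.T + r_meas
    K = P @ H.T @ np.linalg.inv(S)
    state_new = state + (K * (u_meas - u_pred)).ravel()
    return state_new, (np.eye(3) - K @ H) @ P
```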
[0054] In the above example embodiments, the vehicle 10 uses its computer vision-based algorithm to detect, recognize and interpret traffic signs, such as the road sign 14 having GPS coordinates registered in a database. The vehicle 10 uses the road/traffic sign 14 as a fixed reference and corrects its localization by computing its distance to the traffic sign 14 using its known geometry (e.g., a projective geometry transform). The vehicle 10 determines its pose (position and orientation) using the vehicle localization. However, correcting the localization makes sense only if the map (predefined or constructed) of the environment is also reliable.

[0055] According to another example embodiment, the vehicle positioning system 15 is provided in the vehicle 10 and in other neighboring vehicles, which may form a vehicle group, to localize the vehicles in a map for GPS-denied environments. The vehicle positioning system 15 of each vehicle uses landmarks (e.g., traffic signals or QR codes), at least one monocular camera, and communication technologies that allow the vehicles to communicate, either with each other as part of a vehicle group or with a master entity 74 that is not located in or on a vehicle of the vehicle group.

[0056] In an example embodiment, there are two aspects or function blocks and/or algorithms (hereinafter simply “function blocks”) to the vehicle positioning system 15. Referring to Fig. 6, in a first function block 70, a map is built or generated by vehicles 10 in the vehicle group. Each vehicle 10 in the group contributes to the map building by providing information generated at least partly from image data captured by the vehicle's (monocular) camera(s) 12 of local surroundings and landmarks. In a second function block 72 of the system 15, a vehicle 10 may choose to reuse the available, built map and localize itself using the information of the descriptors that each point in the map possesses. This localization, given an existing map, may be done efficiently via a Bag of Words algorithm to match features in the vehicle's local map with the built map.

[0057] The vehicle positioning system 15 will be described in connection with a group of neighboring vehicles 10 which form a vehicle group. The group may be a group of two or more vehicles 10. However, it is understood that the group of vehicles 10 may be a single vehicle 10.

[0058] Specifically, the vehicle positioning system 15 of each neighboring vehicle 10 in the vehicle group includes a map generation block 70 which builds or generates a map. The map generation block 70 of each vehicle 10 identifies feature points, i.e., pixels extracted from images captured by the camera 12 of the corresponding vehicle 10 which are associated with, for example, a corner of an object appearing in the images. Using the identified feature points, the map generation block 70 of each vehicle 10 constructs a three-dimensional (3D) map of the area of interest (e.g., a tunnel) via observations of local surroundings and landmarks. The observations are done with a monocular camera, or more cameras if available, mounted to the vehicle(s) 10. Generating the 3D map involves generating a point cloud by executing one of a visual odometry (VO) algorithm, a simultaneous localization and mapping (SLAM) algorithm, or a structure from motion (SfM) algorithm, from which the 3D map is created. In one implementation, the map generation block 70 generates a point cloud using the SLAM or SfM algorithm. The 3D map is a local 3D map which is correct only up to scale because the scale is not observable using only a monocular camera(s) 12. However, the scale is recoverable when the controller 25 observes and identifies a landmark. In this case, the point cloud of a landmark and/or fixed object, such as the road sign 14, is available to the vehicle 10 and the point cloud contains the scale. As a result, when the vehicle positioning system 15 of a vehicle 10 identifies a landmark, such as with a neural network located in the vehicle, the system incorporates it in its local map by using a bundle adjustment approach to correct its local map scale. In this way, when the vehicle positioning system 15 of a vehicle 10 observes a landmark, the local coordinate system referenced in the local map is transformed to reference a global coordinate system.

[0059] In an example embodiment, a group of vehicles 10 builds the map. A fixed and/or static master or master entity 74 collects the map information from each vehicle 10 in the group, each of which creates a corrected local map as discussed above, and fuses together all of the collected map information. A bundle adjustment approach may be used to fuse together the map information to create a global map for subsequent reuse by other vehicles for localizing same. Each vehicle 10 which contributes to building the global map reports the covariance of its observations, i.e., the 3D point cloud, to the master entity 74. The covariance is used by the master entity 74 in fusing all of the collected maps. The global map may be deemed to be ready for use upon the map meeting a predefined criterion, such as having a predetermined low covariance.
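One simple way to picture covariance-weighted fusion by the master entity is an inverse-covariance (information-form) average of corresponding map points, sketched below. The disclosure itself names bundle adjustment for fusing the maps; this snippet only illustrates how the reported covariances can weight each vehicle's contribution, and the association of points across maps is assumed to be given.

```python
# Covariance-weighted fusion of corresponding map points from several vehicles.
# Illustration of how reported covariances can weight each contribution; the
# disclosure describes fusing the collected maps with bundle adjustment.
import numpy as np

def fuse_point(estimates):
    """estimates: list of (mean, covariance) pairs for the same 3D map point,
    one pair per contributing vehicle. Returns the fused mean and covariance."""
    info = np.zeros((3, 3))          # accumulated information (inverse covariance)
    info_mean = np.zeros(3)
    for mean, cov in estimates:
        w = np.linalg.inv(cov)
        info += w
        info_mean += w @ mean
    fused_cov = np.linalg.inv(info)
    return fused_cov @ info_mean, fused_cov

# Placeholder example: two vehicles observed the same landmark corner.
vehicle_a = (np.array([12.1, -3.0, 1.5]), np.diag([0.04, 0.04, 0.09]))
vehicle_b = (np.array([11.9, -2.9, 1.6]), np.diag([0.01, 0.01, 0.04]))
mean, cov = fuse_point([vehicle_a, vehicle_b])
print(mean, np.diag(cov))

# The fused global map could be declared ready once, for example, the largest
# fused per-point variance falls below a predefined threshold (the "low
# covariance" criterion mentioned above).
```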
[0060] The global map may be updated periodically by the map generation block 70 of a vehicle. In updating the global map, the master entity 74, which maintains the global map, communicates the global map to a group of vehicles, which may be a different group than the group which collaborated with or was otherwise involved in the prior creation of the global map, and requests that each vehicle report to the master entity 74 the vehicle's observations. The observations may include or be associated with, for example, a comparison between the vehicle's local map to the global map. [0061] In one example embodiment, the master entity 74 is a server which is located remotely from each vehicle 15 and is able to wirelessly communicate with each vehicle 10 in the vehicle group. In the instance in which the area of interest pertains to a tunnel, the static server may be located in the tunnel. FIG. 8 illustrates vehicles 10 traveling along a roadway R in a GPS-denied environment, such as in a tunnel, with the master entity/server 74 positioned therein. [0062] In an alternative example embodiment, instead of each vehicle 10 sending the information to a master entity/server, the vehicles 10 interact only with the neighboring vehicles 10 in a vehicle group. By sharing corrected local/global maps perceived by each vehicle 10, the vehicle group is able to construct, via a consensus algorithm, a global map which is the result of the fusion of all individual maps. This alternative example embodiments works when a master entity/server is not available to fuse together collected map information from the vehicles 10. This approach is more robust to failures, vulnerabilities and cyber-attacks because a group of vehicles participate in the fusion and/or global map creation instead of a single entity. When the fused map is created, the map may be transmitted to and stored in a server or other device that is remote from the vehicles 10 and at least in proximity with the area of interest, for access by other vehicles within a communication range in the area of interest. Map Reuse Block [0063] The second function block of the vehicle positioning system 15 of each vehicle 10 is a map reuse block 72 which, as discussed above, may be used by a vehicle 10 for localization. The shared, previously generated map is a 3D point cloud and/or 3D point cloud map, where each point in the map is referenced to a global coordinate system and each point has a feature descriptor (e.g., SIFT, SURF, ORB, HOG, etc.). For each feature point,
the map reuse block 72 computes a feature descriptor, which is a unique identifier constructed from the feature neighborhood. The map reuse block 72 of each vehicle 10 has the ability to decide whether to use the shared map for localizing the vehicle 10 using the information of the descriptors that each point possesses. [0064] A description of the operation of the vehicle positioning system 15 is described in Fig. 6 according to an example embodiment. The camera 12 of a vehicle 10 captures images of the vehicle's environment at 602. Some of the captured images includes representations of a landmark (e.g., a road sign 14 as discussed above). The controller 25 may detect in the captured images a representation of the landmark, and decode map information associated with the landmark including the scale at 604. At 606, the controller 25 generates a local 3D map and, based upon the point cloud map provided by the landmark, the local 3D map is corrected and/or transformed to include global coordinates. [0065] In the event a server or other computational device is available as the master entity 74 that is located remote from the vehicle group, vehicle 10 sends its corrected local 3D map to the server at 608, and the other vehicles in the vehicle group do the same. The vehicle 10 may also report to the server the covariance of its observations. The server fuses the collected local maps at 610 and fuses the collected maps to create a global map. The server may determine that the global map is available for use following the global map meeting a predetermined criteria, such as having a relatively low covariance. [0066] Alternatively, in the event no server is available to act as the master entity 74, a vehicle 10 from the vehicle group interacts with the other vehicles in the vehicle group at 612. Each vehicle 10 in the group shares its local/global maps perceived by the vehicle. The global map, once created, is sent at 614 to a server or the like (which is not configured to fuse together collected local maps to create a global map) and may be accessed by other vehicles entering the area of interest. For example, a vehicle may choose at 616 whether or not to reuse the global map and localize itself. The localization uses the information of the descriptors that each point in the map possesses. This localization, using an existing global map, may be performed efficiently via, for example, a Bag of Words algorithm. [0067] In another example embodiment, a single vehicle 10 creates a global map having a global coordinate system through identifying a landmark and using global
[0067] In another example embodiment, a single vehicle 10 creates a global map having a global coordinate system through identifying a landmark and using global coordinates as described above. The global map may then be sent to a master entity 74 and/or server and maintained for use by other vehicles to facilitate the localization thereof.

[0068] The map generation block 70 and the map reuse block 72 may be implemented in software stored in non-transitory memory having instructions which, when executed by the controller 25, cause the controller to perform the method described above and illustrated in Fig. 6. The software and/or algorithms embodied in the software include as inputs: image sequences captured by camera(s) 12; and optionally vehicle information such as vehicle wheel ticks, information provided by an inertial measurement unit (IMU), and steering wheel angles. The software and/or algorithms generate as outputs vehicle localization in a global coordinate system and a sparse 3D map with point descriptors and scale. The software and/or algorithms operate based upon the following assumptions: 1) known intrinsic/extrinsic parameters of the camera(s) 12 located on the vehicle 10; 2) landmarks, e.g., traffic signals or QR codes, the shape and size of each landmark being known by the vehicle that observes it; and 3) each landmark includes information in a database that is accessible to a vehicle 10 which observes the landmark, the information including the GPS position of the landmark, which does not change over time, and a point cloud of the landmark in which every point has a visual descriptor (SIFT, SURF, ORB, HOG, etc.), with the point cloud being constructed using world or global scale/coordinates.

[0069] One or more vehicles 10 perform the global map build until the global map is available. The master entity 74 may report when the global map is available for access. Once available, a vehicle 10 may opt to reuse the map for its own localization.

[0070] It is noted that the more the scene is seen by vehicles 10, the better the global map becomes. If the environment changes, such as the placement of a road sign 14 or other landmark, the global map is updated accordingly.

[0071] The vehicle positioning system localizes a group of vehicles in a GPS-denied environment by using landmarks (e.g., traffic signals or QR codes) and at least a monocular camera.

[0072] An advantage of the present vehicle positioning system 15 is that the global map is built and reused without the use of more expensive sensors, such as LIDAR sensors.
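As a further non-limiting illustration, the correction at 606 of Fig. 6, together with assumptions 1) and 3) above, amounts to aligning landmark points reconstructed in the vehicle's scale-ambiguous monocular frame with the landmark's stored, globally referenced point cloud. One conventional way to compute such an alignment is an Umeyama-style similarity fit, sketched below under the assumption that the point correspondences are already known; the function name and interface are illustrative only, not the patented procedure.

```python
import numpy as np

def similarity_align(local_pts, global_pts):
    """Illustrative Umeyama-style similarity fit: global ~= s * R @ local + t.

    local_pts  : (N, 3) landmark points reconstructed in the vehicle's local,
                 scale-ambiguous monocular frame.
    global_pts : (N, 3) the same landmark points from the landmark's stored
                 point cloud, in world scale and global coordinates.
    Returns scale s, rotation R and translation t.
    """
    mu_l, mu_g = local_pts.mean(axis=0), global_pts.mean(axis=0)
    L, G = local_pts - mu_l, global_pts - mu_g
    # SVD of the cross-covariance between the two centered point sets.
    U, S, Vt = np.linalg.svd(G.T @ L / len(local_pts))
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:
        D[2, 2] = -1.0  # guard against a reflection
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) * len(local_pts) / (L ** 2).sum()
    t = mu_g - s * R @ mu_l
    return s, R, t
```

The recovered scale, rotation and translation would then be applied to every point of the vehicle's local 3D map so that the map is expressed in global coordinates, consistent with step 606.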
[0073] Reusing a map having a global reference allows a vehicle 10 with low computational power to benefit from what other vehicles (with greater computational power) create. Vehicles 10 having greater computational power save some computational cost by reusing an existing global map, which allows their hardware to be used for activities such as performing vehicle safety functions.

[0074] The present map build and reuse functions as described above are less costly than other approaches which use, for example, RFID tags, which would require other sensors in the vehicle to calculate and maintain relative localization from the last observed RFID tag.

[0075] Although an example embodiment has been disclosed, a worker of ordinary skill in this art would recognize that certain modifications would come within the scope of this disclosure. For that reason, the following claims should be studied to determine the scope and content of this disclosure.
Claims
CLAIMS

What is claimed is:

1. A vehicle positioning system, comprising:
a controller of at least one vehicle configured to
receive image data from at least one mono camera disposed on the at least one vehicle,
generate a 3D map of an area of interest based on the image data, the 3D map including global coordinates, and
provide the 3D map for use by other vehicles for localizing the other vehicles.
2. The vehicle positioning system of claim 1, wherein the controller is further configured to observe a landmark in the image and determine the global coordinates based on information associated with the landmark.
3. The vehicle positioning system of claim 1, wherein the at least one vehicle comprises a plurality of vehicles, the controller of each vehicle of the plurality of vehicles generating a 3D map and a covariance associated therewith, wherein the 3D map provided for use by the other vehicles is based on a fusion of the 3D maps generated by the vehicles and the associated covariances.
4. The vehicle positioning system of claim 1, wherein the at least one vehicle comprises a plurality of vehicles, the controller of each vehicle of the plurality of vehicles generating a 3D map, wherein the 3D map provided for use by the other vehicles is based on a fusion of the 3D maps generated by the vehicles and based on a consensus algorithm of the 3D maps generated by the controller of the plurality of vehicles.
5. The vehicle positioning system of claim 1, wherein the 3D map is a 3D point cloud, wherein each point in the 3D map is referenced to a global coordinate system and has a feature descriptor.
6. The vehicle positioning system of claim 1, wherein the controller is configured to provide the 3D map for use by the other vehicles by sending the 3D map to a static entity, the other vehicles accessing the 3D map via the static entity.
7. The vehicle positioning system of claim 1, wherein the controller is further configured to receive a request to report observations from a master entity and in response to receipt of the request, generate a local map, compare the local map to the 3D map, and send the comparison to the master entity.
8. A vehicle localizing method, comprising:
receiving image data from at least one mono camera disposed on the at least one vehicle;
generating a 3D map of an area of interest based on the image data, the 3D map including global coordinates; and
providing the 3D map for use by other vehicles for localizing the other vehicles.
9. The method of claim 8, further comprising observing a landmark in the image and determining the global coordinates based on information associated with the landmark.
10. The method of claim 8, wherein the at least one vehicle comprises a plurality of vehicles, the controller of each vehicle of the plurality of vehicles generating a 3D map and a covariance associated therewith, wherein the 3D map provided for use by the other vehicles is based on a fusion of the 3D maps generated by the vehicles and the associated covariances.
11. The method of claim 8, wherein the at least one vehicle comprises a plurality of vehicles, the controller of each vehicle of the plurality of vehicles generating a 3D map, wherein the 3D map provided for use by the other vehicles is based on a fusion of the 3D maps generated by the vehicles and based on a consensus algorithm of the 3D maps generated by the controller of the plurality of vehicles.
12. The method of claim 8, wherein the 3D map is a 3D point cloud, wherein each point in the 3D map is referenced to a global coordinate system and has a feature descriptor.
13. The method of claim 8, wherein providing the 3D map for use by the other vehicles comprises sending the 3D map to a static entity, the other vehicles accessing the 3D map via the static entity.
14. The method of claim 8, further comprising receiving a request to report observations from a master entity, and in response to receipt of the request, generating a local map, comparing the local map to the 3D map, and sending the comparison to the master entity.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202163201292P | 2021-04-22 | 2021-04-22 | |
US63/201,292 | 2021-04-22 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022226529A1 (en) | 2022-10-27 |
Family
ID=81597831
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2022/071860 WO2022226529A1 (en) | 2021-04-22 | 2022-04-22 | Distributed multi-vehicle localization for gps-denied environments |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2022226529A1 (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200394410A1 (en) * | 2016-08-29 | 2020-12-17 | Trifo, Inc. | Visual-Inertial Positional Awareness for Autonomous and Non-Autonomous Tracking |
US20200098135A1 (en) * | 2016-12-09 | 2020-03-26 | Tomtom Global Content B.V. | Method and System for Video-Based Positioning and Mapping |
WO2018204656A1 (en) * | 2017-05-03 | 2018-11-08 | Mobileye Vision Technologies Ltd. | Detection and classification systems and methods for autonomous vehicle navigation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22722671; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 22722671; Country of ref document: EP; Kind code of ref document: A1 |