CN100417231C - Three-dimensional vision semi-matter simulating system and method - Google Patents
- Publication number
- CN100417231C · CNB2006100836377A · CN200610083637A
- Authority
- CN
- China
- Prior art keywords
- virtual
- video camera
- parameter
- dimensional
- camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Image Processing (AREA)
Abstract
The system comprises a projector, a video camera, a camera pan-tilt, a projection screen, and a computer. During simulation, the virtual-view software on the computer first generates a virtual 3D scene and renders stereoscopic 3D images of it from two or more viewpoints at the same instant. The projector projects each 3D image onto the screen; the real video camera then captures each projected viewpoint image from the screen, yielding the output images of the simulation system at that measurement instant. The parameters of the virtual camera, of the real camera's imaging model, and of the projector's imaging model are calculated or calibrated; finally, based on the basic principle of stereo vision, the 3D spatial coordinates of the virtual 3D scene points in the images are recovered.
Description
Technical field
The present invention relates to a three-dimensional vision hardware-in-the-loop (semi-physical) simulation system and method.
Background technology
Stereo vision determines the three-dimensional structure of a scene from two or more images taken from different viewpoints; its theoretical foundation is that a single three-dimensional point in the real world projects to a unique pair of image positions. Unlike an ordinary camera system, which records a three-dimensional scene in two-dimensional form and thus loses a large amount of information, stereo vision can obtain the scene's three-dimensional data, and it is finding ever wider application in robot vision, aerial mapping, military applications, medical diagnosis, and industrial inspection; researchers at home and abroad are studying it in increasing depth. At present, stereo vision simulation is mainly pure digital simulation. Digital simulation can provide a research and development platform for stereo vision and greatly reduce cost, but because complex systems are difficult to model, a pure digital simulation system has to simplify them; it therefore suffers from an over-idealized simulation environment that differs from the real shooting environment, and in particular, when error factors such as camera distortion and noise interference are present, the images obtained by digital simulation fall short in realism. Hardware-in-the-loop (semi-physical) simulation inserts real objects into the simulation system wherever conditions permit, replacing the mathematical models of the corresponding parts; it thus comes closer to real conditions and yields more reliable information. Hardware-in-the-loop simulation can better overcome the shortcomings of pure digital simulation and shows its advantages in the initial stage of stereo vision research. However, hardware-in-the-loop simulation must combine software and hardware into one system, and it is very difficult to establish the system model between software and hardware, that is, to determine the relation between the digital scene and the actual camera image; this is the bottleneck of stereo vision hardware-in-the-loop simulation systems and methods. The existing literature does not mention a semi-physical simulation system that solves the difficulty of simulating the complex situations encountered in stereo vision research. It is therefore necessary and urgent to solve this bottleneck of the stereo vision semi-physical simulation environment and to design an effective, practical semi-physical simulation system.
Summary of the invention
The object of the present invention is to overcome the defects of the prior art by providing a three-dimensional vision hardware-in-the-loop simulation system and method. Based on the principle of stereo vision and combined with virtual reality technology, it establishes a hardware-in-the-loop simulation sensor structure combining software and hardware through a chain of cascaded projective transformations, and has the advantages of high accuracy, simple structure, and low cost.
The technical solution adopted by the present invention is as follows. Using virtual-view technology, stereoscopic virtual images (two or more) of a generated virtual three-dimensional scene are obtained from two or more different viewpoints at the same instant. Each virtual stereo image is projected by a projector onto a screen, which serves as the medium; the real camera captures each projected viewpoint image from the screen, giving the final output images (two or more) of the simulation system at that measurement instant. Using the parameters of the simulation system's camera images and of the system itself, the three-dimensional spatial coordinates of the virtual scene points in the images can be computed according to the basic principle of stereo vision, completing the three-dimensional measurement of the virtual scene and thereby enabling the verification and evaluation of stereo vision algorithms. Here, the virtual-view technology constructs a virtual environment with software such as OpenGL and generates scene images corresponding to what an optical instrument (a camera) would photograph. The basic principle of stereo vision is to observe the same scene from two or more viewpoints, obtain the perceived images under the different viewpoints, and, based on the principle of parallax, recover the three-dimensional geometric information of the scene.
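As a concrete illustration of the parallax principle just stated, the simplest rectified two-camera case gives depth directly from disparity. A toy sketch (all values invented, not from the patent) of the relation Z = f·B/d:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point seen by two parallel (rectified) cameras.

    A world point at depth Z projects into the two images with a
    horizontal offset (disparity) d = f * B / Z, so Z = f * B / d.
    All values here are illustrative, not from the patent.
    """
    if disparity_px <= 0:
        raise ValueError("point must have positive disparity")
    return focal_px * baseline_m / disparity_px

# Example: 1000 px focal length, 0.5 m baseline, 25 px disparity.
z = depth_from_disparity(focal_px=1000.0, baseline_m=0.5, disparity_px=25.0)
print(z)  # 20.0
```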
The present invention realizes a three-dimensional vision hardware-in-the-loop simulation platform combining software and hardware. Its main advantages are: (1) it effectively combines virtual reality technology with a camera sensor, enjoying at once the simple implementation and easy operation of a digital simulation system and the closeness to real experimental conditions of a hardware simulation system, filling the current lack of an effective, practical stereo vision hardware-in-the-loop simulation environment and providing an effective simulation platform for stereo vision research; (2) the system structure is simple and the cost is low, the main components being only an ordinary projector, a camera, and computers, so the system is easy to realize and widely applicable; (3) the system has high accuracy: in stereo vision 3D reconstruction simulation experiments performed with the system, its accuracy is about 1/1000.
Description of drawings
The three-dimensional vision hardware-in-the-loop simulation system and method of the present invention are described in further detail below with reference to the accompanying drawings and an embodiment.
Fig. 1 is a diagram of the stereo vision measurement principle of the present invention;
Fig. 2 is a schematic diagram of the system configuration of the present invention;
Fig. 3 is a schematic diagram of the system model of the present invention;
Fig. 4 is a flow chart of the method of the present invention;
Fig. 5 is a schematic diagram of the relation between the perspective projection field of view and the scale factor in the present invention;
Fig. 6 is a schematic diagram of the initial state of the virtual camera of the present invention.
Embodiment
As shown in Fig. 1, the image point P1 of a spatial point P lies on the line through P and the optical center of camera C1, and likewise the corresponding image point P2 on image plane I2 of camera C2 lies on the line through P and the optical center of C2; therefore the three-dimensional position of the spatial point P can be uniquely determined by the two cameras.
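The two-ray intersection described above is what linear triangulation computes. A hedged numpy sketch of linear (DLT) triangulation from two known 3×4 projection matrices (the matrices and point below are invented for illustration, not the patent's calibration):

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one point from two views.

    Each image point (u, v) with 3x4 projection matrix P gives two linear
    constraints u*(P[2] @ X) = P[0] @ X and v*(P[2] @ X) = P[1] @ X on the
    homogeneous world point X; stacking both views and taking the singular
    vector with smallest singular value solves for X.
    """
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras: identity pose, and a 1-unit baseline along x.
K = np.array([[800.0, 0, 384], [0, 800.0, 288], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])
Pw = np.array([0.2, -0.1, 5.0, 1.0])        # ground-truth world point
uv = lambda P: (P @ Pw)[:2] / (P @ Pw)[2]   # project it into each view
print(triangulate(P1, P2, uv(P1), uv(P2)))  # recovers [0.2, -0.1, 5.0]
```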
The three-dimensional vision hardware-in-the-loop simulation system shown in Fig. 2 comprises a projector 1, a video camera 2, a camera pan-tilt 3, a projection screen 4, and computers 5 and 6; each part is fixed to an indoor wall or a central support. The projector 1 is a VPL-CX71 portable projector produced by Sony; it is connected to computer 5 and projects onto the projection screen 4 the virtual stereo vision images generated by the virtual-view software on computer 5. While the system is in use, the parameters of projector 1 must remain fixed. The video camera 2 is an ordinary CCD camera with a 12 mm lens and a resolution of 768 × 576. It is mounted on the camera pan-tilt 3; the acquired image data are sent to computer 6 through an image capture card, and computer 6 controls the pan-tilt 3 to select the field of view of camera 2. After the ideal position of camera 2 is determined, calibration of the system parameters begins, and the pan-tilt 3 must not rotate while the system is in use. The lens and field of view of camera 2 should be chosen so that the camera's view contains the projected image of projector 1 as fully as possible without extending beyond it. To achieve high accuracy, the projector 1, camera 2, pan-tilt 3, and projection screen 4 must all be fixed to the indoor wall or to supports, and their relative positions must not change.
As shown in Fig. 3, the three-dimensional vision hardware-in-the-loop simulation system of the present invention establishes three projective-transformation models: the virtual camera model 7, the projector imaging model 8, and the video camera imaging model 9, connected by input-output relations. The virtual camera model 7 is realized by the virtual-scene software on the computer; virtual camera images under different conditions are obtained by setting parameters manually. This camera model is linear, and the virtual camera parameters can be computed from the parameters set in the virtual-view software. The projector imaging model 8 and the camera imaging model 9, by contrast, belong to the real hardware environment; the transformation between them is a plane-to-plane projective transformation, the camera model is nonlinear, and its parameters must be calibrated.
As shown in Fig. 4, three-dimensional vision hardware-in-the-loop simulation according to the present invention mainly comprises the following steps:
In the first step, the virtual-view software on the computer generates the virtual three-dimensional scene image and obtains stereoscopic virtual images from two or more different viewpoints at the same instant. In this step, the following parameters must be entered in the virtual-view software: the vertical field-of-view angle θ of the perspective projection; the aspect ratio Aspect of the field of view; the distance NearPlane from the viewpoint to the near clipping plane; the distance FarPlane from the viewpoint to the far clipping plane; the spatial position of the viewpoint, i.e. its coordinates XPos, YPos, ZPos in the world coordinate system; and the viewing-direction yaw angle θ_Yaw, pitch angle θ_Pitch, and roll angle θ_Roll.
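A parameter set like this (vertical FOV, aspect, near, far) is exactly what OpenGL-style virtual-view software feeds into its perspective projection (`gluPerspective` takes the same four values). A hedged sketch of the standard matrix built from them, assuming the usual OpenGL clip-space convention, which the patent does not specify:

```python
import math
import numpy as np

def perspective(theta_deg, aspect, near, far):
    """OpenGL-style perspective matrix from vertical FOV, aspect, near, far.

    f = 1/tan(theta/2) scales y; x is additionally divided by the aspect
    ratio; the z row maps depths in [near, far] to clip-space [-1, 1].
    """
    f = 1.0 / math.tan(math.radians(theta_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0,  0.0,                            0.0],
        [0.0,        f,    0.0,                            0.0],
        [0.0,        0.0,  (far + near) / (near - far),
                           2.0 * far * near / (near - far)],
        [0.0,        0.0, -1.0,                            0.0],
    ])

M = perspective(theta_deg=90.0, aspect=4 / 3, near=0.1, far=1000.0)
# A point on the near plane maps to clip-space z/w = -1:
p = M @ np.array([0.0, 0.0, -0.1, 1.0])
print(p[2] / p[3])  # ≈ -1.0
```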
In the second step, the projector projects each virtual stereo image onto the screen;
In the third step, the real camera captures each projected viewpoint image from the screen, giving the simulation system's final output images at that measurement instant;
In the fourth step, the virtual camera parameters are calculated, and the imaging model parameters of the real camera and of the projector are calibrated;
In the fifth step, the three-dimensional spatial coordinates of the virtual scene points in the images are obtained according to the basic principle of stereo vision.
Next, the virtual camera model is established:

s_v [u_v, v_v, 1]^T = M_1v M_2v [X_vw, Y_vw, Z_vw, 1]^T = M_v [X_vw, Y_vw, Z_vw, 1]^T

where [u_v, v_v, 1]^T are the homogeneous coordinates of a point in the virtual stereo vision image, [X_vw, Y_vw, Z_vw, 1]^T are the homogeneous three-dimensional coordinates of the virtual scene point, s_v is a scale factor, α_vx and α_vy are the scale factors of the virtual camera along the x and y axes, (u_v0, v_v0) is the virtual camera image center, and R_v and T_v are the rotation matrix and translation vector, i.e. the virtual camera external parameters. M_1v and M_2v are respectively the internal and external parameter matrices of the virtual camera, and M_v is the projection matrix of the virtual camera. Each element of the virtual camera parameter matrix M_v can be computed from the parameters set in the virtual-view software.
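A minimal numpy sketch of this linear virtual camera model, composing M_v = M_1v · M_2v and projecting a scene point (all numeric values are illustrative, not from the patent):

```python
import numpy as np

def virtual_camera_projection(alpha_x, alpha_y, u0, v0, R, T):
    """Build M_v = M_1v @ M_2v for the linear (pinhole) virtual camera."""
    M1 = np.array([[alpha_x, 0.0, u0, 0.0],
                   [0.0, alpha_y, v0, 0.0],
                   [0.0, 0.0, 1.0, 0.0]])   # internal parameter matrix M_1v
    M2 = np.eye(4)
    M2[:3, :3] = R                          # external parameter matrix M_2v
    M2[:3, 3] = T
    return M1 @ M2

def project(Mv, Xw):
    """Image coordinates (u_v, v_v) of world point Xw, scale s_v divided out."""
    uvw = Mv @ np.append(Xw, 1.0)
    return uvw[:2] / uvw[2]

Mv = virtual_camera_projection(800.0, 800.0, 512.0, 384.0,
                               np.eye(3), np.array([0.0, 0.0, 0.0]))
print(project(Mv, np.array([1.0, 0.5, 10.0])))  # [592. 424.]
```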
Next, the projector imaging model is established, with parameter definitions as before. Because projector imaging is a plane-to-plane projective transformation, we may set Z_pw = 0, which gives:

s_p m_p = M_1p [r_1p, r_2p, T_p] [X_pw, Y_pw, 1]^T

where m_p = [u_p, v_p, 1]^T, and r_1p, r_2p are respectively the 1st and 2nd columns of R_p.
Next, the real camera imaging model is established, again with parameter definitions as before. Camera imaging here is also a plane-to-plane projective transformation, so we may likewise set Z_cw = 0, which gives:

s_c m_c = M_1c [r_1c, r_2c, T_c] [X_cw, Y_cw, 1]^T

where m_c = [u_c, v_c, 1]^T, and r_1c, r_2c are respectively the 1st and 2nd columns of R_c.
Under real conditions, the camera is not an ideal perspective imager but exhibits distortion of various degrees; here only the first-order radial distortion coefficient k is considered. The calibration of this model's parameters and the associated transformation computations are described below.
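The first-order radial distortion model mentioned here can be sketched as follows: ideal image-plane coordinates are displaced radially by a factor (1 + k·r²), and undistortion inverts this by fixed-point iteration. The coefficient k below is invented for illustration:

```python
def distort(x, y, k):
    """Apply first-order radial distortion to ideal image-plane coords."""
    r2 = x * x + y * y
    s = 1.0 + k * r2
    return x * s, y * s

def undistort(xd, yd, k, iters=20):
    """Invert the model by fixed-point iteration: start from the distorted
    point and repeatedly divide by the factor evaluated at the estimate."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        x = xd / (1.0 + k * r2)
        y = yd / (1.0 + k * r2)
    return x, y

k = -0.05                  # illustrative distortion coefficient
xd, yd = distort(0.3, -0.2, k)
xu, yu = undistort(xd, yd, k)
print(round(xu, 6), round(yu, 6))  # 0.3 -0.2
```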
The parameters of the virtual camera are calculated below; the virtual camera image resolution is set here to 1024 × 768.
(1) Internal parameters

The internal parameter matrix of the virtual camera can be obtained from the field-of-view angle and the image resolution. As shown in Fig. 5, O is the viewpoint, plane P is the imaging plane, the resolution of the imaging plane is w × h, the physical pixel size is dx × dy, the scale factors along the u and v axes are α_x and α_y, the focal length of the virtual camera is f, and the vertical field-of-view angle is θ. From the geometric relations in Fig. 5:

tan(θ/2) = (h/2)·dy / f

from which:

α_y = f/dy = (h/2) / tan(θ/2)

The virtual camera is an ideal pinhole imaging model, so its u-axis and v-axis scale factors are equal, α_x = α_y = α, and the optical axis intersects the image plane at the image center. The internal parameter matrix of the virtual camera is thereby determined.
(2) External parameters

The external parameters R and T are solved by coordinate transformation, from the position of the viewpoint in the world coordinate system and the yaw, pitch, and roll angles of the viewing direction.

In the initial state of the virtual camera in the virtual world coordinate system (viewpoint at the origin; yaw, pitch, and roll angles all zero), shown in Fig. 6, the virtual world coordinate axes X, Y, Z are drawn as the black coordinate system, and the initial virtual camera axes are X_c0, Y_c0, Z_c0; that is, the viewing direction points along the X axis of the virtual world coordinate system. The rotation matrix R_v1, which takes the virtual camera coordinate system from coincidence with the virtual world coordinate system to the initial position in Fig. 6, follows from this geometry.

From the initial state, the virtual camera is rotated in the order roll first, then yaw, then pitch, which determines the final orientation of the virtual camera with rotation matrix R_v2; the viewpoint is then translated to the set position, completing the transformation from the virtual world coordinate system to the virtual camera coordinate system.
The coordinate rotation matrix R_v2 can be obtained with the Euler-angle representation. From the rotation order of the virtual camera defined above, the Euler rotation order is: about the Z axis → about the Y axis → about the X axis, with the sign of each rotation angle determined by the right-hand rule. Denoting the angles about the X, Y, and Z axes by α, β, and γ respectively, the rotation matrix can be derived, where α = θ_Pitch, β = θ_Yaw, γ = θ_Roll.

The rotation matrix between the virtual world coordinate system and the virtual camera coordinate system is thus R_v = R_v2 × R_v1.
If the coordinate of the virtual camera viewpoint in the virtual world coordinate system is T_vw, the translation vector is:

T_v = -(R_v × T_vw)
The virtual camera external parameter matrix is finally obtained as M_2v = [R_v, T_v; 0^T, 1].
The calibration of the projector imaging model and camera imaging model parameters is introduced below:
In practical application, we do not need to determine the parameters of the projector imaging model and of the camera imaging model in Fig. 3 separately; that is, we need not solve separately for the transformations from the projector image to the projection screen and from the projection screen to the camera. It suffices to determine the transformation between the camera and the projector image, so the camera and projector can be calibrated as a whole. Because the lens focal length is short (12 mm), the camera has considerable radial distortion, so the system must be calibrated with a nonlinear calibration method. The calibration steps are as follows:
First, according to the varifocal method proposed by Lenz and Tsai (R. K. Lenz and R. Y. Tsai, "Techniques for Calibration of the Scale Factor and Image Center for High Accuracy 3-D Machine Vision Metrology," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 10, no. 5, September 1988, pp. 713-720), the image center of the camera can be obtained.
Then, treating the projector model simply as a linear model, the projector computer image, the projected screen image, and the camera image preserve the cross-ratio, like four collinear points in space. Using the cross-ratio-invariance-based camera distortion coefficient calibration method proposed by Zhang Guangjun et al. (see Zhang Guangjun, "Machine Vision", Beijing: Science Press, 2005), the radial distortion coefficient of the camera can be calibrated.
Finally, on the basis of the obtained image center and radial distortion coefficient of the camera, the image coordinates (X, Y) in the camera coordinate system can be computed from the pixel image coordinates (u_c, v_c) of the target. The mapping from the camera image (X, Y) to the projector computer image (u_p, v_p) is a linear transformation, which can be expressed as:

s [u_p, v_p, 1]^T = H [X, Y, 1]^T    (1)

where H is a 3 × 3 matrix. Linear equations can be set up from formula (1) using multiple corner points on the target, and the matrix H can be obtained by least squares, which completes the calibration of the camera imaging model and the projector imaging model.
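Solving for H by least squares can be sketched with the standard DLT construction: each correspondence contributes two homogeneous linear equations, and the singular vector of the stacked system with the smallest singular value gives H up to scale (a generic sketch with invented data, not the patent's exact numerics):

```python
import numpy as np

def estimate_homography(src, dst):
    """Least-squares (DLT) homography from >= 4 point pairs.

    For each pair (X, Y) -> (u, v), s*[u, v, 1]^T = H*[X, Y, 1]^T yields
    two linear equations in the 9 entries of H; the right singular vector
    with smallest singular value minimizes ||A h|| subject to ||h|| = 1.
    """
    A = []
    for (X, Y), (u, v) in zip(src, dst):
        A.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
        A.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Synthesize correspondences from a known H, then recover it.
H_true = np.array([[1.2, 0.1, 5.0], [-0.05, 0.9, -3.0], [1e-4, 2e-4, 1.0]])
src = [(0.0, 0.0), (100.0, 0.0), (100.0, 80.0), (0.0, 80.0), (50.0, 40.0)]
def apply_h(H, p):
    q = H @ np.array([p[0], p[1], 1.0])
    return (q[0] / q[2], q[1] / q[2])
dst = [apply_h(H_true, p) for p in src]
print(np.round(estimate_homography(src, dst), 4))  # recovers H_true
```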
The advantages of the stereo vision hardware-in-the-loop simulation of the present invention are illustrated below with a specific embodiment.

First, a planar target image with resolution 1024 × 768 is generated by the computer connected to the projector, and the target image captured by the camera has resolution 768 × 576. The calibration results obtained with these images are shown in Table 1 below:

Table 1 System calibration results

The calibration error is the root-mean-square of the differences between the corner coordinates of the projector computer image computed, using the calibration results, by inverse mapping from target corners in the camera image that did not participate in the calibration, and the true corner coordinates of the projector computer image.

Then, a rectangular pyramid with a set of known spatial coordinates is designed in the virtual-view software, the corresponding parameters of two viewpoints are set, and two virtual stereo images from different viewpoints are obtained. The parameter settings of the two viewpoints are shown in Table 2:

Table 2 Parameter settings of the two viewpoints in the virtual-view software

The left and right virtual camera parameters can be obtained from the above parameters, as shown in Table 3:

Table 3 Virtual left and right camera parameters

Then, the two virtual-view images from the different viewpoints are projected onto the screen, and the left and right camera images are obtained by the camera.

The apex of the rectangular pyramid and some of its base corner points in the figure are chosen as objects for stereo vision three-dimensional coordinate computation. Using the stereo vision method with the above calibration results and the left and right virtual camera parameters, the three-dimensional coordinates of each point are computed as shown in Table 4:

Table 4 Three-dimensional coordinate results of the test points
Finally, a precision analysis is carried out. From the data in the table above, the mean square errors of the three-dimensional coordinates are computed as:

X: 3.189168
Y: 7.296316
Z: 3.576814
In the virtual scene, the viewpoint is located at an altitude of about 3000 meters above the rectangular pyramid. Because stereo vision measurement accuracy decreases greatly as the measuring range grows, we measure the accuracy of the stereo vision system by the ratio of the three-dimensional coordinate error of the scene to the measuring distance. From the experimental data above, the accuracy of stereo vision 3D reconstruction with this system is about 1/1000.
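The 1/1000 accuracy figure can be checked by dividing the coordinate errors above by the roughly 3000-meter viewing distance (the units are assumed to be consistent with the scene scale; the patent does not state them explicitly):

```python
# Coordinate mean square errors from the precision analysis above, and
# the ~3000 m viewpoint distance; units are assumed consistent.
errors = {"X": 3.189168, "Y": 7.296316, "Z": 3.576814}
distance = 3000.0

for axis, err in errors.items():
    print(f"{axis}: error/distance = 1/{distance / err:.0f}")
# Each ratio comes out within an order of magnitude of the claimed 1/1000.
```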
What has been described above is only a preferred embodiment of the present invention. It should be pointed out that those of ordinary skill in the art may make further modifications and improvements without departing from the principle of the invention, and these should also be regarded as falling within the protection scope of the present invention.
Claims (5)
1. A three-dimensional vision hardware-in-the-loop (semi-physical) simulation system, characterized in that it comprises a projector, a video camera, a camera pan-tilt, a projection screen, and two computers; each of the above parts is fixed to an indoor wall or a central support, and their relative positions do not change; the projector is connected to one computer and projects onto the projection screen the virtual stereo vision images generated by the virtual-view software on that computer; the video camera is fixed on the camera pan-tilt, the acquired image data are sent to the other computer through an image capture card, that computer controls the pan-tilt to select the camera's field of view, and the camera lens is chosen on the principle that the camera's field of view contains the image of the projector.
2. A method of realizing hardware-in-the-loop simulation using the three-dimensional vision hardware-in-the-loop simulation system of claim 1, characterized in that it comprises the following steps:
In the first step, the virtual-view software on the computer generates the virtual three-dimensional scene image and obtains stereoscopic virtual images from two or more different viewpoints at the same instant;
In the second step, the projector projects each virtual stereo image onto the screen;
In the third step, the real camera captures each projected viewpoint image from the screen, giving the simulation system's output images at that measurement instant;
In the fourth step, the virtual camera parameters are calculated, and the imaging model parameters of the real camera and of the projector are calibrated;
In the fifth step, the three-dimensional spatial coordinates of the virtual scene points in the images are calculated according to the triangulation principle of stereo vision.
3. The three-dimensional vision hardware-in-the-loop simulation method of claim 2, characterized in that the first step includes entering the following parameters in the virtual-view software of the computer: the vertical field-of-view angle θ of the perspective projection; the aspect ratio Aspect of the field of view; the distance NearPlane from the viewpoint to the near clipping plane; the distance FarPlane from the viewpoint to the far clipping plane; the spatial position of the viewpoint, i.e. its coordinates XPos, YPos, ZPos in the world coordinate system; and the viewing-direction yaw angle θ_Yaw, pitch angle θ_Pitch, and roll angle θ_Roll.
4. The three-dimensional vision hardware-in-the-loop simulation method of claim 2, characterized in that the fourth step calculates the virtual camera parameters from the parameters set in the virtual-view software of the computer, the imaging model being

s_v [u_v, v_v, 1]^T = M_1v M_2v [X_vw, Y_vw, Z_vw, 1]^T = M_v [X_vw, Y_vw, Z_vw, 1]^T

where [u_v, v_v, 1]^T are the homogeneous coordinates of the virtual stereo image point, [X_vw, Y_vw, Z_vw, 1]^T are the homogeneous three-dimensional coordinates of the virtual scene point, s_v is a scale factor, α_vx and α_vy are the virtual camera scale factors along the x and y axes, (u_v0, v_v0) is the virtual camera image center, and R_v and T_v are the rotation matrix and translation vector, i.e. the virtual camera external parameters; M_1v and M_2v are respectively the internal and external parameter matrices of the virtual camera, and M_v is the projection matrix of the virtual camera; each element of the virtual camera parameter matrix M_v can be computed from the parameters set in the virtual-view software.
5. The three-dimensional vision hardware-in-the-loop simulation method of claim 2, characterized in that in the fourth step the real camera imaging model parameters and the projector imaging model parameters are calibrated as a whole, specifically in the following steps:
First, the image center of the camera is obtained according to the varifocal method;
Then, taking the projector model simply as a linear model, the radial distortion coefficient of the camera is calibrated using the cross-ratio-invariance-based camera distortion coefficient calibration method;
Next, on the basis of the obtained image center and radial distortion coefficient of the camera, the image coordinates in the camera coordinate system are computed from the pixel image coordinates of the target;
Finally, the calibration of the camera imaging model and the projector imaging model is realized according to the linear transformation relation from the camera image to the projector computer image.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNB2006100836377A CN100417231C (en) | 2006-05-31 | 2006-05-31 | Three-dimensional vision semi-matter simulating system and method |
US11/561,696 US7768527B2 (en) | 2006-05-31 | 2006-11-20 | Hardware-in-the-loop simulation system and method for computer vision |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNB2006100836377A CN100417231C (en) | 2006-05-31 | 2006-05-31 | Three-dimensional vision semi-matter simulating system and method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN1897715A CN1897715A (en) | 2007-01-17 |
CN100417231C true CN100417231C (en) | 2008-09-03 |
Family
ID=37610052
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB2006100836377A Expired - Fee Related CN100417231C (en) | 2006-05-31 | 2006-05-31 | Three-dimensional vision semi-matter simulating system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN100417231C (en) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101742348A (en) * | 2010-01-04 | 2010-06-16 | 中国电信股份有限公司 | Rendering method and system |
CN102945637A (en) * | 2012-11-29 | 2013-02-27 | 河海大学 | Augmented reality based embedded teaching model and method |
CN104376560B (en) * | 2014-11-17 | 2024-04-09 | 国家电网公司 | Multifunctional camera calibration method and device based on optical projector |
CN104836953B (en) * | 2015-03-04 | 2019-01-15 | 深圳市祈锦通信技术有限公司 | Multi-projector screen characteristics point automatic camera and denoising recognition methods |
CN104715486B (en) * | 2015-03-25 | 2017-12-19 | 北京经纬恒润科技有限公司 | One kind emulation stand camera marking method and real-time machine |
CN105072433B (en) * | 2015-08-21 | 2017-03-22 | 山东师范大学 | Depth perception mapping method applied to head track virtual reality system |
CN109242752B (en) * | 2018-08-21 | 2020-08-21 | 吉林大学 | Method for acquiring moving image through analog acquisition and application |
CN109765798B (en) * | 2018-12-21 | 2021-09-28 | 北京电影学院 | Semi-physical simulation system for film and television photography |
CN109799073B (en) * | 2019-02-13 | 2021-10-22 | 京东方科技集团股份有限公司 | Optical distortion measuring device and method, image processing system, electronic equipment and display equipment |
CN111766951B (en) * | 2020-09-01 | 2021-02-02 | 北京七维视觉科技有限公司 | Image display method and apparatus, computer system, and computer-readable storage medium |
CN112750167B (en) * | 2020-12-30 | 2022-11-04 | 燕山大学 | Robot vision positioning simulation method and device based on virtual reality |
CN113992906B (en) * | 2021-09-22 | 2024-04-05 | 上海船舶工艺研究所(中国船舶集团有限公司第十一研究所) | Multi-channel synchronous simulation method of CAVE system based on Unity3D |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1302999A (en) * | 2001-02-23 | 2001-07-11 | 清华大学 | Method for reconstructing 3D contour of digital projection based on phase-shifting method |
CN1412524A (en) * | 2002-11-28 | 2003-04-23 | 武汉大学 | Method for measuring formation of seamless space stereomodel |
CN1482491A (en) * | 2002-09-15 | 2004-03-17 | 深圳市泛友科技有限公司 | Three-dimensional photographic technology |
US20040125205A1 (en) * | 2002-12-05 | 2004-07-01 | Geng Z. Jason | System and a method for high speed three-dimensional imaging |
CN1551974A (en) * | 2001-08-06 | 2004-12-01 | | Three dimensional imaging by projecting interference fringes and evaluating absolute phase mapping
CN1577050A (en) * | 2003-07-11 | 2005-02-09 | 精工爱普生株式会社 | Image processing system, projector,and image processing method |
CN1716313A (en) * | 2004-07-02 | 2006-01-04 | 四川华控图形科技有限公司 | Correcting method for curve projection geometry of artificial site |
- 2006-05-31 CN CNB2006100836377A patent/CN100417231C/en not_active Expired - Fee Related
Also Published As
Publication number | Publication date |
---|---|
CN1897715A (en) | 2007-01-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN100417231C (en) | Three-dimensional vision semi-matter simulating system and method | |
CN110296691B (en) | IMU calibration-fused binocular stereo vision measurement method and system | |
Zhang et al. | A UAV-based panoramic oblique photogrammetry (POP) approach using spherical projection | |
CN104330074B (en) | Intelligent surveying and mapping platform and realizing method thereof | |
CA2395257C (en) | Any aspect passive volumetric image processing method | |
CN108288292A (en) | A kind of three-dimensional rebuilding method, device and equipment | |
CN108717712A (en) | A kind of vision inertial navigation SLAM methods assumed based on ground level | |
CN102692214B (en) | Narrow space binocular vision measuring and positioning device and method | |
CN103226838A (en) | Real-time spatial positioning method for mobile monitoring target in geographical scene | |
CN108257183A (en) | A kind of camera lens axis calibrating method and device | |
CN110337674A (en) | Three-dimensional rebuilding method, device, equipment and storage medium | |
CN104034305B (en) | A kind of monocular vision is the method for location in real time | |
KR20130138247A (en) | Rapid 3d modeling | |
EP2022007A2 (en) | System and architecture for automatic image registration | |
CN106127745A (en) | The combined calibrating method of structure light 3 D visual system and line-scan digital camera and device | |
CN109559349A (en) | A kind of method and apparatus for calibration | |
US20230351625A1 (en) | A method for measuring the topography of an environment | |
CN109141226A (en) | The spatial point coordinate measuring method of one camera multi-angle | |
CN113205603A (en) | Three-dimensional point cloud splicing reconstruction method based on rotating platform | |
CN106920276A (en) | A kind of three-dimensional rebuilding method and system | |
Mahdy et al. | Projector calibration using passive stereo and triangulation | |
CN1878318A (en) | Three-dimensional small-sized scene rebuilding method based on dual-camera and its device | |
CN110049304A (en) | A kind of method and device thereof of the instantaneous three-dimensional imaging of sparse camera array | |
CN108279677A (en) | Track machine people's detection method based on binocular vision sensor | |
CN113034571B (en) | Object three-dimensional size measuring method based on vision-inertia |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | |
Granted publication date: 2008-09-03; Termination date: 2020-05-31 |