The wave equation is a second-order linear partial differential equation for the description of waves or standing wave fields such as mechanical waves (e.g. water waves, sound waves and seismic waves) or electromagnetic waves (including light waves). It arises in fields like acoustics, electromagnetism, and fluid dynamics.
This article focuses on waves in classical physics. Quantum physics uses an operator-based wave equation, often in the form of a relativistic wave equation.
The wave equation is a hyperbolic partial differential equation describing waves, including traveling and standing waves; the latter can be considered as linear superpositions of waves traveling in opposite directions. This article mostly focuses on the scalar wave equation describing waves in scalars by scalar functions u = u(x, y, z, t) of a time variable t (a variable representing time) and one or more spatial variables x, y, z (variables representing a position in the space under discussion). At the same time, there are vector wave equations describing waves in vectors, such as waves for an electric field, magnetic field, and magnetic vector potential, and elastic waves.

By comparison with vector wave equations, the scalar wave equation can be seen as a special case of the vector wave equations; in the Cartesian coordinate system, the scalar wave equation is the equation to be satisfied by each component (for each coordinate axis, such as the x component for the x axis) of a vector wave without sources of waves in the considered domain (i.e., space and time). For example, in the Cartesian coordinate system, for $\mathbf{E} = (E_x, E_y, E_z)$ as the representation of an electric vector field wave in the absence of wave sources, each coordinate axis component $E_i$ ($i = x, y, z$) must satisfy the scalar wave equation. Other scalar wave equation solutions u are for physical quantities in scalars such as pressure in a liquid or gas, or the displacement along some specific direction of particles of a vibrating solid away from their resting (equilibrium) positions.
The scalar wave equation is
$$\frac{\partial^2 u}{\partial t^2} = c^2 \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2} \right),$$
where

- c is a fixed non-negative real coefficient representing the propagation speed of the wave, and
- u = u(x, y, z, t) is a scalar field representing the displacement of the wave quantity from its rest value at position (x, y, z) and time t.
The equation states that, at any given point, the second derivative of u with respect to time is proportional to the sum of the second derivatives of u with respect to space, with the constant of proportionality being the square of the speed of the wave.
Using notations from vector calculus, the wave equation can be written compactly as
$$u_{tt} = c^2 \nabla^2 u \qquad \text{or} \qquad \Box u = 0,$$
where the double subscript denotes the second-order partial derivative with respect to time, $\nabla^2$ is the Laplace operator and $\Box$ the d'Alembert operator, defined as:
$$u_{tt} = \frac{\partial^2 u}{\partial t^2}, \qquad \nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}, \qquad \Box = \frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \nabla^2.$$
A solution to this (two-way) wave equation can be quite complicated. Still, it can be analyzed as a linear combination of simple solutions that are sinusoidal plane waves with various directions of propagation and wavelengths but all with the same propagation speed c. This analysis is possible because the wave equation is linear and homogeneous, so that any multiple of a solution is also a solution, and the sum of any two solutions is again a solution. This property is called the superposition principle in physics.
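As a concrete illustration of the superposition principle, the following short numerical sketch (not part of the original article; step sizes and waveforms are arbitrary choices) checks by finite differences that a sum of two sinusoidal plane waves with different wavelengths but the same speed c still satisfies the wave equation.

```python
import numpy as np

# Finite-difference check (illustrative step sizes and waveforms, not from the article):
# a superposition of two sinusoidal plane waves with different wavelengths but the same
# speed c still satisfies u_tt = c^2 u_xx.
c, dx, dt = 1.5, 1e-4, 1e-4
x = np.linspace(0.0, 1.0, 2001)

def u(x, t):
    # right-traveling wave plus a left-traveling wave of another wavelength
    return np.sin(2.0 * (x - c * t)) + 0.3 * np.cos(5.0 * (x + c * t))

t0 = 0.7
u_tt = (u(x, t0 + dt) - 2 * u(x, t0) + u(x, t0 - dt)) / dt**2   # second time derivative
u_xx = (u(x + dx, t0) - 2 * u(x, t0) + u(x - dx, t0)) / dx**2   # second space derivative
print(np.max(np.abs(u_tt - c**2 * u_xx)))   # ~0, up to discretization and round-off error
```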
The wave equation alone does not specify a physical solution; a unique solution is usually obtained by setting a problem with further conditions, such as initial conditions, which prescribe the amplitude and phase of the wave. Another important class of problems occurs in enclosed spaces specified by boundary conditions, for which the solutions represent standing waves, or harmonics, analogous to the harmonics of musical instruments.
The wave equation in one spatial dimension can be written as follows:
$$\frac{\partial^2 u}{\partial t^2} = c^2 \frac{\partial^2 u}{\partial x^2}.$$
This equation is typically described as having only one spatial dimension x, because the only other independent variable is the time t.
The wave equation in one space dimension can be derived in a variety of different physical settings. Most famously, it can be derived for the case of a string vibrating in a two-dimensional plane, with each of its elements being pulled in opposite directions by the force of tension.[2]
Another physical setting for derivation of the wave equation in one space dimension uses Hooke's law. In the theory of elasticity, Hooke's law is an approximation for certain materials, stating that the amount by which a material body is deformed (the strain) is linearly related to the force causing the deformation (the stress).
The wave equation in the one-dimensional case can be derived from Hooke's law in the following way: imagine an array of little weights of mass m interconnected with massless springs of length h. The springs have a spring constant of k.
Here the dependent variable u(x) measures the distance from the equilibrium of the mass situated at x, so that u(x) essentially measures the magnitude of a disturbance (i.e. strain) that is traveling in an elastic material. The resulting force exerted on the mass m at the location x + h is:
$$F_{\text{Hooke}} = F_{x+2h} + F_x = k\,[u(x + 2h, t) - u(x + h, t)] + k\,[u(x, t) - u(x + h, t)].$$
By equating the latter equation with
$$F_{\text{Newton}} = m\, a(t) = m\, \frac{\partial^2}{\partial t^2} u(x + h, t),$$
the equation of motion for the weight at the location x + h is obtained:
$$\frac{\partial^2}{\partial t^2} u(x + h, t) = \frac{k}{m}\,\big[u(x + 2h, t) - 2u(x + h, t) + u(x, t)\big].$$
If the array of weights consists of N weights spaced evenly over the length L = Nh of total mass M = Nm, and the total spring constant of the array is K = k/N, we can write the above equation as
$$\frac{\partial^2}{\partial t^2} u(x + h, t) = \frac{K L^2}{M}\,\frac{u(x + 2h, t) - 2u(x + h, t) + u(x, t)}{h^2}.$$
Taking the limit N → ∞, h → 0 and assuming smoothness, one gets
$$\frac{\partial^2 u(x, t)}{\partial t^2} = \frac{K L^2}{M}\,\frac{\partial^2 u(x, t)}{\partial x^2},$$
which follows from the definition of a second derivative. KL²/M is the square of the propagation speed in this particular case.
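The discrete chain above can also be simulated directly. The following minimal sketch (parameter values are illustrative assumptions, not from the article) integrates Newton's law for the weights and compares the measured pulse speed with the predicted value $\sqrt{KL^2/M}$.

```python
import numpy as np

# Discrete mass-spring chain from the derivation above: N weights of mass m = M/N joined
# by springs of stiffness k = KN and spacing h = L/N. A small pulse should travel at
# roughly the predicted speed sqrt(K L^2 / M) = L * sqrt(K / M).
N, L, M, K = 400, 1.0, 1.0, 400.0          # assumed chain parameters
h, m, k = L / N, M / N, K * N
c = L * np.sqrt(K / M)                     # predicted propagation speed

x = np.arange(N) * h
u = np.exp(-((x - 0.2) / 0.02) ** 2)       # initial Gaussian displacement of the weights
v = np.zeros(N)                            # weights start at rest

dt, t = 0.2 * h / c, 0.0
while t < 0.3 / c:                         # run until the right-moving pulse reaches x ~ 0.5
    f = np.zeros(N)
    f[1:-1] = k * (u[2:] - 2.0 * u[1:-1] + u[:-2])   # Hooke's-law force on each interior weight
    v += dt * f / m                        # Newton's second law
    u += dt * v
    t += dt

peak = x[N // 2 + np.argmax(u[N // 2:])]   # locate the right-moving half of the pulse
print("predicted speed:", c, " measured speed ~", (peak - 0.2) / t)
```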
In the case of a stress pulse propagating longitudinally through a bar, the bar acts much like an infinite number of springs in series and can be taken as an extension of the equation derived for Hooke's law. A uniform bar, i.e. of constant cross-section, made from a linear elastic material has a stiffness K given by
$$K = \frac{E A}{L},$$
where A is the cross-sectional area, and E is the Young's modulus of the material. The wave equation becomes
$$\frac{\partial^2 u(x, t)}{\partial t^2} = \frac{E A L}{M}\,\frac{\partial^2 u(x, t)}{\partial x^2}.$$
AL is equal to the volume of the bar, and therefore
$$\frac{A L}{M} = \frac{1}{\rho},$$
where ρ is the density of the material. The wave equation reduces to
$$\frac{\partial^2 u(x, t)}{\partial t^2} = \frac{E}{\rho}\,\frac{\partial^2 u(x, t)}{\partial x^2}.$$
The speed of a stress wave in a bar is therefore $\sqrt{E/\rho}$.
For the one-dimensional wave equation a relatively simple general solution may be found. Defining new variables[3]
$$\xi = x - ct, \qquad \eta = x + ct$$
changes the wave equation into
$$\frac{\partial^2 u}{\partial \xi\, \partial \eta} = 0,$$
which leads to the general solution
$$u(x, t) = F(\xi) + G(\eta) = F(x - ct) + G(x + ct).$$
In other words, the solution is the sum of a right-traveling function F and a left-traveling function G. "Traveling" means that the shape of these individual arbitrary functions with respect to x stays constant; however, the functions are translated left and right with time at the speed c. This was derived by Jean le Rond d'Alembert.[4]
Another way to arrive at this result is to factor the wave equation using two first-order differential operators:
$$\left[\frac{\partial}{\partial t} - c\frac{\partial}{\partial x}\right]\left[\frac{\partial}{\partial t} + c\frac{\partial}{\partial x}\right] u = 0.$$
Then, for our original equation, we can define
$$v \equiv \frac{\partial u}{\partial t} + c\frac{\partial u}{\partial x}$$
and find that we must have
$$\frac{\partial v}{\partial t} - c\frac{\partial v}{\partial x} = 0.$$
This advection equation can be solved by interpreting it as telling us that the directional derivative of v in the (1, −c) direction is 0. This means that the value of v is constant on characteristic lines of the form x + ct = x₀, and thus that v must depend only on x + ct, that is, have the form H(x + ct). Then, to solve the first (inhomogeneous) equation relating v to u, we can note that its homogeneous solution must be a function of the form F(x − ct), by logic similar to the above. Guessing a particular solution of the form G(x + ct), we find that
$$\left[\frac{\partial}{\partial t} + c\frac{\partial}{\partial x}\right] G(x + ct) = H(x + ct).$$
Expanding out the left side, rearranging terms, then using the change of variables s = x + ct simplifies the equation to
$$G'(s) = \frac{H(s)}{2c}.$$
This means we can find a particular solution G of the desired form by integration. Thus, we have again shown that u obeys u(x, t) = F(x - ct) + G(x + ct).[5]
For an initial-value problem, the arbitrary functions F and G can be determined to satisfy initial conditions:
$$u(x, 0) = f(x), \qquad u_t(x, 0) = g(x).$$
The result is d'Alembert's formula:
$$u(x, t) = \frac{f(x - ct) + f(x + ct)}{2} + \frac{1}{2c} \int_{x - ct}^{x + ct} g(s)\, ds.$$
In the classical sense, if f(x) ∈ C^k and g(x) ∈ C^{k−1}, then u(t, x) ∈ C^k. However, the waveforms F and G may also be generalized functions, such as the delta function. In that case, the solution may be interpreted as an impulse that travels to the right or the left.
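For illustration, d'Alembert's formula is easy to evaluate numerically. The sketch below (function names and test data are the author's own assumptions) computes the two traveling terms and approximates the integral of g by quadrature.

```python
import numpy as np

# Sketch of d'Alembert's formula:
#   u(x, t) = [f(x - ct) + f(x + ct)] / 2 + (1 / 2c) * integral of g over [x - ct, x + ct].
def dalembert(f, g, c, x, t, n=2001):
    s = np.linspace(x - c * t, x + c * t, n)          # quadrature nodes for the g-integral
    return 0.5 * (f(x - c * t) + f(x + c * t)) + np.trapz(g(s), s) / (2.0 * c)

# Example: Gaussian initial displacement, zero initial velocity.
f = lambda x: np.exp(-x ** 2)
g = lambda x: np.zeros_like(np.asarray(x, dtype=float))

c = 2.0
# The initial pulse splits into two half-height copies travelling at speed c.
print(dalembert(f, g, c, x=2.0, t=1.0))               # ~0.5: right-moving half passes x = 2
print(dalembert(f, g, c, x=0.0, t=5.0))               # ~0: both halves are long gone
```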
The basic wave equation is a linear differential equation, and so it will adhere to the superposition principle. This means that the net displacement caused by two or more waves is the sum of the displacements which would have been caused by each wave individually. In addition, the behavior of a wave can be analyzed by breaking up the wave into components, e.g. the Fourier transform breaks up a wave into sinusoidal components.
Another way to solve the one-dimensional wave equation is to first analyze its frequency eigenmodes. A so-called eigenmode is a solution that oscillates in time with a well-defined constant angular frequency ω, so that the temporal part of the wave function takes the form e^{−iωt} = cos(ωt) − i sin(ωt), and the amplitude is a function f(x) of the spatial variable x, giving a separation of variables for the wave function:
$$u_\omega(x, t) = e^{-i\omega t} f(x).$$
This produces an ordinary differential equation for the spatial part f(x):
$$\frac{\partial^2 u_\omega}{\partial t^2} = \frac{\partial^2}{\partial t^2}\left(e^{-i\omega t} f(x)\right) = -\omega^2 e^{-i\omega t} f(x) = c^2 \frac{\partial^2}{\partial x^2}\left(e^{-i\omega t} f(x)\right).$$
Therefore,
$$\frac{d^2}{dx^2} f(x) = -\left(\frac{\omega}{c}\right)^2 f(x),$$
which is precisely an eigenvalue equation for f(x), hence the name eigenmode. Known as the Helmholtz equation, it has the well-known plane-wave solutions
$$f(x) = A e^{\pm i k x},$$
with wave number k = ω/c.
The total wave function for this eigenmode is then the linear combination
$$u_\omega(x, t) = e^{-i\omega t}\left(A e^{-i k x} + B e^{i k x}\right) = A e^{-i(k x + \omega t)} + B e^{i(k x - \omega t)},$$
where complex numbers A, B depend in general on any initial and boundary conditions of the problem.
Eigenmodes are useful in constructing a full solution to the wave equation, because each of them evolves in time trivially with the phase factor $e^{-i\omega t}$, so that a full solution can be decomposed into an eigenmode expansion:
$$u(x, t) = \int_{-\infty}^{\infty} s(\omega)\, u_\omega(x, t)\, d\omega,$$
or in terms of the plane waves,
$$u(x, t) = \int_{-\infty}^{\infty} s_+(\omega)\, e^{-i(kx + \omega t)}\, d\omega + \int_{-\infty}^{\infty} s_-(\omega)\, e^{i(kx - \omega t)}\, d\omega = G(x + ct) + F(x - ct),$$
which is exactly in the same form as in the algebraic approach. Functions s±(ω) are known as the Fourier components and are determined by initial and boundary conditions. This is a so-called frequency-domain method, alternative to direct time-domain propagations, such as the FDTD method, of the wave packet u(x, t), which is complete for representing waves in the absence of time dilations. Completeness of the Fourier expansion for representing waves in the presence of time dilations has been challenged by chirp wave solutions allowing for time variation of ω.[6] The chirp wave solutions seem particularly implied by very large but previously inexplicable radar residuals in the flyby anomaly, and they differ from the sinusoidal solutions in being receivable at any distance only at proportionally shifted frequencies and time dilations, corresponding to past chirp states of the source.
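As an illustration of this frequency-domain method, the following sketch (a periodic toy problem whose discretization choices are assumptions, not taken from the article) expands an initial wave packet in plane-wave eigenmodes with an FFT and advances each mode by its trivial phase factor.

```python
import numpy as np

# Frequency-domain (eigenmode) propagation on a periodic domain: expand the initial data
# in plane-wave eigenmodes with an FFT and evolve each mode by its phase, omega = c|k|.
L, n, c = 20.0, 512, 1.0
x = np.linspace(0.0, L, n, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)        # wave numbers of the eigenmodes

f = np.exp(-((x - 5.0) ** 2))                       # initial displacement, zero initial velocity
f_hat = np.fft.fft(f)

t = 8.0
# With u_t(x, 0) = 0, each mode splits evenly into +k and -k travellers, giving cos(omega t).
u_hat = f_hat * np.cos(c * np.abs(k) * t)
u = np.real(np.fft.ifft(u_hat))                     # wave packet at time t

print(u.max())    # ~0.5: the packet has split into two half-amplitude travelling packets
```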
The vectorial wave equation (from which the scalar wave equation can be directly derived) can be obtained by applying a force equilibrium to an infinitesimal volume element. In a homogeneous continuum (Cartesian coordinate $\mathbf{x}$) with a constant modulus of elasticity $E$, a vectorial, elastic deflection $\mathbf{u}(\mathbf{x}, t)$ causes the stress tensor $\mathbf{T} = E\, \nabla \mathbf{u}$. The local equilibrium of a) the tension force $\nabla \cdot \mathbf{T} = E\, \Delta \mathbf{u}$ due to deflection and b) the inertial force $\rho\, \partial^2 \mathbf{u} / \partial t^2$ caused by the local acceleration can be written as
$$\rho\, \frac{\partial^2 \mathbf{u}}{\partial t^2} - E\, \Delta \mathbf{u} = \mathbf{0}.$$
By merging density ρ and elasticity module E, the sound velocity $c = \sqrt{E/\rho}$ results (material law). After insertion, the well-known governing wave equation for a homogeneous medium follows:[7]
$$\frac{\partial^2 \mathbf{u}}{\partial t^2} - c^2\, \Delta \mathbf{u} = \mathbf{0}.$$
(Note: Instead of the vectorial $\mathbf{u}(\mathbf{x}, t)$, only the scalar $u(x, t)$ can be used, i.e. waves travelling only along the $x$ axis, and the scalar wave equation follows as $\frac{\partial^2 u}{\partial t^2} - c^2 \frac{\partial^2 u}{\partial x^2} = 0$.)
The above vectorial partial differential equation of the 2nd order delivers two mutually independent solutions. From the quadratic velocity term, $c^2 = (+c)^2 = (-c)^2$, it can be seen that two waves travelling in opposite directions, $+c$ and $-c$, are possible; hence the designation "two-way wave equation". It can be shown for plane longitudinal wave propagation that the synthesis of two one-way wave equations leads to a general two-way wave equation, which with the d'Alembert operator factors as:[8]
$$\left(\frac{\partial}{\partial t} - c\frac{\partial}{\partial x}\right)\left(\frac{\partial}{\partial t} + c\frac{\partial}{\partial x}\right)\mathbf{u} = \mathbf{0}.$$
Therefore, the vectorial 1st-order one-way wave equation with waves travelling in a pre-defined propagation direction results[9] as
$$\frac{\partial \mathbf{u}}{\partial t} \pm c\, \frac{\partial \mathbf{u}}{\partial x} = \mathbf{0}.$$
A solution of the initial-value problem for the wave equation in three space dimensions can be obtained from the corresponding solution for a spherical wave. The result can then be also used to obtain the same solution in two space dimensions.
To obtain a solution with constant frequencies, apply the Fourier transform
$$\Psi(\mathbf{r}, t) = \int_{-\infty}^{\infty} \Psi(\mathbf{r}, \omega)\, e^{-i\omega t}\, d\omega,$$
which transforms the wave equation into an elliptic partial differential equation of the form:
$$\left(\nabla^2 + \frac{\omega^2}{c^2}\right)\Psi(\mathbf{r}, \omega) = 0.$$
This is the Helmholtz equation and can be solved using separation of variables. In spherical coordinates this leads to a separation of the radial and angular variables, writing the solution as:[10]
$$\Psi(\mathbf{r}, \omega) = \sum_{l,m} f_{lm}(r)\, Y_{lm}(\theta, \phi).$$
The angular part of the solution takes the form of spherical harmonics, and the radial function satisfies
$$\left[\frac{d^2}{dr^2} + \frac{2}{r}\frac{d}{dr} + k^2 - \frac{l(l+1)}{r^2}\right] f_l(r) = 0,$$
independent of m, with $k^2 = \omega^2 / c^2$. Substituting $f_l(r) = \frac{1}{\sqrt{r}}\, u_l(r)$ transforms the equation into
$$\left[\frac{d^2}{dr^2} + \frac{1}{r}\frac{d}{dr} + k^2 - \frac{\left(l + \tfrac{1}{2}\right)^2}{r^2}\right] u_l(r) = 0,$$
which is the Bessel equation.
Consider the case l = 0. Then there is no angular dependence and the amplitude depends only on the radial distance, i.e., Ψ(r, t) → u(r, t). In this case, the wave equation reduces to
$$\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right) u(r, t) = 0,$$
or
$$\left(\frac{\partial^2}{\partial r^2} + \frac{2}{r}\frac{\partial}{\partial r} - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right) u(r, t) = 0.$$
This equation can be rewritten as
$$\frac{\partial^2 (r u)}{\partial t^2} - c^2 \frac{\partial^2 (r u)}{\partial r^2} = 0,$$
where the quantity ru satisfies the one-dimensional wave equation. Therefore, there are solutions in the form
$$u(r, t) = \frac{1}{r} F(r - ct) + \frac{1}{r} G(r + ct),$$
where F and G are general solutions to the one-dimensional wave equation and can be interpreted as an outgoing and an incoming spherical wave, respectively. Outgoing waves can be generated by a point source, and they make possible sharp signals whose form is altered only by a decrease in amplitude as r increases. Such waves exist only in cases of space with odd dimensions.
For physical examples of solutions to the 3D wave equation that possess angular dependence, see dipole radiation.
Although the word "monochromatic" is not exactly accurate, since it refers to light or electromagnetic radiation with well-defined frequency, the idea is to discover the eigenmodes of the wave equation in three dimensions. Following the derivation in the previous section on plane-wave eigenmodes, if we again restrict our solutions to spherical waves that oscillate in time with well-defined constant angular frequency ω, then the transformed function ru(r, t) has simply plane-wave solutions:
$$r u(r, t) = A e^{i(\omega t \pm k r)},$$
or
$$u(r, t) = \frac{A}{r}\, e^{i(\omega t \pm k r)}.$$
From this we can observe that the peak intensity of the spherical-wave oscillation, characterized as the squared wave amplitude
$$I = |u(r, t)|^2 = \frac{|A|^2}{r^2},$$
drops at the rate proportional to 1/r², an example of the inverse-square law.
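A small numerical illustration of this behaviour (the pulse shape below is an arbitrary assumption) is that the peak amplitude of an outgoing spherical pulse falls off as 1/r, so the intensity falls off as 1/r².

```python
import numpy as np

# Toy check of the 1/r amplitude falloff of an outgoing spherical wave
# u(r, t) = F(r - ct) / r and the resulting inverse-square decay of the intensity.
F = lambda s: np.exp(-s ** 2)               # arbitrary outgoing pulse shape
c = 1.0

r = np.array([1.0, 2.0, 4.0, 8.0])
peak_amplitude = F(0.0) / r                 # peak of u as the pulse passes radius r (at t = r/c)
intensity = peak_amplitude ** 2

print(intensity)                            # drops as 1/r^2
print(intensity * r ** 2)                   # constant: the inverse-square law
```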
The wave equation is linear in u and is left unaltered by translations in space and time. Therefore, we can generate a great variety of solutions by translating and summing spherical waves. Let φ(ξ, η, ζ) be an arbitrary function of three independent variables, and let the spherical wave form F be a delta function. Let a family of spherical waves have center at (ξ, η, ζ), and let r be the radial distance from that point. Thus
$$r^2 = (x - \xi)^2 + (y - \eta)^2 + (z - \zeta)^2.$$
If u is a superposition of such waves with weighting function φ, then
$$u(t, x, y, z) = \frac{1}{4\pi c} \iiint \varphi(\xi, \eta, \zeta)\, \frac{\delta(r - ct)}{r}\, d\xi\, d\eta\, d\zeta;$$
the denominator 4πc is a convenience.
From the definition of the delta function, u may also be written as
$$u(t, x, y, z) = \frac{t}{4\pi} \iint_S \varphi(x + ct\alpha,\, y + ct\beta,\, z + ct\gamma)\, d\omega,$$
where α, β, and γ are coordinates on the unit sphere S, and dω is the area element on S. This result has the interpretation that u(t, x) is t times the mean value of φ on a sphere of radius ct centered at x:
$$u(t, x, y, z) = t\, M_{ct}[\varphi].$$
It follows that
$$u(0, x, y, z) = 0, \qquad u_t(0, x, y, z) = \varphi(x, y, z).$$
The mean value is an even function of t, and hence if
$$v(t, x, y, z) = \frac{\partial}{\partial t}\big(t\, M_{ct}[\psi]\big),$$
then
$$v(0, x, y, z) = \psi(x, y, z), \qquad v_t(0, x, y, z) = 0.$$
These formulas provide the solution for the initial-value problem for the wave equation. They show that the solution at a given point P = (t, x, y, z) depends only on the data on the sphere of radius ct that is intersected by the light cone drawn backwards from P. It does not depend upon data on the interior of this sphere. Thus the interior of the sphere is a lacuna for the solution. This phenomenon is called Huygens' principle. It is only true for odd numbers of space dimension, where for one dimension the integration is performed over the boundary of an interval with respect to the Dirac measure.[11][12]
In two space dimensions, the wave equation is
$$u_{tt} = c^2 (u_{xx} + u_{yy}).$$
We can use the three-dimensional theory to solve this problem if we regard u as a function in three dimensions that is independent of the third dimension. If
$$u(0, x, y) = 0, \qquad u_t(0, x, y) = \varphi(x, y),$$
then the three-dimensional solution formula becomes
$$u(t, x, y) = t\, M_{ct}[\varphi] = \frac{t}{4\pi} \iint_S \varphi(x + ct\alpha,\, y + ct\beta)\, d\omega,$$
where α and β are the first two coordinates on the unit sphere, and dω is the area element on the sphere. This integral may be rewritten as a double integral over the disc D with center (x, y) and radius ct:
$$u(t, x, y) = \frac{1}{2\pi c} \iint_D \frac{\varphi(\xi, \eta)}{\sqrt{(ct)^2 - (\xi - x)^2 - (\eta - y)^2}}\, d\xi\, d\eta.$$
It is apparent that the solution at (t, x, y) depends not only on the data on the light cone where
$$(x - \xi)^2 + (y - \eta)^2 = c^2 t^2,$$
but also on data that are interior to that cone.
We want to find solutions to u_tt − Δu = 0 for u : R^n × (0, ∞) → R with u(x, 0) = g(x) and u_t(x, 0) = h(x).[13]
Assume n ≥ 3 is an odd integer, and g ∈ C^{m+1}(R^n), h ∈ C^m(R^n) for m = (n + 1)/2. Let γ_n = 1 × 3 × 5 × ⋯ × (n − 2) and let
$$u(x, t) = \frac{1}{\gamma_n}\left[\partial_t \left(\frac{1}{t}\partial_t\right)^{\frac{n-3}{2}}\left(t^{n-2}\,\frac{1}{|\partial B(x,t)|}\int_{\partial B(x,t)} g\, dS\right) + \left(\frac{1}{t}\partial_t\right)^{\frac{n-3}{2}}\left(t^{n-2}\,\frac{1}{|\partial B(x,t)|}\int_{\partial B(x,t)} h\, dS\right)\right].$$
Then u ∈ C²(R^n × [0, ∞)), u_tt − Δu = 0 in R^n × (0, ∞), and u and u_t attain the initial data g and h in the limit t → 0⁺.
Assume n ≥ 2 is an even integer and g ∈ C^{m+1}(R^n), h ∈ C^m(R^n), for m = (n + 2)/2. Let γ_n = 2 × 4 × ⋯ × n and let
$$u(x, t) = \frac{1}{\gamma_n}\left[\partial_t \left(\frac{1}{t}\partial_t\right)^{\frac{n-2}{2}}\left(t^{n}\,\frac{1}{|B(x,t)|}\int_{B(x,t)} \frac{g(y)}{\sqrt{t^2 - |y - x|^2}}\, dy\right) + \left(\frac{1}{t}\partial_t\right)^{\frac{n-2}{2}}\left(t^{n}\,\frac{1}{|B(x,t)|}\int_{B(x,t)} \frac{h(y)}{\sqrt{t^2 - |y - x|^2}}\, dy\right)\right].$$
Then u ∈ C²(R^n × [0, ∞)), u_tt − Δu = 0 in R^n × (0, ∞), and u and u_t attain the initial data g and h in the limit t → 0⁺.
Consider the inhomogeneous wave equation in 1 + n dimensions,
$$\partial_t^2 u - c^2\, \nabla^2 u = s(x, t).$$
By rescaling time, we can set the wave speed c = 1.
Since the wave equation has order 2 in time, there are two impulse responses: an acceleration impulse and a velocity impulse. The effect of inflicting an acceleration impulse is to suddenly change the wave velocity ∂_t u. The effect of inflicting a velocity impulse is to suddenly change the wave displacement u.
For the acceleration impulse, $s(x, t) = \delta^n(x)\, \delta(t)$, where δ is the Dirac delta function. The solution to this case is called the Green's function G for the wave equation.
For the velocity impulse, $s(x, t) = \delta^n(x)\, \delta'(t)$, so if we solve the Green's function G, the solution for this case is just ∂_t G.
The main use of Green's functions is to solve initial value problems by Duhamel's principle, both for the homogeneous and the inhomogeneous case.
Given the Green's function G and initial conditions u(x, 0), ∂_t u(x, 0), the solution to the homogeneous wave equation is[14]
$$u = (\partial_t G) * u(\cdot, 0) + G * \partial_t u(\cdot, 0),$$
where the asterisk is convolution in space. More explicitly,
$$u(x, t) = \int (\partial_t G)(x - x', t)\, u(x', 0)\, dx' + \int G(x - x', t)\, \partial_t u(x', 0)\, dx'.$$
For the inhomogeneous case, the solution has one additional term by convolution over spacetime:
$$\int\!\!\int G(x - x', t - t')\, s(x', t')\, dx'\, dt'.$$
By a Fourier transform,
$$\hat{G}(\mathbf{k}, \omega) = \frac{1}{|\mathbf{k}|^2 - \omega^2}.$$
The ω term can be integrated by the residue theorem. It would require us to perturb the integral slightly either by $+i\epsilon$ or by $-i\epsilon$, because it is an improper integral. One perturbation gives the forward solution, and the other the backward solution.[15] The forward solution gives
$$G(\mathbf{x}, t) = \theta(t) \int \frac{d^n k}{(2\pi)^n}\, e^{i\mathbf{k}\cdot\mathbf{x}}\, \frac{\sin(|\mathbf{k}| t)}{|\mathbf{k}|}.$$
The integral can be solved by analytically continuing the Poisson kernel, giving a closed form whose normalization involves half the surface area of a (n + 1)-dimensional hypersphere.[14][16]
We can relate the Green's function in n dimensions to the Green's function in higher and lower dimensions.[17]
Given a source and a solution of a differential equation in n dimensions, we can trivially extend them to n + 1 dimensions by setting the additional dimension to be constant. Since the Green's function is constructed from the source and the solution, the Green's function in n + 1 dimensions integrates to the Green's function in n dimensions:
$$G_n(x, t) = \int_{-\infty}^{\infty} G_{n+1}(x, x_{n+1}, t)\, dx_{n+1}.$$
The Green's function in n + 2 dimensions can be related to the Green's function in n dimensions. By spherical symmetry, $G_n$ depends on $\mathbf{x}$ only through $r = |\mathbf{x}|$. Integrating the extension relation in polar coordinates over the two extra dimensions,
$$G_n(r, t) = 2\pi \int_0^{\infty} G_{n+2}\!\left(\sqrt{r^2 + \rho^2},\, t\right) \rho\, d\rho = 2\pi \int_r^{\infty} G_{n+2}(R, t)\, R\, dR,$$
where in the last equality we made the change of variables $R = \sqrt{r^2 + \rho^2}$. Thus, we obtain the recurrence relation
$$G_{n+2}(r, t) = -\frac{1}{2\pi r}\, \frac{\partial}{\partial r} G_n(r, t).$$
When n = 1, the integrand in the Fourier transform is the sinc function, and the integral evaluates to
$$\frac{1}{4}\big[\operatorname{sgn}(x + t) - \operatorname{sgn}(x - t)\big] = \tfrac{1}{2}\operatorname{sgn}(t)\, \theta(|t| - |x|),$$
where sgn is the sign function and θ is the unit step function. One solution is the forward solution, $G_1(x, t) = \tfrac{1}{2}\theta(t - |x|)$; the other is the backward solution.
The dimension can be raised using the recurrence relation to give the n = 3 case
$$G_3(r, t) = \frac{\delta(t - r)}{4\pi r},$$
and similarly for the backward solution. This can be integrated down by one dimension to give the n = 2 case
$$G_2(r, t) = \frac{\theta(t - r)}{2\pi\sqrt{t^2 - r^2}}.$$
In the n = 1 case, the Green's function solution is the sum of two wavefronts moving in opposite directions.
In odd dimensions, the forward solution is nonzero only on the light cone r = ct, where r = |x| and the wave speed c is restored. As the dimension increases, the shape of the wavefront becomes increasingly complex, involving higher derivatives of the Dirac delta function supported on the light cone.[17]
In even dimensions, the forward solution is nonzero in the entire region r < ct behind the wavefront, which is called a wake (for n = 2 it is the G₂ given above).[17] The wavefront itself also involves increasingly higher derivatives of the Dirac delta function.
This means that a general Huygens' principle – the wave displacement at a point in spacetime depends only on the state at points on characteristic rays passing through that point – only holds in odd dimensions. A physical interpretation is that signals transmitted by waves remain undistorted in odd dimensions, but are distorted in even dimensions.[18]: 698
Hadamard's conjecture states that this generalized Huygens' principle still holds in all odd dimensions even when the coefficients in the wave equation are no longer constant. It is not strictly correct in general, but it is correct for certain families of coefficients.[18]: 765
For an incident wave traveling from one medium (where the wave speed is c₁) to another medium (where the wave speed is c₂), one part of the wave will transmit into the second medium, while another part reflects back in the opposite direction and stays in the first medium. The amplitudes of the transmitted wave and the reflected wave can be calculated by using the continuity condition at the boundary.
Consider the component of the incident wave with an angular frequency of ω, which has the waveform
$$u^{\text{inc}}(x, t) = A e^{i\omega(t - x/c_1)}, \quad A \in \mathbb{C}.$$
At t = 0, the incident wave reaches the boundary between the two media at x = 0. Therefore, the corresponding reflected wave and the transmitted wave will have the waveforms
$$u^{\text{ref}}(x, t) = B e^{i\omega(t + x/c_1)}, \qquad u^{\text{trans}}(x, t) = C e^{i\omega(t - x/c_2)}, \quad B, C \in \mathbb{C}.$$
The continuity condition at the boundary is
$$u^{\text{inc}}(0, t) + u^{\text{ref}}(0, t) = u^{\text{trans}}(0, t), \qquad u_x^{\text{inc}}(0, t) + u_x^{\text{ref}}(0, t) = u_x^{\text{trans}}(0, t).$$
This gives the equations
$$A + B = C, \qquad A - B = \frac{c_1}{c_2}\, C,$$
and we have the reflectivity and transmissivity
$$\frac{B}{A} = \frac{c_2 - c_1}{c_2 + c_1}, \qquad \frac{C}{A} = \frac{2 c_2}{c_2 + c_1}.$$
When c₂ < c₁, the reflected wave has a reflection phase change of 180°, since B/A < 0. The energy conservation can be verified by
$$\frac{B^2}{c_1} + \frac{C^2}{c_2} = \frac{A^2}{c_1}.$$
The above discussion holds true for any component, regardless of its angular frequency ω.
The limiting case of c2 = 0 corresponds to a "fixed end" that does not move, whereas the limiting case of c2 → ∞ corresponds to a "free end".
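These coefficients and their limiting cases are simple to evaluate; the sketch below (variable names are the author's own) computes the reflectivity and transmissivity and checks the energy balance stated above.

```python
import numpy as np

# Reflection and transmission coefficients for a wave passing from a medium with
# speed c1 into one with speed c2, as derived above.
def reflect_transmit(c1, c2):
    reflectivity = (c2 - c1) / (c2 + c1)     # B/A
    transmissivity = 2.0 * c2 / (c2 + c1)    # C/A
    return reflectivity, transmissivity

c1, c2 = 2.0, 1.0                            # c2 < c1: expect a 180-degree phase flip (B/A < 0)
B, C = reflect_transmit(c1, c2)
print("B/A =", B, " C/A =", C)

# Energy conservation check (with A = 1): B^2/c1 + C^2/c2 == A^2/c1.
print(np.isclose(B ** 2 / c1 + C ** 2 / c2, 1.0 / c1))

# Limiting cases: a fixed end (c2 -> 0) gives B/A -> -1, a free end (c2 -> inf) gives B/A -> +1.
print(reflect_transmit(1.0, 1e-9)[0], reflect_transmit(1.0, 1e9)[0])
```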
A flexible string that is stretched between two points x = 0 and x = L satisfies the wave equation for t > 0 and 0 < x < L. On the boundary points, u may satisfy a variety of boundary conditions. A general form that is appropriate for applications is
$$-u_x(0, t) + a\, u(0, t) = 0, \qquad u_x(L, t) + b\, u(L, t) = 0,$$
where a and b are non-negative. The case where u is required to vanish at an endpoint (i.e. "fixed end") is the limit of this condition when the respective a or b approaches infinity. The method of separation of variables consists in looking for solutions of this problem in the special form
$$u(x, t) = T(t)\, v(x).$$
A consequence is that
$$\frac{T''}{c^2 T} = \frac{v''}{v} = -\lambda.$$
The eigenvalue λ must be determined so that there is a non-trivial solution of the boundary-value problem
$$v'' + \lambda v = 0, \qquad -v'(0) + a\, v(0) = 0, \qquad v'(L) + b\, v(L) = 0.$$
This is a special case of the general problem of Sturm–Liouville theory. If a and b are positive, the eigenvalues are all positive, and the solutions are trigonometric functions. A solution that satisfies square-integrable initial conditions for u and ut can be obtained from expansion of these functions in the appropriate trigonometric series.
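The following sketch (discretization and the plucked initial shape are illustrative assumptions) builds such a trigonometric-series solution for a string fixed at both ends and verifies that it reproduces the initial data and returns to it after one period 2L/c.

```python
import numpy as np

# Separation-of-variables solution for a string fixed at both ends: eigenfunctions
# sin(n pi x / L), eigenfrequencies n pi c / L, and a trigonometric-series expansion
# of a plucked initial shape with zero initial velocity.
L, c, n_modes = 1.0, 1.0, 50
x = np.linspace(0.0, L, 401)
f = np.minimum(x, L - x)                                    # triangular "plucked string" shape

# Sine-series coefficients of the initial shape (expansion in the eigenfunctions).
coeffs = [2.0 / L * np.trapz(f * np.sin(n * np.pi * x / L), x) for n in range(1, n_modes + 1)]

def u(t):
    # Each eigenmode keeps its spatial shape and oscillates at its own frequency n*pi*c/L.
    return sum(a * np.sin(n * np.pi * x / L) * np.cos(n * np.pi * c * t / L)
               for n, a in enumerate(coeffs, start=1))

print(np.max(np.abs(u(0.0) - f)))           # small: the truncated series reproduces f(x)
print(np.max(np.abs(u(2.0 * L / c) - f)))   # the string returns to its initial shape after one period
```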
The one-dimensional initial-boundary value theory may be extended to an arbitrary number of space dimensions. Consider a domain D in m-dimensional x space, with boundary B. Then the wave equation is to be satisfied if x is in D and t > 0. On the boundary of D, the solution u shall satisfy
$$\frac{\partial u}{\partial n} + a\, u = 0,$$
where n is the unit outward normal to B, and a is a non-negative function defined on B. The case where u vanishes on B is a limiting case for a approaching infinity. The initial conditions are
$$u(0, x) = f(x), \qquad u_t(0, x) = g(x),$$
where f and g are defined in D. This problem may be solved by expanding f and g in the eigenfunctions of the Laplacian in D, which satisfy the boundary conditions. Thus the eigenfunction v satisfies
$$\nabla^2 v + \lambda v = 0$$
in D, and
$$\frac{\partial v}{\partial n} + a\, v = 0$$
on B.
In the case of two space dimensions, the eigenfunctions may be interpreted as the modes of vibration of a drumhead stretched over the boundary B. If B is a circle, then these eigenfunctions have an angular component that is a trigonometric function of the polar angle θ, multiplied by a Bessel function (of integer order) of the radial component. Further details are in Helmholtz equation.
If the boundary is a sphere in three space dimensions, the angular components of the eigenfunctions are spherical harmonics, and the radial components are Bessel functions of half-integer order.
The inhomogeneous wave equation in one dimension is
$$u_{tt}(x, t) - c^2 u_{xx}(x, t) = s(x, t),$$
with initial conditions
$$u(x, 0) = f(x), \qquad u_t(x, 0) = g(x).$$
The function s(x, t) is often called the source function because in practice it describes the effects of the sources of waves on the medium carrying them. Physical examples of source functions include the force driving a wave on a string, or the charge or current density in the Lorenz gauge of electromagnetism.
One method to solve the initial-value problem (with the initial values as posed above) is to take advantage of a special property of the wave equation in an odd number of space dimensions, namely that its solutions respect causality. That is, for any point (x_i, t_i), the value of u(x_i, t_i) depends only on the values of f(x_i + ct_i) and f(x_i − ct_i) and the values of the function g(x) between (x_i − ct_i) and (x_i + ct_i). This can be seen in d'Alembert's formula, stated above, where these quantities are the only ones that show up in it. Physically, if the maximum propagation speed is c, then no part of the wave that cannot propagate to a given point by a given time can affect the amplitude at the same point and time.
In terms of finding a solution, this causality property means that for any given point on the line being considered, the only area that needs to be considered is the area encompassing all the points that could causally affect the point being considered. Denote the region that causally affects point (x_i, t_i) as R_C. Suppose we integrate the inhomogeneous wave equation over this region:
$$\iint_{R_C} \big(u_{tt}(x, t) - c^2 u_{xx}(x, t)\big)\, dx\, dt = \iint_{R_C} s(x, t)\, dx\, dt.$$
To simplify this greatly, we can use Green's theorem to simplify the left side to get the following (with the boundary of R_C traversed counterclockwise):
$$\int_{L_0 + L_1 + L_2} \big({-}c^2 u_x(x, t)\, dt - u_t(x, t)\, dx\big) = \iint_{R_C} s(x, t)\, dx\, dt.$$
The left side is now the sum of three line integrals along the bounds of the causality region. These turn out to be fairly easy to compute:
$$\int_{x_i - c t_i}^{x_i + c t_i} -u_t(x, 0)\, dx = -\int_{x_i - c t_i}^{x_i + c t_i} g(x)\, dx.$$
In the above, the term to be integrated with respect to time disappears because the time interval involved is zero, thus dt = 0.
For the other two sides of the region, it is worth noting that x ± ct is a constant, namely x_i ± ct_i, where the sign is chosen appropriately. Using this, we can get the relation dx ± c dt = 0, again choosing the right sign:
$$\int_{L_1} \big({-}c^2 u_x(x, t)\, dt - u_t(x, t)\, dx\big) = \int_{L_1} c\, du = c\, u(x_i, t_i) - c\, f(x_i + c t_i).$$
And similarly for the final boundary segment:
$$\int_{L_2} \big({-}c^2 u_x(x, t)\, dt - u_t(x, t)\, dx\big) = c\, u(x_i, t_i) - c\, f(x_i - c t_i).$$
Adding the three results together and putting them back in the original integral gives
$$2 c\, u(x_i, t_i) - c\, f(x_i + c t_i) - c\, f(x_i - c t_i) - \int_{x_i - c t_i}^{x_i + c t_i} g(x)\, dx = \iint_{R_C} s(x, t)\, dx\, dt.$$
Solving for u(x_i, t_i), we arrive at
$$u(x_i, t_i) = \frac{f(x_i + c t_i) + f(x_i - c t_i)}{2} + \frac{1}{2c} \int_{x_i - c t_i}^{x_i + c t_i} g(x)\, dx + \frac{1}{2c} \int_0^{t_i} \int_{x_i - c(t_i - t)}^{x_i + c(t_i - t)} s(x, t)\, dx\, dt.$$
In the last equation of the sequence, the bounds of the integral over the source function have been made explicit. Looking at this solution, which is valid for all choices (x_i, t_i) compatible with the wave equation, it is clear that the first two terms are simply d'Alembert's formula, as stated above as the solution of the homogeneous wave equation in one dimension. The difference is in the third term, the integral over the source.
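The full formula can be checked numerically. The rough sketch below (quadrature scheme and test case are the author's assumptions) evaluates d'Alembert's two terms plus the source integral over the causal triangle and reproduces the exact solution u = t²/2 for a constant unit source with zero initial data.

```python
import numpy as np

# Solution of the inhomogeneous 1-D wave equation: d'Alembert's two terms plus the
# integral of the source s over the backward characteristic triangle R_C.
def u(xi, ti, f, g, s, c=1.0, n=400):
    xs = np.linspace(xi - c * ti, xi + c * ti, n)
    g_term = np.trapz(g(xs), xs) / (2.0 * c)                # initial-velocity term

    # Double integral of the source over the causal triangle (simple Riemann/trapezoid rule).
    ts = np.linspace(0.0, ti, n)
    src = 0.0
    for t in ts[:-1]:
        xr = np.linspace(xi - c * (ti - t), xi + c * (ti - t), n)
        src += np.trapz(s(xr, t), xr) * (ts[1] - ts[0])

    return 0.5 * (f(xi - c * ti) + f(xi + c * ti)) + g_term + src / (2.0 * c)

# Example: zero initial data and a constant unit source. The exact solution of
# u_tt - u_xx = 1 with zero initial data is u = t^2 / 2, independent of x.
f = lambda x: 0.0 * x
g = lambda x: 0.0 * x
s = lambda x, t: np.ones_like(x)

print(u(0.0, 2.0, f, g, s))    # ~2.0 = t^2 / 2
```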
The elastic wave equation (also known as the Navier–Cauchy equation) in three dimensions describes the propagation of waves in an isotropic homogeneous elastic medium. Most solid materials are elastic, so this equation describes such phenomena as seismic waves in the Earth and ultrasonic waves used to detect flaws in materials. While linear, this equation has a more complex form than the equations given above, as it must account for both longitudinal and transverse motion:
$$\rho\, \ddot{\mathbf{u}} = \mathbf{f} + (\lambda + 2\mu)\, \nabla(\nabla \cdot \mathbf{u}) - \mu\, \nabla \times (\nabla \times \mathbf{u}),$$
where:

- λ and μ are the so-called Lamé parameters describing the elastic properties of the medium,
- ρ is the density,
- f is the source function (driving force), and
- u is the displacement vector.
By using ∇ × (∇ × u) = ∇(∇ ⋅ u) − ∇ ⋅ ∇ u = ∇(∇ ⋅ u) − ∆u, the elastic wave equation can be rewritten into the more common form of the Navier–Cauchy equation.
Note that in the elastic wave equation, both force and displacement are vector quantities. Thus, this equation is sometimes known as the vector wave equation. As an aid to understanding, the reader will observe that if f and ∇ ⋅ u are set to zero, this becomes (effectively) Maxwell's equation for the propagation of the electric field E, which has only transverse waves.
In dispersive wave phenomena, the speed of wave propagation varies with the wavelength of the wave, which is reflected by a dispersion relation
$$\omega = \omega(\mathbf{k}),$$
where ω is the angular frequency, and k is the wavevector describing plane-wave solutions. For light waves, the dispersion relation is ω = ±c|k|, but in general, the constant speed c gets replaced by a variable phase velocity:
$$v_{\text{p}} = \frac{\omega(k)}{k}.$$
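For a concrete example of dispersion (the particular dispersion relation below is an illustrative choice, not taken from this article), deep-water gravity waves obey ω(k) = √(gk), so their phase velocity depends on the wavelength:

```python
import numpy as np

# Example of a dispersive medium: deep-water gravity waves with omega(k) = sqrt(g k).
# The phase velocity v_p = omega / k then depends on the wavelength.
g = 9.81                                          # gravitational acceleration in m/s^2
wavelengths = np.array([1.0, 10.0, 100.0])        # metres
k = 2.0 * np.pi / wavelengths
omega = np.sqrt(g * k)
print(omega / k)                                  # phase velocities: longer waves travel faster

# By contrast, omega = c |k| (light in vacuum) gives the same phase velocity c for every k.
```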