Wikipedia:Reference desk/Archives/Mathematics/2007 June 1
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
June 1
quadratic form
Here is a puzzle. Let us assume I have a series of vectors v_1, …, v_m in R^N and real numbers c_1, …, c_m. Under what circumstances does there exist a quadratic form Q with Q(v_i) = c_i for all i? Under what circumstances does there exist a positive definite Q? Obviously, a sufficient condition is that the v_i are linearly independent (and, for positive definiteness, the c_i positive), but this is far from necessary. –Joke 00:36, 1 June 2007 (UTC)
- This looks like a linear algebra problem. To determine what quadratic form takes the values we want, we plug the components of each vector into the general quadratic form and solve the resulting system of equations for the coefficients. Let me demonstrate for the case N = 2. Say we have three points p_1, p_2, p_3 in R^2. The general quadratic form in two variables is Q(x, y) = a_11 x^2 + 2 a_12 xy + a_22 y^2. Thus, to solve for the coefficients with the given points, we look at the system
- M a = c, where row i of the matrix M is (p_{i,1}^2, 2 p_{i,1} p_{i,2}, p_{i,2}^2), a = (a_11, a_12, a_22)^T, and c = (c_1, c_2, c_3)^T,
- and where p_{i,j} is the j'th component of p_i. If you have more than three points, then you will still be able to solve the problem if the rank of the augmented matrix [M | c] equals the rank of M (which is at most 3). In the general case, a quadratic form in N variables has N(N+1)/2 terms, so you want the corresponding (possibly larger) system to be consistent in the same sense. nadav (talk) 18:59, 1 June 2007 (UTC)
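A quick numerical sketch of the procedure nadav describes, assuming NumPy is available. The three points and target values below are made-up illustrations, and the ordering of the coefficients (a_11, a_12, a_22) is my own convention:

```python
import numpy as np

# Hypothetical data: three points in R^2 and target values c_i.
points = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [1.0, 1.0]])
c = np.array([2.0, 3.0, 7.0])

# Row for point (x, y): coefficients of a11*x^2 + 2*a12*x*y + a22*y^2.
M = np.column_stack([points[:, 0] ** 2,
                     2 * points[:, 0] * points[:, 1],
                     points[:, 1] ** 2])

a11, a12, a22 = np.linalg.solve(M, c)

# The symmetric matrix A of the quadratic form Q(v) = v^T A v.
A = np.array([[a11, a12],
              [a12, a22]])

# Sanity check: Q(p_i) should reproduce each c_i.
for p, ci in zip(points, c):
    assert np.isclose(p @ A @ p, ci)
print(A)
```

With more points than coefficients, `np.linalg.lstsq` would be the natural replacement for `solve`, and an exact solution exists only when the system is consistent, as noted above.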
Yes, a simple criterion for the existence of a solution (for generic c_i) is that the outer products v_i v_i^T all be linearly independent. But what about positive definiteness? If Q is positive definite, then the corresponding bilinear form B, with B(v, v) = Q(v), must satisfy the Cauchy–Schwarz inequality. Then if
v_k = v_i + v_j,
so that by polarization B(v_i, v_j) = (c_k − c_i − c_j)/2, it must follow that
(c_k − c_i − c_j)^2 ≤ 4 c_i c_j.
This condition is necessary, but is it sufficient? Is there a simpler way to express it? –Joke 00:18, 2 June 2007 (UTC)
- Yes, I'm sorry, I missed the most important part of your question. I'll give this more thought. nadav (talk) 01:11, 2 June 2007 (UTC)
further math calculus
how do i find the 1st, 2nd and 3rd derivatives of ln(1 − x)? also, could someone explain to me whether i am correct in saying the domain of a function is the y range and the codomain is the x? i am doing the mei ocr a-level module further pure 2, and i am struggling to find an easy method to invert a 3×3 matrix. my calculator does it, but i need to be able to do it by hand too. thank you for your help
- Well, this is kind of a disjointed post, but I think I can interpret your first question. You want to go back to the definition of the logarithm function; that is, ln x = ∫_1^x dt/t. To get the first derivative, you can use the first fundamental theorem of calculus (together with the chain rule, since the argument here is 1 − x); to get the other two, the quotient and/or chain rules can help you out. –King Bee (τ • γ) 17:00, 1 June 2007 (UTC)
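Hand computations like this can be checked symbolically; a minimal sketch, assuming SymPy is available:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.log(1 - x)

# First, second, and third derivatives of ln(1 - x).
f1 = sp.diff(f, x)     # expected: -1/(1 - x)
f2 = sp.diff(f, x, 2)  # expected: -1/(1 - x)**2
f3 = sp.diff(f, x, 3)  # expected: -2/(1 - x)**3

print(f1, f2, f3)
```

This also illustrates the pattern: each differentiation brings down another power of (1 − x) in the denominator via the chain rule.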
- As for your matrix question, there is never really an "easy" way to invert an arbitrary matrix. However, reading up on adjugate matrix should help you figure out how to invert a 3 x 3 matrix. –King Bee (τ • γ) 17:06, 1 June 2007 (UTC)
- To invert a matrix you can use Gauss–Jordan elimination. To do that, you need to know about the elementary row operations. —Bromskloss 17:51, 1 June 2007 (UTC)
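The adjugate method King Bee points to, A⁻¹ = adj(A)/det(A), can be written out directly for the 3×3 case; a sketch in plain Python (the sample matrix at the bottom is arbitrary):

```python
def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]."""
    return a * d - b * c

def inverse_3x3(A):
    """Invert a 3x3 matrix by the adjugate (cofactor) formula."""
    # Cofactor C[i][j]: (-1)^(i+j) times the minor obtained by
    # deleting row i and column j.
    C = [[0.0] * 3 for _ in range(3)]
    for i in range(3):
        for j in range(3):
            rows = [r for r in range(3) if r != i]
            cols = [s for s in range(3) if s != j]
            minor = det2(A[rows[0]][cols[0]], A[rows[0]][cols[1]],
                         A[rows[1]][cols[0]], A[rows[1]][cols[1]])
            C[i][j] = (-1) ** (i + j) * minor
    # Expand det(A) along the first row.
    det = sum(A[0][j] * C[0][j] for j in range(3))
    if det == 0:
        raise ValueError("matrix is singular")
    # adj(A) is the transpose of the cofactor matrix.
    return [[C[j][i] / det for j in range(3)] for i in range(3)]

A = [[2.0, 0.0, 1.0],
     [1.0, 1.0, 0.0],
     [0.0, 1.0, 1.0]]
print(inverse_3x3(A))
```

This is exactly the by-hand recipe: nine 2×2 cofactors, one determinant, one transpose. Gauss–Jordan elimination is usually less error-prone on paper, but the cofactor route is mechanical.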
- If f is the function in question, and you are referring to the usual conventional roles of x and y as in y = f(x), then the domain of the function f normally means the set of x-values for which f(x) is defined. For example, for the conventional (principal) square root function the domain consists of the non-negative real numbers. That is true for the square root function on the real numbers, but we can also define it on the complex numbers, or consider only integers, or even only perfect squares as the input range. When defining or identifying a function f really properly, one should identify up front what kind of mathematical objects A can serve as input, and what kind of mathematical objects B can be output. This is then denoted as:
- f : A → B,
- and A is called the domain, and B the codomain of f. For example, for the conventional square root function we can take
- √ : R+ ∪ {0} → R
- (in which R+ ∪ {0} stands for the positive reals together with 0), and for the principal square root function on the complex numbers
- √ : C → C.
- As this shows, it is in general essential to define the domain, and not just use a definition of the form f(x) = ..., since there may be different functions fitting that definition, with different properties (the first √ function above is continuous, the second is not). Note further that we can equally validly choose
- √ : R+ ∪ {0} → R+ ∪ {0}.
- So the choice of codomain is also not determined by a definition of the form f(x) = ..., even if the domain is given. In everyday mathematics the precise choice is usually less critical, and anything encompassing the range of the function will do. --LambiamTalk 19:28, 1 June 2007 (UTC)
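Lambiam's point that the same rule with different domain and codomain choices yields genuinely different functions can be illustrated in code; a small sketch (function names are my own, not standard):

```python
import math
import cmath

# Same rule x -> sqrt(x), two different (domain, codomain) choices.

def sqrt_nonneg_reals(x: float) -> float:
    """sqrt : R+ ∪ {0} -> R (real principal square root)."""
    if x < 0:
        raise ValueError("outside the domain R+ ∪ {0}")
    return math.sqrt(x)

def sqrt_complex(z: complex) -> complex:
    """sqrt : C -> C (principal complex square root)."""
    return cmath.sqrt(z)

print(sqrt_nonneg_reals(4.0))  # stays inside the codomain R
print(sqrt_complex(-4.0))      # a value the real version can never return
```

The two functions agree wherever both are defined, yet they have different domains, different codomains, and (as noted above) different properties such as continuity; that is why the rule f(x) = ... alone does not pin down the function.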