Source: www.math.ucla.edu
3 Linear Transformations of the Plane

Now that we're using matrices to represent linear transformations, we'll find ourselves encountering a wide range of transformations and matrices; it can become difficult to keep track of which transformations do what. In these notes we'll develop a tool box of basic transformations which can easily be remembered by their geometric properties.

We'll focus on linear transformations $T : \mathbb{R}^2 \to \mathbb{R}^2$ of the plane to itself, and thus on the $2 \times 2$ matrices $A$ corresponding to these transformations. Perhaps the most important fact to keep in mind as we determine the matrices corresponding to different transformations is that the first and second columns of $A$ are given by $T(e_1)$ and $T(e_2)$, respectively, where $e_1$ and $e_2$ are the standard unit vectors in $\mathbb{R}^2$.

3.1 Scaling

The first transformation of $\mathbb{R}^2$ that we want to consider is that of scaling every vector by some factor $k$. That is, $T(x) = kx$ for every $x \in \mathbb{R}^2$. If $k = 1$, then $T$ does nothing. In this case, $T(e_1) = e_1$ and $T(e_2) = e_2$, so the columns of the corresponding matrix $A$ are $e_1$ and $e_2$:
\[
A = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.
\]
We call this the identity matrix (of size 2) and denote it either as $I_2$ or as $I$ when the size is obvious. For any other scale factor $k$ we have
\[
T(e_1) = \begin{pmatrix} k \\ 0 \end{pmatrix} \quad\text{and}\quad T(e_2) = \begin{pmatrix} 0 \\ k \end{pmatrix},
\]
so the corresponding matrix is given by
\[
A = \begin{pmatrix} k & 0 \\ 0 & k \end{pmatrix}.
\]
Now suppose we have two scaling maps: $T_1$, which scales by a factor of $k_1$, and $T_2$, which scales by a factor of $k_2$. Then $T_2 \circ T_1$ is the transformation that scales by a factor of $k_1$ and then by $k_2$, which is to say that it scales by a factor of $k_2 k_1$. This means that its matrix representation should be given by
\[
A = \begin{pmatrix} k_2 k_1 & 0 \\ 0 & k_2 k_1 \end{pmatrix},
\]
and indeed we can easily check that
\[
A_2 A_1 = \begin{pmatrix} k_2 & 0 \\ 0 & k_2 \end{pmatrix} \begin{pmatrix} k_1 & 0 \\ 0 & k_1 \end{pmatrix} = \begin{pmatrix} k_2 k_1 & 0 \\ 0 & k_2 k_1 \end{pmatrix} = A.
\]
This agrees with our notion of matrix multiplication representing composition of linear transformations.

Figure 2: Orthogonal projection of v onto w.
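The composition of scalings is easy to check numerically. Here is a minimal sketch in plain Python (the helper names `matmul2` and `scaling` are ours, not from the notes):

```python
# Sketch: represent 2x2 matrices as nested lists and check that composing
# two scalings multiplies the scale factors, i.e. A2 * A1 = scaling(k2 * k1).

def matmul2(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def scaling(k):
    """Matrix of the transformation T(x) = k*x on R^2."""
    return [[k, 0.0], [0.0, k]]

k1, k2 = 3.0, 0.5
A = matmul2(scaling(k2), scaling(k1))
print(A)  # [[1.5, 0.0], [0.0, 1.5]], i.e. scaling(k2 * k1)
```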
3.2 Orthogonal Projection

The next linear transformation we'd like to consider is that of projecting vectors onto a line in $\mathbb{R}^2$. First we have to consider what it means to project one vector onto another. Take a look at Figure 2, where we're projecting the vector $v$ onto $w$ orthogonally. What we mean by orthogonal projection is that the displacement vector $\mathrm{proj}_w v - v$ is orthogonal to the vector $w$, as seen in Figure 2.

We can see that $\mathrm{proj}_w v$ will be a scalar multiple of $w$, so let's write $\mathrm{proj}_w v = kw$. Since we require that $\mathrm{proj}_w v - v$ be orthogonal to $w$, we have
\[
(kw - v) \cdot w = 0.
\]
That is, $kw \cdot w - v \cdot w = 0$, so
\[
k = \frac{v \cdot w}{w \cdot w}.
\]
This means that we have
\[
\mathrm{proj}_w v = \frac{v \cdot w}{w \cdot w}\, w.
\]
Now notice that if we project $v$ onto any vector which is a nonzero scalar multiple of $w$, the resulting vector will be the same as $\mathrm{proj}_w v$. So really we're projecting $v$ onto the line $L$ determined by $w$. For this reason, we write $\mathrm{proj}_L v$ for the projection of $v$ onto the line $L$.

Given a line $L$, we can compute $\mathrm{proj}_L v$ by first selecting a unit vector $u = \langle u_1, u_2 \rangle$ through which $L$ passes and then projecting $v$ onto $u$. We then have
\[
\mathrm{proj}_L(e_1) = \mathrm{proj}_u(e_1) = \frac{e_1 \cdot u}{u \cdot u}\, u = \frac{u_1}{1}\, u = \begin{pmatrix} u_1^2 \\ u_1 u_2 \end{pmatrix}
\]
and
\[
\mathrm{proj}_L(e_2) = \mathrm{proj}_u(e_2) = \frac{e_2 \cdot u}{u \cdot u}\, u = \frac{u_2}{1}\, u = \begin{pmatrix} u_1 u_2 \\ u_2^2 \end{pmatrix},
\]
so the matrix $A$ corresponding to the projection onto $L$ is
\[
A = \begin{pmatrix} u_1^2 & u_1 u_2 \\ u_1 u_2 & u_2^2 \end{pmatrix}.
\]
As before, there's a matrix product that's worth considering here. If we apply the transformation of projecting onto $L$ twice, this should be no different than applying it once. After the first projection, all the vectors in $\mathbb{R}^2$ have been mapped onto $L$, and projecting a vector on $L$ onto $L$ does nothing. This is borne out by the fact that
\[
A^2 = \begin{pmatrix} u_1^2 & u_1 u_2 \\ u_1 u_2 & u_2^2 \end{pmatrix} \begin{pmatrix} u_1^2 & u_1 u_2 \\ u_1 u_2 & u_2^2 \end{pmatrix} = \begin{pmatrix} u_1^2 & u_1 u_2 \\ u_1 u_2 & u_2^2 \end{pmatrix} = A,
\]
since $u_1^2 + u_2^2 = 1$. We can also verify our claim that projecting a vector which already lies on $L$ onto $L$ does nothing:
\[
A \begin{pmatrix} ku_1 \\ ku_2 \end{pmatrix} = \begin{pmatrix} ku_1^3 + ku_1 u_2^2 \\ ku_1^2 u_2 + ku_2^3 \end{pmatrix} = \begin{pmatrix} ku_1(u_1^2 + u_2^2) \\ ku_2(u_1^2 + u_2^2) \end{pmatrix} = \begin{pmatrix} ku_1 \\ ku_2 \end{pmatrix},
\]
where we use the fact that any vector on $L$ has the form $\langle ku_1, ku_2 \rangle$ for some $k$.

Figure 3: Rotation by θ.

3.3 Rotation

Next we'll consider rotating the plane through some angle $\theta$, as depicted in Figure 3. Because the vector $e_1$ lies on the unit circle, so does $T(e_1)$, and $T(e_1)$ makes an angle of $\theta$ with the $x$-axis. As a result, its $x$- and $y$-components are $\cos\theta$ and $\sin\theta$, respectively:
\[
T(e_1) = \begin{pmatrix} \cos\theta \\ \sin\theta \end{pmatrix}.
\]
At the same time, since $e_2$ makes an angle of $\pi/2$ with $e_1$, the vectors $T(e_2)$ and $T(e_1)$ should also have an angle of $\pi/2$ between them. So the $x$- and $y$-components of $T(e_2)$ are $\cos(\theta + \pi/2)$ and $\sin(\theta + \pi/2)$, respectively:
\[
T(e_2) = \begin{pmatrix} \cos(\theta + \pi/2) \\ \sin(\theta + \pi/2) \end{pmatrix} = \begin{pmatrix} -\sin\theta \\ \cos\theta \end{pmatrix}.
\]
So the matrix corresponding to rotation by $\theta$ is
\[
A = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}.
\]
If we let $A_\theta$ and $A_\alpha$ be the matrices corresponding to rotation through an angle $\theta$ and an angle $\alpha$, respectively, then one can compute (using angle-addition identities) that
\[
A_\theta A_\alpha = \begin{pmatrix} \cos(\theta + \alpha) & -\sin(\theta + \alpha) \\ \sin(\theta + \alpha) & \cos(\theta + \alpha) \end{pmatrix},
\]
which is the matrix corresponding to rotation through the angle $\theta + \alpha$.

Figure 4: Reflection across a line.

3.4 Reflection

Next we'll consider the linear transformation that reflects vectors across a line $L$ that makes an angle $\theta$ with the $x$-axis, as seen in Figure 4. Computing $T(e_1)$ isn't that bad: since $L$ makes an angle $\theta$ with the $x$-axis, $T(e_1)$ should make an angle $\theta$ with $L$, and thus an angle $2\theta$ with the $x$-axis. So
\[
T(e_1) = \begin{pmatrix} \cos 2\theta \\ \sin 2\theta \end{pmatrix}.
\]
Determining $T(e_2)$ is only slightly less straightforward. First, $e_2$ makes an angle of $\pi/2 - \theta$ with $L$, so $T(e_2)$ should make the same angle with $L$, but on the other side of the line. Since there's an angle of $\theta$ between the $x$-axis and $L$, the angle between $T(e_2)$ and the $x$-axis is $\pi/2 - 2\theta$.
(We're assuming that $0 \le \theta \le \pi/2$, but similar computations can be done for
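The projection and rotation matrices derived above can be checked numerically. Here is a minimal sketch in plain Python (the helper names `matmul2`, `projection`, `rotation`, and `close` are ours, not from the notes):

```python
# Sketch: build the projection and rotation matrices from the notes and
# verify their key products: projecting twice equals projecting once, and
# rotations compose by adding their angles.
import math

def matmul2(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def projection(theta):
    """Projection onto the line through the unit vector u = (cos t, sin t)."""
    u1, u2 = math.cos(theta), math.sin(theta)
    return [[u1 * u1, u1 * u2], [u1 * u2, u2 * u2]]

def rotation(theta):
    """Counterclockwise rotation of the plane by theta."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def close(a, b, eps=1e-12):
    """Entrywise comparison of 2x2 matrices, up to floating-point error."""
    return all(abs(a[i][j] - b[i][j]) < eps for i in range(2) for j in range(2))

P = projection(math.pi / 6)
assert close(matmul2(P, P), P)  # A^2 = A: projecting twice = projecting once

R = matmul2(rotation(0.3), rotation(0.4))
assert close(R, rotation(0.7))  # rotations compose by adding angles
```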