The basic core of differential geometry entails the application of differential calculus to what we call parametric curves, e.g. x = x(t), y = y(t), z = z(t). Consider the curve in 3-space (x, y, z) below:

Here the position of P becomes a function of the arc length s traversed from P_0 to P along the curve. Then the vector:

**R** = **i**x + **j**y + **k**z

is also a function of s. Of particular interest will be the
derivative:

d**R**/ ds = **i**(dx/ds) + **j**(dy/ds) + **k**(dz/ds)

By taking the limit:

d**R**/ds = lim_{Δs → 0} (Δ**R**/Δs)

it can be shown that one obtains a unit vector tangent to the curve at P, say, and pointing in the direction in which the arc length s increases along the curve. One thereby arrives at the vector **T**:

**T** = d**R**/ds
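As a quick numerical check of this definition, here is a minimal sketch using an assumed example curve (the circular helix R(t) = (cos t, sin t, t), which is not from the text): differentiate **R** with respect to the parameter t and divide by |d**R**/dt| (= ds/dt) to get the unit tangent **T** = d**R**/ds.

```python
import math

# Assumed example curve: the circular helix R(t) = (cos t, sin t, t).
def R(t):
    return (math.cos(t), math.sin(t), t)

def dR_dt(t, h=1e-6):
    # central-difference approximation to dR/dt
    p, q = R(t - h), R(t + h)
    return tuple((qi - pi) / (2 * h) for pi, qi in zip(p, q))

def unit_tangent(t):
    # T = dR/ds = (dR/dt) / |dR/dt|, since ds/dt = |dR/dt|
    v = dR_dt(t)
    norm = math.sqrt(sum(vi * vi for vi in v))
    return tuple(vi / norm for vi in v)

T = unit_tangent(1.0)
print(sum(ti * ti for ti in T))  # ≈ 1, confirming T is a unit vector
```

For the helix, d**R**/dt = (-sin t, cos t, 1) has constant length √2, so the third component of **T** should always come out to 1/√2.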

But before one can run one must learn to walk, and in the same way we first need to become familiar with the basic laws of vector calculus as they apply to Euclidean space. Thus, consider the scalar product of two vectors **A** and **B**, also called the dot product because of the dot symbol used to denote it. In general, let **A**, **B** be two arbitrary vectors and let Θ denote the angle between **A** and **B**, as shown below:

Then we can write**:**

**A ****·****B** = ‖**A**‖ ‖**B**‖ cos Θ,

hence also:

cos(Θ) = (**A** · **B**)/ (‖**A**‖ ‖**B**‖)

Note that as well:

cos(Θ) = (**A** · **B**)/ (√(**A** · **A**) √(**B** · **B**))
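These two formulas for cos Θ can be checked numerically. The sketch below uses assumed example vectors (not from the text) and computes the angle both from the norms and from the square roots of the self dot products:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    # ‖A‖ = √(A · A)
    return math.sqrt(dot(a, a))

# assumed example vectors
A = (1.0, 2.0, 2.0)
B = (2.0, 0.0, 0.0)

# cos Θ via norms, and via square roots of self dot products
cos1 = dot(A, B) / (norm(A) * norm(B))
cos2 = dot(A, B) / (math.sqrt(dot(A, A)) * math.sqrt(dot(B, B)))
print(cos1, cos2)  # both equal 1/3 for these vectors
theta = math.degrees(math.acos(cos1))
```

Here dot(A, B) = 2, ‖A‖ = 3 and ‖B‖ = 2, so cos Θ = 1/3 either way.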

This is important in the context of differential geometry.

In a transformed coordinate system whose x_1-axis points along **A** (so that A_2 = A_3 = 0), we have **A** · **B** = A_1 B_1 = ‖**A**‖ ‖**B**‖ cos Θ.

Thereby we see that two non-null vectors are orthogonal to each other (i.e. at 90 degrees) if and only if their scalar product vanishes. This means we have:

cos(Θ) = (**A** · **B**)/ (‖**A**‖ ‖**B**‖) = 0
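The orthogonality criterion is easy to apply in components. A minimal sketch, with assumed example vectors:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# assumed example: two non-null vectors whose dot product vanishes
A = (1, 2, 0)
B = (2, -1, 5)

# 1*2 + 2*(-1) + 0*5 = 0, so A and B are orthogonal
print(dot(A, B) == 0)  # True
```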

Meanwhile the vector product of two vectors, i.e. **V** = **A** x **B**, is defined by the symbolic determinant:

**V** = **A** x **B** =

[ **e**_1  **e**_2  **e**_3 ]
[ a_1   a_2   a_3 ]
[ b_1   b_2   b_3 ]

where **e**_i denotes a unit vector having the positive direction of the ith coordinate axis of the Cartesian coordinate system in space R^3.
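Expanding that symbolic determinant by cofactors along its first row gives the familiar component formula for the cross product. A minimal sketch (the test vectors are assumed examples):

```python
def cross(a, b):
    # cofactor expansion of the symbolic determinant
    # | e1 e2 e3 |
    # | a1 a2 a3 |
    # | b1 b2 b3 |
    a1, a2, a3 = a
    b1, b2, b3 = b
    return (a2 * b3 - a3 * b2,
            a3 * b1 - a1 * b3,
            a1 * b2 - a2 * b1)

print(cross((1, 0, 0), (0, 1, 0)))  # (0, 0, 1): e1 x e2 = e3
```

The result e1 x e2 = e3 is exactly the right-handed orientation of the axes discussed next.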

This is a prelude to getting a handle on right-handed and left-handed orthogonal systems of coordinates, which we also need to proceed further. Such a right-handed system (in 3 dimensions) is shown below:

The unit points on the axes (x_1, x_2, x_3) correspond to the points (1, 0, 0), (0, 1, 0) and (0, 0, 1), respectively, so each has the distance '1' from the origin. In general, a coordinate system is called right-handed if the axes assume the same sort of orientation as the thumb, index finger and middle finger of the right hand. A system is said to be left-handed if the axes assume the same sort of orientation - in their natural configuration - as the thumb, index finger and middle finger of the left hand. Thus we will have:

(The notation (x_1, x_2, x_3) for the orthogonal coordinates is more convenient than the familiar (x, y, z) since it enables the use of the more compact form x_i for the coordinates of a point.) Then any other similar Cartesian coordinate system, e.g. (x'_1, x'_2, x'_3), will be related to the given one by a particular linear transformation of the form:

x'_i = ∑_{k=1}^{3} a_{ik} x_k + b_i

where the coefficients a_{ik} satisfy the two conditions:

(1) ∑_{i=1}^{3} a_{ik} a_{il} = δ_{kl} = { 0 for k ≠ l
                                          { 1 for k = l

(2) det(a_{ik}) = +1

Here δ_{kl} is the *Kronecker delta*, and a transformation satisfying both conditions is called a *direct congruent transformation*. The simplest choice, a_{ik} = δ_{ik}, gives the identity matrix:

( 1  0  0 )
( 0  1  0 )
( 0  0  1 )
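The two conditions on the coefficients a_{ik} can be verified numerically for a concrete case. A minimal sketch, assuming as an example a rotation by 30 degrees about the x_3-axis (the angle and translation vector are assumptions, not from the text):

```python
import math

def det3(a):
    # determinant of a 3x3 matrix by cofactor expansion
    return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
          - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
          + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))

# assumed example: rotation by 30 degrees about the x3-axis
c, s = math.cos(math.pi / 6), math.sin(math.pi / 6)
a = [[c, -s, 0],
     [s,  c, 0],
     [0,  0, 1]]
b = [1.0, 2.0, 3.0]  # translation part b_i (assumed example)

# condition (1): sum_i a_ik a_il = delta_kl
g = [[sum(a[i][k] * a[i][l] for i in range(3)) for l in range(3)]
     for k in range(3)]
print(all(abs(g[k][l] - (1 if k == l else 0)) < 1e-12
          for k in range(3) for l in range(3)))  # True
# condition (2): det(a_ik) = +1
print(abs(det3(a) - 1.0) < 1e-12)                # True

# the transformation itself: x'_i = sum_k a_ik x_k + b_i
x = [1.0, 0.0, 0.0]
x_prime = [sum(a[i][k] * x[k] for k in range(3)) + b[i] for i in range(3)]
```

Any rotation matrix satisfies both conditions, so it defines a direct congruent transformation.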

In matrix form the coefficients are written:

a_{ik} =

[ a_11  a_12  a_13 ]
[ a_21  a_22  a_23 ]
[ a_31  a_32  a_33 ]

This is a *quadratic* matrix. In general we note that a system of m · n quantities arranged in a rectangular array of m horizontal rows and n vertical columns is called a *matrix*, and the quantities are the *elements* of the matrix. If m equals n the matrix is called 'square' and the number *n* is the order of the matrix. Thus, the coefficients a_{ik} for the preceding linear transformation equation form a quadratic matrix. The corresponding determinant is then:

det(a_{ik})

A transformation of the form:

*x'*_i = x_i + b_i,   i = 1, 2, 3

is a translation. If b_i = 0 then we find:

*x'*_i = x_i

which is the *identical transformation*. If b_i = 0 and the coefficients a_{ik} are arbitrary, but such that the two conditions (1) and (2) are satisfied, then:

*x'*_i = ∑_{k=1}^{3} a_{ik} x_k

is a *direct orthogonal transformation*. Finally, note that a transformation of the form:

*x'*_i = ∑_{k=1}^{3} a_{ik} x_k,   ∑_{i=1}^{3} a_{ik} a_{il} = δ_{kl},   det(a_{ik}) = -1

is an *opposite* orthogonal transformation.
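A reflection is the standard instance of this case. A minimal sketch, assuming as an example the reflection in the x_1-x_2 plane (x_3 → -x_3): the orthogonality condition still holds, but the determinant is -1 rather than +1.

```python
def det3(a):
    # determinant of a 3x3 matrix by cofactor expansion
    return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
          - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
          + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))

# assumed example: reflection in the x1-x2 plane (x3 -> -x3)
m = [[1, 0, 0],
     [0, 1, 0],
     [0, 0, -1]]

# sum_i a_ik a_il = delta_kl still holds (the axes stay orthonormal) ...
ortho = all(
    sum(m[i][k] * m[i][l] for i in range(3)) == (1 if k == l else 0)
    for k in range(3) for l in range(3))
print(ortho, det3(m))  # True -1: orthogonal, but opposite (det = -1)
```

Geometrically, such a transformation carries a right-handed system into a left-handed one.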

**Suggested Problems**:

1) Let **A**, **B** be vectors spanning a subspace of R^4 where:

**A** = (1, 2, 1, 0) and **B** = (1, 2, 3, 1)

Find: **A**/ ‖**A**‖ and: **B**/ ‖**A**‖

2) Given *x'*_i = x_i + b_i

and: x_i =

( 2   1   0 )
( 3  -5   6 )
(-7   0   4 )

find *x'*_i if:

b_i =

( 1   2   1 )
( 2   1   3 )
( 3   2   1 )

Write out the matrix elements for a_{ik}.

*Next*: Incorporating group theory into differential geometry.
