<?xml version="1.0" encoding="ISO-8859-1"?><article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<front>
<journal-meta>
<journal-id>1405-7743</journal-id>
<journal-title><![CDATA[Ingeniería, investigación y tecnología]]></journal-title>
<abbrev-journal-title><![CDATA[Ing. invest. y tecnol.]]></abbrev-journal-title>
<issn>1405-7743</issn>
<publisher>
<publisher-name><![CDATA[Universidad Nacional Autónoma de México, Facultad de Ingeniería]]></publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id>S1405-77432007000400006</article-id>
<title-group>
<article-title xml:lang="en"><![CDATA[Solution of Rectangular Systems of Linear Equations Using Orthogonalization and Projection Matrices]]></article-title>
<article-title xml:lang="es"><![CDATA[La solución se sistemas rectangulares de ecuaciones lineales utilizando ortogonalización y matrices de proyección]]></article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname><![CDATA[Murray-Lasso]]></surname>
<given-names><![CDATA[M.A.]]></given-names>
</name>
<xref ref-type="aff" rid="A01"/>
</contrib>
</contrib-group>
<aff id="A01">
<institution><![CDATA[Department of Mechanical and Industrial Engineering, Facultad de Ingeniería, UNAM]]></institution>
<addr-line><![CDATA[México ]]></addr-line>
</aff>
<pub-date pub-type="pub">
<day>00</day>
<month>12</month>
<year>2007</year>
</pub-date>
<pub-date pub-type="epub">
<day>00</day>
<month>12</month>
<year>2007</year>
</pub-date>
<volume>8</volume>
<numero>4</numero>
<fpage>281</fpage>
<lpage>293</lpage>
<copyright-statement/>
<copyright-year/>
<self-uri xlink:href="http://www.scielo.org.mx/scielo.php?script=sci_arttext&amp;pid=S1405-77432007000400006&amp;lng=en&amp;nrm=iso"></self-uri><self-uri xlink:href="http://www.scielo.org.mx/scielo.php?script=sci_abstract&amp;pid=S1405-77432007000400006&amp;lng=en&amp;nrm=iso"></self-uri><self-uri xlink:href="http://www.scielo.org.mx/scielo.php?script=sci_pdf&amp;pid=S1405-77432007000400006&amp;lng=en&amp;nrm=iso"></self-uri><abstract abstract-type="short" xml:lang="en"><p><![CDATA[In this paper a novel approach to the solution of rectangular systems of linear equations is presented. It starts with a homogeneous set of equations and, through linear vector space considerations, obtains the solution by finding the null space of the coefficient matrix. To do this, an orthogonal basis for the row space of the coefficient matrix is found and this basis is completed for the whole space using the Gram-Schmidt orthogonalization process. The non-homogeneous case is handled by converting the problem into a homogeneous one: the right-side vector is passed to the left side, the components of its negative become the coefficients of an additional variable, the new system is solved, and at the end the condition that the additional variable take a unit value is imposed. It is shown that the null space of the coefficient matrix is intimately connected with orthogonal projection matrices, which are easily constructed from the orthogonal basis using dyads. The paper treats the method introduced as an exact method when the original coefficients are rational and rational arithmetic is used. The analysis of the efficiency and numerical characteristics of the method is deferred to a future paper. Detailed illustrative numerical examples are provided, and the use of the program Mathematica to perform the computations in rational arithmetic is illustrated.]]></p></abstract>
<abstract abstract-type="short" xml:lang="es"><p><![CDATA[En este artículo se presenta un nuevo enfoque para la solución de sistemas rectangulares de ecuaciones lineales. Comienza con un sistema de ecuaciones homogéneas y a través de consideraciones de espacios lineales obtiene la solución encontrando el espacio nulo de la matriz de coeficientes. Para lograrlo, se encuentra una base ortogonal para el espacio generado por las filas de la matriz de coeficientes y se completa la base para todo el espacio utilizando el proceso de Gram-Schmidt de ortogonalización. El caso no-homogéneo se maneja con virtiendo el problema en uno homogéneo, pasando el vector del lado derecho al lado izquierdo, usando sus componentes como coeficientes de una variable adicional y resolviendo el nuevo sistema e imponiendo al final la condición que la vari able adicional adopte un valor unitario. Se muestra que el espacio nulo de la matriz de coeficientes está íntimamente asociado con las matrices de proyección ortogonal, las cuales se construyen con facilidad a partir de la base ortogonal utilizando díadas. El artículo maneja el método introducido como un método exacto cuando los coeficientes originales son racionales, utilizando aritmética racional. El análisis de la eficiencia y características numéricas del método se pospone para un futuro artículo. Se proporcionan ejemplos numéricos ilustrativos en detalle y se ilustra el uso del programa Mathematica para hacer los cálculos en aritmética racional.]]></p></abstract>
<kwd-group>
<kwd lng="en"><![CDATA[Rectangular systems of linear equations]]></kwd>
<kwd lng="en"><![CDATA[Gram - Schmidt process]]></kwd>
<kwd lng="en"><![CDATA[orthogonal projection matrices]]></kwd>
<kwd lng="en"><![CDATA[linear vector spaces]]></kwd>
<kwd lng="en"><![CDATA[dyads]]></kwd>
<kwd lng="es"><![CDATA[Sistemas rectangulares de ecuaciones lineales]]></kwd>
<kwd lng="es"><![CDATA[proceso de Gram-Schmidt]]></kwd>
<kwd lng="es"><![CDATA[matrices de proyección ortogonal]]></kwd>
<kwd lng="es"><![CDATA[espacios vectoriales lineales]]></kwd>
<kwd lng="es"><![CDATA[díadas]]></kwd>
</kwd-group>
</article-meta>
</front><body><![CDATA[ <p align="justify"><font face="verdana" size="4">Educaci&oacute;n en ingenier&iacute;a</font></p>     <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>     <p align="center"><font face="verdana" size="4"><b>Solution of Rectangular Systems of Linear Equations Using Orthogonalization and Projection Matrices</b></font></p>     <p align="center"><font face="verdana" size="2">&nbsp;</font></p>     <p align="center"><font face="verdana" size="3"><b>La soluci&oacute;n se sistemas rectangulares de ecuaciones lineales utilizando ortogonalizaci&oacute;n y matrices de proyecci&oacute;n</b></font></p>     <p align="center"><font face="verdana" size="2">&nbsp;</font></p>     <p align="center"><font face="verdana" size="2"><b>M.A. Murray&#150;Lasso</b></font></p>     <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>     <p align="justify"><font face="verdana" size="2"><i>Department of Mechanical and Industrial Engineering Facultad de Ingenier&iacute;a, UNAM, M&eacute;xico    <br> </i><b>E&#150;mail: </b><a href="mailto:mamurraylasso@yahoo.com">mamurraylasso@yahoo.com</a></font></p>     ]]></body>
<body><![CDATA[<p align="justify"><font face="verdana" size="2">&nbsp;</font></p>     <p align="justify"><font face="verdana" size="2">Recibido: julio de 2006    <br>   Aceptado: febrero de 2006</font></p>     <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>     <p align="justify"><font face="verdana" size="2"><b>Abstract</b></font></p>     <p align="justify"><font face="verdana" size="2">In this paper a novel approach to the solution of rectangular systems of linear equations is presented. It starts with a homogeneous set of equations and through linear se space considerations obtains the solution by finding the null space of the coefficient matrix. To do this an orthogonal basis for the row space of the coefficient matrix is found and this basis is completed for the whole space using the Gram&#150;Schmidt orthogonalization process. The non homogeneous case is handled by converting the problem into a homogeneous one, passing the right side vector to the left side, letting the components of the negative of the right side become the coefficients of and additional variable, solving the new system and at the end imposing the condition that the additional variable take a unit value.</font></p>     <p align="justify"><font face="verdana" size="2">It is shown that the null space of the coefficient matrix is intimately connected with orthogonal projection matrices which are easily constructed from the orthogonal basis using dyads. The paper treats the method introduced as an exact method when the original coefficients are rational and rational arithmetic is used. The analysis of the efficiency and numerical characteristics of the method is deferred to a future paper. Detailed numerical illustrative examples are provided in the paper and the use of the program Mathematica to perform the computations in rational arithmetic is illustrated.</font></p>     <p align="justify"><font face="verdana" size="2"><b>Keywords:</b> Rectangular systems of linear equations, Gram &#150; Schmidt process, orthogonal projection matrices, linear vector spaces, dyads.</font></p>     <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>     <p align="justify"><font face="verdana" size="2"><i><b>Resumen</b></i></font></p>     ]]></body>
<body><![CDATA[<p align="justify"><font face="verdana" size="2"><i>En este art&iacute;culo se presenta un nuevo enfoque para la soluci&oacute;n de sistemas rectangulares de ecuaciones lineales. Comienza con un sistema de ecuaciones homog&eacute;neas y a trav&eacute;s de consideraciones de espacios lineales obtiene la soluci&oacute;n encontrando el espacio nulo de la matriz de coeficientes. Para lograrlo, se encuentra una base ortogonal para el espacio generado por las filas de la matriz de coeficientes y se completa la base para todo el espacio utilizando el proceso de Gram&#150;Schmidt de ortogonalizaci&oacute;n. El caso no&#150;homog&eacute;neo se maneja con virtiendo el problema en uno homog&eacute;neo, pasando el vector del lado derecho al lado izquierdo, usando sus componentes como coeficientes de una variable adicional y resolviendo el nuevo sistema e imponiendo al final la condici&oacute;n que la vari able adicional adopte un valor unitario.</i></font></p>     <p align="justify"><font face="verdana" size="2"><i>Se muestra que el espacio nulo de la matriz de coeficientes est&aacute; &iacute;ntimamente asociado con las matrices de proyecci&oacute;n ortogonal, las cuales se construyen con facilidad a partir de la base ortogonal utilizando d&iacute;adas. El art&iacute;culo maneja el m&eacute;todo introducido como un m&eacute;todo exacto cuando los coeficientes originales son racionales, utilizando aritm&eacute;tica racional. El an&aacute;lisis de la eficiencia y caracter&iacute;sticas num&eacute;ricas del m&eacute;todo se pospone para un futuro art&iacute;culo. Se proporcionan ejemplos num&eacute;ricos ilustrativos en detalle y se ilustra el uso del programa Mathematica para hacer los c&aacute;lculos en aritm&eacute;tica racional.</i></font></p>     <p align="justify"><font face="verdana" size="2"><i><b>Descriptores: </b>Sistemas rectangulares de ecuaciones lineales, proceso de Gram&#150;Schmidt, matrices de proyecci&oacute;n ortogonal, espacios vectoriales lineales, d&iacute;adas.</i></font></p>     <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>     <p align="justify"><font face="verdana" size="2"><b>Introduction</b></font></p>     <p align="justify"><font face="verdana" size="2">The problem of solving a set of linear equations is central in both theoretical and applied mathematics because of the frequency with which it appears in theoretical considerations and applications. It appears in statistics, ordinary and partial differential equations, in several areas of physics, engineering, chemistry, biology, economics and other social sciences, among others. For this reason it has been studied by many mathematicians and practitioners of the different fields of application. Mathematicians of great fame such as Gauss, Cramer, Jordan, Hamilton, Cayley, Sylvester, Hilbert, Turing, Wilkinson and many others have made important contributions to the topic. Many numerical methods for the practical solution of simultaneaous linear equations have been deviced. (Westlake, 1968) Although some of them are reputedly better than others, this depends very much on the size and structure of the matrices that appear. 
For example, for very large matrices stemming from partial differential equations iterative methods are generally preferred over direct methods.</font></p>     <p align="justify"><font face="verdana" size="2">Although problems with square matrices are the ones most often treated, in this paper the problem with a rectangular matrix of coefficients is the target, the former one to be considered a particular case of the more general case.</font></p>     <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>     <p align="justify"><font face="verdana" size="2"><b>The Homogeneous Case</b></font></p>     <p align="justify"><font face="verdana" size="2">Consider the following homogeneous system of <i>m </i>linear equations in <i>n </i>variables.</font></p>     ]]></body>
<body><![CDATA[<p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e1.jpg">...............................................................(1)</font></p>     <p align="justify"><font face="verdana" size="2">which can be written in matrix form</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e2.jpg">...........................................................(2)</font></p>     <p align="justify"><font face="verdana" size="2">or more complactly</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e3.jpg">..................................................................................................(3)</font></p>     <p align="justify"><font face="verdana" size="2">In equations (1) and (2) the <i>a<sub>ij </sub></i> are rational numbers. Let us concentrate in the case <i>m &lt; n. </i>Others are mentioned in the Final Remarks. If we consider the rows of matrix <b>A</b> in (2) as the representation with respect to a natural orthonormal basis of the form &#91;1, 0, 0,..., 0&#93;, &#91;0, 1, 0,..., 0&#93;,..., &#91;0, 0,...0,1&#93; of <i>m   n</i>&#150;dimensional  vectors  which  span  a subspace of <i>R<sup>n</sup>, </i>the domain of <b>A</b>, for which an inner product is defined by</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e4.jpg">......................................................(4)</font></p>     <p align="justify"><font face="verdana" size="2">What equation (2) is expressing is that the solution vector <b>x</b> must be orthogonal to the <i>r&#150;</i>dimensional subspace spanned by the rows of matrix <b>A</b>, where <i>r </i>is the rank of matrix <b>A</b> which is equal to the number of linearly independent rows of <b>A</b>.    From the theorem that states that (Bentley and Cooke, 1973).</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e5.jpg">...................................................................(5)</font></p>     <p align="justify"><font face="verdana" size="2">where <i>R(<b>A</b><sup>T</sup>)</i> is the range of the transpose of matrix <b>A</b>, and <b><i>N(A)</i></b> is the null space of A. We can deduce that the null space of <b>A</b>, which is the solution of equation (3), has dimension <i>n&#150;r </i>and is orthogonal to the range of matrix <i><b>A<sup>T</sup></b></i> which is the subspace spanned by the rows of matrix <b>A</b>.</font></p>     ]]></body>
<body><![CDATA[<p align="justify"><font face="verdana" size="2">Hence, what we need to do to express the solution of equation (3) is to to characterize the subspace <b><i>N(A)</i></b>. One way of doing this is to find a basis which spans it. From the theorem that states that for any inner product vector space (or subspace) for which we have a basis we can find an orthonormal basis for it through the Gram&#150;Schmidt process (Bentley and Cooke, 1973). We intend to first find an orthonormal basis for the subspace spanned by the rows of matrix <b>A</b> using the Gramm&#150;Schmidt process and then find its orthogonal complement obtaining therefore the solution subspace of equation (3). For this we will need orthogonal projection matrices which are easy to find when we have an orthonormal basis.</font></p>     <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>     <p align="justify"><font face="verdana" size="2"><b>The Gram&#150;Schmidt Process</b></font></p>     <p align="justify"><font face="verdana" size="2">Given a set of <i>r </i>linearly independent vectors, the Gram &#150; Schmidt Process finds recursively a  sequence   of  orthogonal  bases   for   the subspaces spanned by: the first; first and second; ... , first, second, ... , and r&#150;th vectors. It accomplishes this by taking the first vector and using it as the first basis. It then takes the second vector and finds a vector that is orthogonal to the first by subtracting from the second vector its orthogonal projection on the first vector. The result is taken as the second vector of the orthogonal basis. This orthogonal basis spans the same subspace as the first two of the originally given linearly independent vectors. To get a third orthogonal vector the orthogonal projections of the third given vector upon the first two orthogonal vectors are subtracted from the third given vector. The result is orthogonal to both previous orthogonal vectors. The first three orthogonal vectors span the same subspace as the first three given vectors. The process is continued until all <i>r </i>given vectors are processed and an orthogonal basis for the <i>r</i>&#150;dimensional subspace spanned by the given vectors is found. If it is desired to obtain an orthonormal basis, each of the orthogonal vectors can be normalized by dividing it by its length.</font></p>     <p align="justify"><font face="verdana" size="2">If we call <b>w<sub>i</sub></b> the <i>i</i>&#150;th orthogonal vector of the final basis; <b>u<sub>i</sub> </b>the <i>i</i>&#150;th normalized orthogonal vector, and <b>v<sub>i</sub> </b> the <i>i</i>&#150;th given vector, the process can be described symbollically as follows:</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e6.jpg">...................................................................(6)</font></p>     <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>     <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>     <p align="justify"><font face="verdana" size="2"><b>The Angular Brackett or Dirac Notation</b></font></p>     <p align="justify"><font face="verdana" size="2">In equations (6) and in the definition of the inner product we are using some aspects of a notation introduced by Dirac in his book on Quantum Mechanics (Dirac, 1947). We can think of vectors as row or column n&#150;tuples which can be added and multiplied among themselves, as well as multiplied by scalars according to the rules governing matrices. 
We exhibit in equation (7) a sample of multiplication operations, recalling that matrix multiplication is non&#150;commutative</font></p>     ]]></body>
<body><![CDATA[<p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e7.jpg">................................(7)</font></p>     <p align="justify"><font face="verdana" size="2">We also recall that multiplication of a matrix by a scalar does commute and that the associative law is valid both for addition and multiplication of matrices.</font></p>     <p align="justify"><font face="verdana" size="2">In the Dirac notation a row matrix <b>x</b> is represented by the  symbol <b>&lt; x</b>, while a column matrix is represented by <b>x &gt;</b>. (The row and column vectors do have significance in physical applications because they transform differently on changes of bases; they correspond to covariant and contravariant vectors and in some notations they are distinguished by making the indices super&#150;indices or sub&#150;indices). The inner product of two vectors corresponds to the first of equations (7) and is represented by <b>&lt;x, y&gt;</b>, a scalar, while the outer product of the same vectors (corresponding to the multiplication of a column matrix by a row matrix in that order) is represented by <b>x&gt;&lt;y</b> which is an <i>nxn </i>matrix and corresponds to the second of equations (7). Notice that the matrix has rank equal to unity since all columns are multiples of the column vector (Friedman, 1956).</font></p>     <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>     <p align="justify"><font face="verdana" size="2"><b>Dyads</b></font></p>     <p align="justify"><font face="verdana" size="2">The symbol <b>x&gt;&lt;y</b>  is a linear transformation (it is represented by a matrix) and so is the sum of several such symbols; they are called <i>dyads, </i>(when emphasis is put on the number of symbols involved it is sometimes called a <i>one&#150;term dyad </i>or an <i>n &#150; term dyad.</i>) Since we can think of a dyad as a matrix, it can be multiplied on the left by a row vector and on the right by a column vector, or multiplied either on the right or on the left by another dyad. Hence we have:</font></p>     <p align="justify"><font face="verdana" size="2"><b>&lt; z ( x &gt; &lt; y ) =  ( &lt; z, x &gt; ) &lt; y</b>   (a scalar multiplied by a row vector giving a row vector proportional to <b>y</b>.) It is usually written <b>&lt; z, x &gt; y, ( x &gt; &lt; y ) z &gt; = x &gt; ( &lt; y, z &gt;) = ( &lt; y, z &gt;) x &gt;</b> (a scalar multiplied by a column vector giving a column vector proportional to x). It is usually written <b>&lt; y, z &gt; x</b>.</font></p>     <p align="justify"><font face="verdana" size="2">We used the property that scalars and matrices commute in multiplication. The results can be obtained by observing the way brackets open and close when terms are written in yuxtaposition. For this reason the symbol &lt; is associated with the word "bra" while the symbol &gt; is called "ket" and one looks for the appearance of the word "braket" to discover the presence of an inner product, while a "ketbra" corresponds to a dyad (Goertzel and Tralli, 1960).</font></p>     <p align="justify"><font face="verdana" size="2">Very often one uses an orthonormal basis to work with. When such is the case, an important formula that will be useful in the sequel is the following:</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e8.jpg">...............................................(8)</font></p>     ]]></body>
<body><![CDATA[<p align="justify"><font face="verdana" size="2">where <b>u<sub>1</sub>, u<sub>2</sub>, ... , u<sub>n</sub></b> are the orthonormal basis vectors for an <i>n</i>&#150;dimensional inner product vector space and <b>I<sub>n</sub></b> is the <i>unit operator </i>for the <i>n</i>&#150;dimensional space which is to be associated with an <i>nxn </i>unit matrix and maps each vector into itself. The va lidity of equation (8) can be verified by forming an orthogonal matrix <b>Q</b> with the <b>u</b>'s as columns and multiplying it by its transpose <b>Q<sup>T</sup></b> and using the fact that the transpose of an orthogonal matrix is its inverse. and hence <b>QQ<sup>T</sup> = Q<sup>T</sup>Q = I</b>. The right hand side of (8) can be obtained by partitioning the matrix <b>Q</b> into submatrices that coincide with columns and the matrix <b>Q<sup>T </sup></b>into submatrices that coincide with rows as shown in equations (9).</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e9.jpg">...................................(9)</font></p>     <p align="justify"><font face="verdana" size="2">Equation (9) is called "the resolution of the identity" (Smirnov, 1970).</font></p>     <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>     <p align="justify"><font face="verdana" size="2"><b>Orthogonal Projections</b></font></p>     <p align="justify"><font face="verdana" size="2">Whenever we use an orthonormal basis and represent vectors with <i>n</i>&#150;tuples, the components of the <i>n</i>&#150;tuples are the projections of the vector upon the coordinate axes whose orientations are those of the unit vectors of the basis. If we consider a vector as an arrow that goes from the origin of the coordinate system to the tip of the arrow, the projections coincide with the coordinates of the point located at the tip of the arrow. If we now take as basis the original basis rotated rigidly around the origin, leaving the arrow in its original   position,   the   new   n&#150;tuple   that represents the arrow has as components the projections of the arrow upon the new coordinate axes. Although in an <i>n&#150; </i>dimensional space our intuition is not as good as in ordinary 2 or 3 dimensions, the inner product of vectors helps us to solve problems analytically. Extending concepts from 2 and 3 dimensions to <i>n </i>dimensions, we define the cosine of the angle <b>&theta;</b> between two vectors represented by <i>n</i>&#150;tuples of coordinates in an orthonormal basis by the following formula</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e10.jpg">............................................(10)</font></p>     <p align="justify"><font face="verdana" size="2">To find the projection of the vector <b>u<sub>1 </sub></b> upon the line oriented in the direction of <b>u<sub>2</sub></b> shown enhanced in <a href="#f1">Figure 1</a> we use the formula for the cosine of the angle between the two vectors given in equation (10) and obtain</font></p>     <p align="center"><font face="verdana" size="2"><a name="f1"></a></font></p>     <p align="center"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6f1.jpg"></font></p>     ]]></body>
<body><![CDATA[<p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e11.jpg">...............................................(11)</font></p>     <p align="justify"><font face="verdana" size="2">In the case that <b>u<sub>2</sub></b> is a unit vector, its length  is   one   and   we   can  remove   the denominator   from   equation   (11).   Additionally if we want the proyection to be a vector in the direction of <b>u<sub>2</sub></b> we have</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e12.jpg">...............................................................................(12)</font></p>     <p align="justify"><font face="verdana" size="2">We have considered the projection of a vector upon an oriented line. One can also think of the projection of a vector upon a plane. A simple way of finding the projection of a vector upon a plane spanned by two ortho&#150;normal vectors is to find the projections upon the orthonormal vectors and vectorially add them. The idea can be extended in a straight forward manner to more dimensions. Thus to find the proyection of a vector <b>u</b> upon a <i>k&#150;</i>dimensional subspace spanned by orthonormal vectors <b>v<sub>1</sub>, v<sub>2</sub>,...,v<sub>k</sub></b> we can use the following expresion</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e13.jpg">...................................(13)</font></p>     <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>     <p align="justify"><font face="verdana" size="2"><b>Orthogonal Projection Matrices</b></font></p>     <p align="justify"><font face="verdana" size="2">To solve the system of linear equations (3) we must characterize the null space of matrix <b>A</b>, which is the orthogonal complement of the space spanned by the its rows. The strategy we will use is to first find an orthonormal basis which spans the <i>k</i>&#150;dimensional row space of <b>A</b>. For this purpose we utilize the Gram&#150;Schmidt process for orthogonalizing the vectors represented by the rows of <b>A</b>. The set of orthonormal vectors thus obtained will span the same subspace as that spanned by the rows of <b>A</b>. The solution of the problem is the complementary subspace orthogonal to the one found. There are several possibilities for specifying this subspace. The simplest conceptually is to complete the orthonormal basis obtained with additional vectors to obtain an orthonormal basis for the <i>n&#150; </i>dimensional space. This can always be done (Cullen, 1966). One way of doing it is to append to the rows of <b>A</b> a set of <i>n </i>linearly independent vectors and apply the Gram&#150;Schmidt process to the complete set of vectors. A linearly independent set for the whole space which can be used for this purpose is the set {&#91;1, 0, 0,..., 0&#93;, &#91;0, 1, 0,..., 0&#93;, ... &#91;0, 0, 0,..., 1&#93;}. When applying the Gram&#150;Schmidt process, if a vector is linearly dependent upon the previous vectors processed, the resulting vector will be the zero vector. The corresponding vector can be eliminated and the process continued with the rest of the vectors. If the rank of matrix <b>A</b> is <i>k, </i>then the first <i>k </i>vectors obtained in the process will span the space of the rows of <b>A</b> and the vectors <i>k + 1, k + 2, ... , n </i>will span the null space of <b>A</b>, the subspace sought. 
Each individual solution of equation (3) will then be an arbitrary linear combinations of these vectors. We now consider another approach based on orthogonal projection matrices.</font></p>     <p align="justify"><font face="verdana" size="2">We define an <i>orthogonal projection matrix </i>E as a <i>idempotent symmetric square matrix, </i>that satisfies the following two conditions: (Smirnov, 1970).</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e14.jpg">......................................................................................(14)</font></p>     ]]></body>
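<body><![CDATA[<p align="justify"><font face="verdana" size="2">Before continuing, the basis-completion route just described can be sketched in <i>Mathematica</i>. This is a minimal sketch under our own naming (gramSchmidt and nullBasis are not functions from the paper); normalization is skipped, as in the worked example further below, so that all arithmetic stays rational:</font></p>     <pre>
(* Gram-Schmidt without normalization: in rational arithmetic a vector
   dependent on its predecessors reduces exactly to zero and is discarded *)
gramSchmidt[vecs_] := Module[{ws = {}, w},
  Do[
    w = v - Sum[(v . q/(q . q)) q, {q, ws}];
    If[w . w =!= 0, AppendTo[ws, w]],
    {v, vecs}];
  ws]

(* append the natural basis to the rows of a and orthogonalize; the
   vectors beyond the first r = MatrixRank[a] span the null space of a *)
nullBasis[a_] :=
  Drop[gramSchmidt[Join[a, IdentityMatrix[Length[First[a]]]]], MatrixRank[a]]

nullBasis[{{1, 2, 0, 1}, {0, 1, 1, 2}}]
(* -> two orthogonal vectors, each an exact solution of A x = 0 *)
</pre>     ]]></body>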
<body><![CDATA[<p align="justify"><font face="verdana" size="2">We now establish that the dyad <b>v &gt; &lt; v</b>, <img src="/img/revistas/iit/v8n4/a6s1.jpg">=1, is a projection matrix which can be expressed as a matrix <b>V</b> satisfying equations (14).</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e14a.jpg"></font></p>     <p align="justify"><font face="verdana" size="2">where we have applied the commutativity law to the scalars <i>v<sub>i</sub>, i = 1, 2,...n </i>which are the components of <b>v</b>. The matrix is obviously symmetric by ispection. Additionally</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e15.jpg">...................(15)</font></p>     <p align="justify"><font face="verdana" size="2">In equation (15) we used the associativity law of matrices, the commutativity of the product of a scalar (the inner product) with a matrix and the fact that <img src="/img/revistas/iit/v8n4/a6s2.jpg">. This establishes that <b>V</b> is idempotent and therefore an orthogonal projection matrix.</font></p>     <p align="justify"><font face="verdana" size="2">According to equation (12) the matrix <b>V</b> transforms any column vector into its projection upon the vector <b>v</b>.</font></p>     <p align="justify"><font face="verdana" size="2">We now establish that the sum of <i>r </i>one&#150;term dyads of the kind appearing in equation (15), where the vectors of the one&#150;term dyads are the members of an orthonormal basis spanning a subspace <b>R</b> of dimension equal to the number of vectors in the basis, is also a projection matrix <b>W</b> that transforms any column vector into its projection upon the subspace <b>R</b>. To establish that the matrix is symmetric we proceed as before noting that the resultant matrix is the sum of symmetric matrices and is therefore symmetric. To establish idempotency we find the square of the matrix</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e15a.jpg"></font></p>     <p align="justify"><font face="verdana" size="2">When the mulyiplications are carried out, terms of the following nature are obtained</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e15b.jpg"></font></p>     ]]></body>
<body><![CDATA[<p align="justify"><font face="verdana" size="2">where we have commuted the inner product, which is a scalar with the vector on its left. If p&ne;q the inner product is zero, since it involves vectors which are orthogonal. If p= q then the inner product is unity since the vectors are normalized. Therefore only the terms squared remain and we obtain</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e15c.jpg"></font></p>     <p align="justify"><font face="verdana" size="2">since according to equation (15) the terms squared are equal to the terms without the exponent, therefore</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e16.jpg">...............................................................................................(16)</font></p>     <p align="justify"><font face="verdana" size="2">and the two conditions establish <b>W</b> as an orthogonal projection matrix.</font></p>     <p align="justify"><font face="verdana" size="2">We now establish that the matrix <b>I&#150;W</b> is an orthogonal projection matrix that maps any vector into its projection into the orthogonal complement sub space of the image subspace of <b>W</b>.</font></p>     <p align="justify"><font face="verdana" size="2">That <b>I&#150;W</b> is symmetric is a consequence of the fact that both the unit matrix <b>I</b> and the matrix <b>W</b> are symmetric. Now let the <i>n&#150;</i>dimensional space <b>S</b> be the domain of a linear transformation represented with respect to the natural basis {&#91; 1, 0 , ... , 0&#93;, &#91;0, 1, ... , 0&#93;, ... &#91;0, 0, ... , 1&#93;} by a matrix <b>A</b>. The space <b>S</b> can be written as the direct sum of the subspace <b>R</b> spanned by the rows of <b>A</b> (the range of <b>A<sup>T</sup></b>) and the null space <b>N</b> of <b>A</b> (Zadeh and Desoer, 1963)</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e17.jpg">............................................................................................(17)</font></p>     <p align="justify"><font face="verdana" size="2">The subspace <b>N</b> is the orthogonal complement of <b>R</b>, hence every vector of <b>R</b> is orthogonal to every vector of <b>N</b>. Now suppose we have an orthonormal basis <b>{v<sub>1</sub>, v<sub>2</sub>, ... ,v<sub>n</sub>}</b> such that the first r vectors span the row space of <b>A</b> and vectors <i>r + 1, r + 2, ... , n </i>span the null space of <b>A</b>. An arbitrary vector <b>u</b> can be written uniquely as the sum of two vectors, one in subspace <b>R</b> and the other in subspace <b>N</b> in terms of the orthonormal basis as follows:</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e17a.jpg"></font></p>     ]]></body>
<body><![CDATA[<p align="justify"><font face="verdana" size="2">If we call T the <i>(n &#150; k) </i>&#150; term dyad associated with the quantity enclosed by the second set of parentheses we get.</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e17b.jpg"></font></p>     <p align="justify"><font face="verdana" size="2">If we calculate <b>T<sup>2</sup></b> we will find <b>T = T<sup>2</sup></b> for the same reasons that in equation (16) we found that <b>W<sup>2</sup> = W</b>. We conclude that <b>I &#150; W</b> is indeed an orthogonal projection matrix with the stated property of orthogonally projecting vectors in the domain of <b>A</b> unto the orthogonal complement of the row space of matrix <b>A</b>, that is, into the null space of <b>A</b>.</font></p>     <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>     <p align="justify"><font face="verdana" size="2"><b>Back to the Homogeneous Linear Equations</b></font></p>     <p align="justify"><font face="verdana" size="2">Summing up, the method of solution of equation (3) using orthogonal projection matrices consists of finding orthonormal vectors <b>v<sub>1</sub>, v<sub>2</sub>, ... , v<sub>r</sub></b>, where <i>r </i>is the rank of matrix <b>A</b> that span the row space of <b>A</b>. This can be done using the Gram&#150;Schmidt process. Next we form the matrix</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e17c.jpg"></font></p>     <p align="justify"><font face="verdana" size="2">The solution of equations (3) is the subspace given by</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e18.jpg">................................................................................................(18)</font></p>     <p align="justify"><font face="verdana" size="2">where <b>y</b> is an arbitrary vector in the domain of <b>A</b>.</font></p>     ]]></body>
<body><![CDATA[<p align="justify"><font face="verdana" size="2">A second method consists of finding an orthonormal basis for the domain of <b>A</b> by orthogonalizing a set of vectors formed by: first the rows of <b>A</b> appended with enough linearly independent vectors so that the set contains <i>n </i>linearly independent vectors. and orthogonalizing the set (discarding any zero vectors that may appear in the process) to obtain an orthonormal basis <b>{v<sub>1</sub>, v<sub>2</sub>, ..., v<sub>k</sub>, v<sub>k+1</sub>, ... , v<sub>n</sub>}</b> for the domain of A. The solution of equations (3) is given by</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e19.jpg">...........................................................(19)</font></p>     <p align="justify"><font face="verdana" size="2">where <b>c<sub>1</sub>, c<sub>2</sub>, ... , c<sub>n</sub></b> are arbitrary scalars.</font></p>     <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>     <p align="justify"><font face="verdana" size="2"><b>Numerical Illustrative Example</b></font></p>     <p align="justify"><font face="verdana" size="2">Consider the following homogeneous system of linear equations (Hildebrand, 1952)</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e20.jpg">.........................................................................(20)</font></p>     <p align="justify"><font face="verdana" size="2">The matrix <b>A</b> is therefore </font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e21.jpg">.........................................................................(21)</font></p>     <p align="justify"><font face="verdana" size="2">The matrix is 3x4 and the rank of the matrix is 2, since the third row is equal to the second row minus the third. However, we will proceed as though we didn't know the third row is linearly dependent on the first two and let the process discover this fact.</font></p>     ]]></body>
<body><![CDATA[<p align="justify"><font face="verdana" size="2">We now proceed to orthogonalize the rows of <b>A</b>. Let the rows of <b>A</b> be <b>a<sub>1</sub>, a<sub>2</sub>, a<sub>3</sub></b> and let <b>q<sub>1</sub>,... , q<sub>r</sub></b>, where <i>r </i>is the rank of <b>A</b>, be the orthogonalized vectors. For numerical reasons   it   is   more   convenient   not   to normalize the vectors <b>q<sub>i</sub></b> in order to avoid taking square roots which might introduce unnecessary irrational numbers.</font></p>     <p align="justify"><font face="verdana" size="2">We take the first row as the first orthogonalized vector</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e21a.jpg"></font></p>     <p align="justify"><font face="verdana" size="2">The second orthogonalized vector is equal to the second row minus the projection of the second row on the first othogonalized vector, that is</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e21b.jpg"></font></p>     <p align="justify"><font face="verdana" size="2">We now seek the third orthogonalized vector which is equal to the third row minus the sum of its projections on the first and second orthogonalized vectors, that is</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e21c.jpg"></font></p>     <p align="justify"><font face="verdana" size="2">We have obtained the zero vector for the third orthogonalized vector. This shows that the third row of <b>A</b> is linearly dependent on the first two rows, thus the third row can be ignored and we have finished the orthogonalization process. The two vectors obtained (not normalized) are <b>q<sub>1</sub></b> and <b>q<sub>2</sub></b> . We can now find the matrix <b>T</b>.</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e21d.jpg"></font></p>     <p align="justify"><font face="verdana" size="2">The solution to equations (20) is <b>x<sub>s</sub> = T y</b> where y is an arbitrar <b>y</b> vector in the domain of <b>A</b>. For example if we choose <b>y</b> = &#91;5, 0, 0, 0&#93; we obtain the solution <b>x<sub>s</sub></b> = &#91;2, &#150;1, &#150;2, 1&#93;; while if we choose y = &#91;0, 5, 0, 0&#93; we obtain the solution <b>x<sub>s</sub></b> = <sup>..</sup>&#91;&#150;1, 3, 1, 2&#93;. The reader can easily verify that both solutions satisfy equations (20). The two solutions obtained are linearly independent, therefore they span the two dimensional null space of matrix <b>A</b>, (the dimensionality of the null space of <b>A</b> coincides with the rank of matrix <b>T</b> and is also equal to <i>n </i>minus the rank of matrix <b>A</b>.) This means we can write the solution of equations (20) as an arbitrary linear combination of the two individual solutions found</font></p>     ]]></body>
<body><![CDATA[<p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e22.jpg">...........................................................(2)</font></p>     <p align="justify"><font face="verdana" size="2">where <b>c<sub>1</sub> </b>and <b>c<sub>2</sub></b> are arbitrary scalars.</font></p>     <p align="justify"><font face="verdana" size="2">We could also find the solution without resorting to projection matrices by finding two vectors linearly independent of the rows of matrix <b>A</b> and orthogonal to them. (The orthogonality is guaranteed for vectors that are transformed by matrix <b>T</b>). As we mentioned in the text, one possibility is to append candidate vectors to the rows of <b>A</b> and orthogonalize the whole set ignoring those vectors which give the zero vector. If the vectors <b>a<sub>4</sub></b> = &#91;1, 0, 0, 0&#93;, <b>a<sub>5</sub></b> = &#91;0, 1, 0, 0&#93;, <b>a<sub>6</sub></b> = &#91;0, 0, 1, 0&#93; and <b>a<sub>7</sub></b><i>= </i>&#91;0, 0, 0, 1&#93; are appended to the rows of <b>A</b>, we can guarantee that at least 4 vectors of the set <b>{a<sub>i</sub>}</b><i>, </i><i>i = 1, 2, ... , 7</i> are linearly independent. The orthogonalization would proceed in the same way as applied to the three rows of matrix <b>A</b>, its third row being deleted as before. The orthogonalization of the next vector <b>a<sub>4</sub></b> = &#91;1, 0, 0, 0&#93; yields</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e22a.jpg"></font></p>     <p align="justify"><font face="verdana" size="2">Likewise <b>q<sub>4</sub></b> would be</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e22b.jpg"></font></p>     <p align="justify"><font face="verdana" size="2">The reader can verify that <b>q<sub>1</sub>, q<sub>2</sub>, q<sub>3</sub>, q<sub>4</sub></b> are mutually orthogonal by checking that their inner products are zero when the indices are different. Once we have found four orthogonal vectors we can stop the process, since necessarily the last two vectors, being part of an orthonormal basis, will be linearly dependent with respect to the ones found. With <b>q<sub>3</sub></b> and <b>q<sub>4</sub></b> we can write the solution to equations (20) in the following manner</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e22c.jpg"></font></p>     <p align="justify"><font face="verdana" size="2">(The denominators in <b>q<sub>3</sub></b> and <b>q<sub>4</sub></b> can be incorporated into the c's becoming c primes and be eliminated). Although the solutions have some differences in appearance, the pairs of vectors span the same subspace so both solutions are equivalent.</font></p>     <p align="justify"><font face="verdana" size="2">There can be variations to the techniques applied. For example, instead of appending the elements of the natural basis and continuing the process of orthogonalization, we can start a new orthogonalization process for  the  null  space  in  which  additional vectors are obtained by multiplying <b>T</b> by arbitrary vectors <b>y<sub>1</sub>, y<sub>2</sub> </b>obtaining vectors <b>z<sub>1</sub>, z<sub>2</sub></b>. The first vector of the new orthogonalization process can be taken equal to z 1, the next one can be obtained by multiplying z <sub>2</sub> by the matrix</font></p>     ]]></body>
<body><![CDATA[<p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e22d.jpg"></font></p>     <p align="justify"><font face="verdana" size="2">The vector <b>z<sub>1</sub></b> is guaranteed to be orthogonal to <b>q<sub>1</sub> </b>and <b>q<sub>2</sub></b> and <b>z<sub>2</sub></b> multiplied by U will be orthogonal to <b>q<sub>1</sub>, q<sub>2</sub></b>, and <b>z<sub>1</sub></b>. We leave further details to the reader.</font></p>     <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>     <p align="justify"><font face="verdana" size="2"><b>The Non&#150;Homogeneous Case</b></font></p>     <p align="justify"><font face="verdana" size="2">So far we have considered only sets of equations with the right hand vector equal to zero. We now consider non &#150; homogeneous systems of equations such as</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e23.jpg">.............................................................................................(23)</font></p>     <p align="justify"><font face="verdana" size="2">where <b>A*</b> is an <i>m x (n &#150; 1) </i>matrix, <b>x*</b> is an (n &#150; <i>1) </i>&#150; vector and   <b>b</b>  is an <i>m </i>&#150; vector different  from  zero.   This  system  can be converted to  a homogeneous  system by means of a simple trick. Subtract from both sides of the equation the vector <b>b</b> leaving zero in the right hand side and absorbing the vector <b>&#150; b</b> on the left side as an additional column of matrix <b>A*</b> converting it to an m ft matrix <b>A</b>, adding a variable<i> x<sub>n</sub> </i>to the vector <b>x* </b>to convert it into an   <i>n</i>&#150;vector <b>x    </b>and changing    the system to a homogeneous system of the form of equation (3).</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e3.jpg">.................................................................................................(3)</font></p>     <p align="justify"><font face="verdana" size="2">Equations (23) and (3) are equivalent if to equation (3) we add the restriction x n = 1, a condition that can be imposed after the solution of (3) has been obtained. To obtain the solution of equation (3) we apply the methods that have been given in the paper. Equation (3) always has a solution (the zero vector is always a solution) while equation (23) may not have a solution; this situation arises when it is not possible to apply the condition x<sub>n</sub> = 1 at the end, which can happen if all solutions of equation (3) give values to x<sub>n</sub> different from 1, making it impossible to apply the restriction required.</font></p>     <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>     ]]></body>
<body><![CDATA[<p align="justify"><font face="verdana" size="2"><b>Illustrative Example of a Non &#150;Homogeneous System of Equations</b></font></p>     <p align="justify"><font face="verdana" size="2">Consider   the   following   system   of  linear equations (Hadley, 1969).</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e24.jpg">..............................................................................(24)</font></p>     <p align="justify"><font face="verdana" size="2">Equations (24) have the unique solution <i>x<sub>1</sub> = 1, x<sub>2</sub> = 2, x <sub>3</sub> = 3</i>. This can be obtained easily by using Cramer's Rule. The augmented matrix of the system is</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e25.jpg">...........................................................................(25)</font></p>     <p align="justify"><font face="verdana" size="2">We will use rational arithmetic to obtain exact results with the aid of the program <i>Mathematica </i>(Wolfram,  1991).  The instructions in <i>Mathematica </i>for the necessary calculations are:</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6p1.jpg"></font></p>     <p align="justify"><font face="verdana" size="2">The assignment operator in Mathematica is : = . Notice that rectangular matrices are provided to the program as a list of lists, each list is enclosed in braces. Vectors are provided as a single list enclosed in braces. a&#91;&#91;i&#93;&#93; denotes the <i>i </i>&#150; th row of matrix a. Inner products of vectors and as well as the product of rectangular matrices is indicated by a dot between the operands. The addition or subtraction of matrices uses the operators "+" and "&#150;" between the matrices. Products of scalars and vectors or matrices are indicated by yuxta&#150; position. A dyad <i>vec1&gt;&lt;vec2 </i>can be constructed using the function Outer &#91;Times, vec1, vec2&#93; and the result is a square matrix. A unit matrix of order <i>n </i>is generated by the function IdentityMatrix&#91;<i>n</i>&#93;. Division between two scalars uses the symbol /.</font></p>     <p align="justify"><font face="verdana" size="2">When <b>15T</b> is written in the conventional manner we have</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e25a.jpg"></font></p>     ]]></body>
<body><![CDATA[<p align="justify"><font face="verdana" size="2">Absorving the factor 15 into the arbitrary vector <b>y</b> when we multiply <b>T</b> by <b>y</b> we obtain an answer <b>x<sub>s</sub></b> which is always a multiple of the vector &#91;1, 2, 3, 1&#93; (This is due to the fact that all columns of <b>T</b> are multiples of this vector, since <b>T</b> has rank 1), therefore</font></p>     <p align="justify"><font face="verdana" size="2"><img src="/img/revistas/iit/v8n4/a6e25b.jpg"></font></p>     <p align="justify"><font face="verdana" size="2">By choosing <i>k = </i>1 we satisfy the condition that <i>x<sub>n</sub> = </i>1, hence the unique solution to equations (24) is the vector &#91;1, 2, 3&#93;.</font></p>     <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>     <p align="justify"><font face="verdana" size="2"><b>Final Remarks</b></font></p>     <p align="justify"><font face="verdana" size="2">We have given a novel method for solving rectangular homogeneous systems of linear equations. The method is based on applying the Gram&#150;Schmidt process of orthogonalization to the rows of the matrix and either calculating an orthogonal projection matrix and multiplying it by an arbitrary vector, or continuing the orthogonalization process to calculate a basis for the null space of the matrix of coeficients of the original system from which a general solution to the problem is obtained. Although in the paper the discussion was carried out as though the matrix of coefficients had less rows than columns, what is important is the rank of the matrix and the number of variables (dimension of the domain space). Since a homogeneous system always satisfies the trivial zero solution, there arises no question relative to the existance or not of a solution. If the rank <i>r</i> of the matrix is equal to the dimension <i>n </i>of the domain, the zero solution is the only solution, otherwise the rank is less than the dimension of the domain and the solution is the null space of the matrix of coefficients. This null space has dimension <i>n &#150; r. </i>To handle non&#150; homogeneous systems of equations the paper gives a simple trick for converting the problem into an equivalent homogeneous problem whose domain has a dimension larger than the original by one unit. Illustrative numerical examples are given in the paper.</font></p>     <p align="justify"><font face="verdana" size="2">No discussion of the numerical properties of the method nor of its computational efficiency compared to other methods is given in the paper, this is left for a future paper. The illustrative examples are solved exactly using rational arithmetic, thus no approximations are used. The use of the program <i>Mathematica </i>as an aid in the calculations in rational arithmetic is exhibited in one of the illustrations and the actual instructions are shown together with results.</font></p>     <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>     <p align="justify"><font face="verdana" size="2"><b>References</b></font></p>     <!-- ref --><p align="justify"><font face="verdana" size="2">Bentley   D.L.   and   Cooke   K.L.   (1973). <i>Linear Algebra with Differential Equations. 
</i>Holt, Rinehart and Winston, Inc., New York.</font>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=4239450&pid=S1405-7743200700040000600001&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --><!-- ref --><p align="justify"><font face="verdana" size="2">Cullen C. (1966). <i>Matrices and Linear Transformations.    </i>Addison&#150;Wesley    Publishing Company, Reading, MA.</font>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=4239451&pid=S1405-7743200700040000600002&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --><!-- ref --><p align="justify"><font face="verdana" size="2">Dirac P.A.M. (1947). <i>The Principles of Quantum Mechanics. Third Edition, </i>Oxford Uiversity Press, New York.</font>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=4239452&pid=S1405-7743200700040000600003&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --><!-- ref --><p align="justify"><font face="verdana" size="2">Friedman B. (1956). <i>Principles and Techniques of Applied Mathematics. </i>John Wiley &amp; Sons, Inc., New York.</font>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=4239453&pid=S1405-7743200700040000600004&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --><!-- ref --><p align="justify"><font face="verdana" size="2">Goertzel G. and Tralli N. (1960). <i>Some Math e matical Methods of Physics. </i>McGraw&#150;Hill Book Company, Inc., New York.</font>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=4239454&pid=S1405-7743200700040000600005&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --><!-- ref --><p align="justify"><font face="verdana" size="2">Hadley G. (1969). <i>&Aacute;lgebra lineal. </i>Fondo Educativo Interamericano, S.A., Bogot&aacute;.</font>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=4239455&pid=S1405-7743200700040000600006&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --><!-- ref --><p align="justify"><font face="verdana" size="2">Hildebrand F.B. (1952). <i>Methods of Applied Mathematics. </i>Prentice&#150;Hall, Inc., Englewood Cliffs, NJ.</font>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=4239456&pid=S1405-7743200700040000600007&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --><!-- ref --><p align="justify"><font face="verdana" size="2">Smirnov V.I. (1970). <i>Linear Algebra and </i><i>Group Theory. 
</i>Dover Publications, Inc., New York.</font></p>     <p align="justify"><font face="verdana" size="2">Westlake J.R. (1968). <i>A Handbook of Numerical Matrix Inversion and Solution of Linear Equations. </i>John Wiley &amp; Sons, Inc., New York.</font></p>     <p align="justify"><font face="verdana" size="2">Wolfram S. (1991). <i>Mathematica: A System for Doing Mathematics by Computer, Second Edition. </i>Addison&#150;Wesley Publishing Company, Inc., Redwood City, CA.</font></p>     <p align="justify"><font face="verdana" size="2">Zadeh L.A. and Desoer C.A. (1963). <i>Linear System Theory: The State Space Approach. </i>McGraw&#150;Hill Book Company, Inc., New York.</font></p>     <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>     <p align="justify"><font face="verdana" size="2"><b><a name="a1"></a>About the Author</b></font></p>     <p align="justify"><font face="verdana" size="2"><i>Marco Antonio Murray&#150;Lasso. </i>He earned his bachelor's degree in mechanical&#150;electrical engineering at the Facultad de Ingenier&iacute;a of the UNAM. The Massachusetts Institute of Technology (MIT) awarded him the degrees of Master of Science in electrical engineering and Doctor of Science in cybernetics. In Mexico, he has worked as a researcher at the Instituto de Ingenier&iacute;a and, for 45 years, as a professor at the Facultad de Ingenier&iacute;a (UNAM); abroad, he has been an advisor to NASA on computer&#150;aided circuit design for space applications, a researcher at Bell Laboratories, and a professor at Case Western Reserve University and the Newark College of Engineering in the United States. He was the founding president of the Academia Nacional de Ingenier&iacute;a de M&eacute;xico; vice president and president of the Council of Academies of Engineering and Technological Sciences (a worldwide organization, headquartered in Washington, that brings together the National Academies of Engineering); and secretary of the Academia Mexicana de Ciencias. 
He is currently head of the Computer&#150;Aided Teaching Unit (Unidad de Ense&ntilde;anza Auxiliada por Computadora) of the Department of Systems Engineering, Division of Mechanical and Industrial Engineering, Facultad de Ingenier&iacute;a, UNAM. He is an honorary member of the Academia de Ingenier&iacute;a and of the Academia Mexicana de Ciencias, Artes, Tecnolog&iacute;a y Humanidades.</font></p>      ]]></body><back>
<ref-list>
<ref id="B1">
<nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Bentley]]></surname>
<given-names><![CDATA[D.L]]></given-names>
</name>
<name>
<surname><![CDATA[Cooke]]></surname>
<given-names><![CDATA[K.L]]></given-names>
</name>
</person-group>
<source><![CDATA[Linear Algebra with Differential Equations]]></source>
<year>1973</year>
<publisher-loc><![CDATA[New York ]]></publisher-loc>
<publisher-name><![CDATA[Holt, Rinehart and Winston, Inc]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B2">
<nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Cullen]]></surname>
<given-names><![CDATA[C]]></given-names>
</name>
</person-group>
<source><![CDATA[Matrices and Linear Transformations]]></source>
<year>1966</year>
<publisher-loc><![CDATA[Reading, MA]]></publisher-loc>
<publisher-name><![CDATA[Addison-Wesley Publishing Company]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B3">
<nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Dirac]]></surname>
<given-names><![CDATA[P.A.M]]></given-names>
</name>
</person-group>
<source><![CDATA[The Principles of Quantum Mechanics]]></source>
<year>1947</year>
<edition>Third</edition>
<publisher-loc><![CDATA[New York ]]></publisher-loc>
<publisher-name><![CDATA[Oxford University Press]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B4">
<nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Friedman]]></surname>
<given-names><![CDATA[B]]></given-names>
</name>
</person-group>
<source><![CDATA[Principles and Techniques of Applied Mathematics]]></source>
<year>1956</year>
<publisher-loc><![CDATA[New York ]]></publisher-loc>
<publisher-name><![CDATA[John Wiley & Sons, Inc]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B5">
<nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Goertzel]]></surname>
<given-names><![CDATA[G]]></given-names>
</name>
<name>
<surname><![CDATA[Tralli]]></surname>
<given-names><![CDATA[N]]></given-names>
</name>
</person-group>
<source><![CDATA[Some Mathematical Methods of Physics]]></source>
<year>1960</year>
<publisher-loc><![CDATA[New York ]]></publisher-loc>
<publisher-name><![CDATA[McGraw-Hill Book Company, Inc]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B6">
<nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Hadley]]></surname>
<given-names><![CDATA[G]]></given-names>
</name>
</person-group>
<source><![CDATA[Álgebra lineal]]></source>
<year>1969</year>
<publisher-loc><![CDATA[Bogotá ]]></publisher-loc>
<publisher-name><![CDATA[Fondo Educativo Interamericano, S.A]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B7">
<nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Hildebrand]]></surname>
<given-names><![CDATA[F.B]]></given-names>
</name>
</person-group>
<source><![CDATA[Methods of Applied Mathematics]]></source>
<year>1952</year>
<publisher-loc><![CDATA[Englewood Cliffs, NJ]]></publisher-loc>
<publisher-name><![CDATA[Prentice-Hall, Inc]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B8">
<nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Smirnov]]></surname>
<given-names><![CDATA[V.I]]></given-names>
</name>
</person-group>
<source><![CDATA[Linear Algebra and Group Theory]]></source>
<year>1970</year>
<publisher-loc><![CDATA[New York ]]></publisher-loc>
<publisher-name><![CDATA[Dover Publications, Inc]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B9">
<nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Westlake]]></surname>
<given-names><![CDATA[J.R]]></given-names>
</name>
</person-group>
<source><![CDATA[A Handbook of Numerical Matrix Inversion and Solution of Linear Equations]]></source>
<year>1968</year>
<publisher-loc><![CDATA[New York ]]></publisher-loc>
<publisher-name><![CDATA[John Wiley & Sons, Inc]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B10">
<nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Wolfram]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
</person-group>
<source><![CDATA[Mathematica: A System for Doing Mathematics by Computer]]></source>
<year>1991</year>
<edition>Second</edition>
<publisher-loc><![CDATA[Redwood City, CA]]></publisher-loc>
<publisher-name><![CDATA[Addison-Wesley Publishing Company, Inc]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B11">
<nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Zadeh]]></surname>
<given-names><![CDATA[L.A]]></given-names>
</name>
<name>
<surname><![CDATA[Desoer]]></surname>
<given-names><![CDATA[C.A]]></given-names>
</name>
</person-group>
<source><![CDATA[Linear System Theory: The State Space Approach]]></source>
<year>1963</year>
<publisher-loc><![CDATA[New York ]]></publisher-loc>
<publisher-name><![CDATA[McGraw-Hill Book Company, Inc]]></publisher-name>
</nlm-citation>
</ref>
</ref-list>
</back>
</article>
