Orthogonal Sets

PreviousOrthogonalityNextProjections

Last updated 5 years ago

Was this helpful?

Orthogonal sets

Definition: two vectors are orthogonal when they are perpendicular to each other (the angle between them is 90°).

The dot product of orthogonal vectors equals zero. It clicked when I remembered what a dot product actually is: "how much of one vector goes in the direction of another." If two vectors are orthogonal, then no amount of one goes in the direction of the other, like how a tree casts no shadow at noon.

Computationally, the dot product is a row times a column: x · y = x^T y.
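
As a quick numerical check (a minimal sketch with made-up vectors, using NumPy), the dot product of two orthogonal vectors comes out to zero:

```python
import numpy as np

# Two vectors in R^2 that are visibly perpendicular.
x = np.array([2.0, 0.0])   # points along the x-axis
y = np.array([0.0, 3.0])   # points along the y-axis

# Dot product as "row times column": x^T y.
print(x @ y)               # 0.0 -> orthogonal, no shadow cast

# A non-orthogonal pair for contrast.
z = np.array([1.0, 1.0])
print(x @ z)               # 2.0 -> some of x goes in the direction of z
```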

We're going to show that certain subspaces are orthogonal to each other; first, let's see why the dot product test works for vectors.

For a right triangle, the Pythagorean theorem tells us that ||x||^2 + ||y||^2 = ||x + y||^2.

Say we have x and an orthogonal vector y, so that x, y, and x + y form a right triangle. Expanding the right-hand side:

||x + y||^2 = (x + y)^T (x + y) = x^T x + x^T y + y^T x + y^T y = ||x||^2 + 2 x^T y + ||y||^2

Setting this equal to ||x||^2 + ||y||^2 (which only holds when we have a right triangle), everything cancels except the cross term, so 2 x^T y = 0.

Therefore the dot product of orthogonal vectors = 0.
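
A quick sanity check of this identity (a minimal sketch with made-up vectors): the gap between the two sides is exactly the cross term 2 x^T y, so it vanishes only for orthogonal vectors.

```python
import numpy as np

def pythagoras_gap(x, y):
    """Return ||x + y||^2 - (||x||^2 + ||y||^2), which equals 2 * (x . y)."""
    return np.linalg.norm(x + y) ** 2 - (np.linalg.norm(x) ** 2 + np.linalg.norm(y) ** 2)

x = np.array([1.0, 2.0, 2.0])
y = np.array([2.0, 1.0, -2.0])                   # x . y = 2 + 2 - 4 = 0, so orthogonal

print(np.isclose(pythagoras_gap(x, y), 0.0))      # True: Pythagoras holds
print(np.isclose(pythagoras_gap(x, x + y), 0.0))  # False: x and x + y are not orthogonal
```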

Definition of orthogonal subspaces: two subspaces are orthogonal when every vector in one is orthogonal to every vector in the other.

This means that if two subspaces intersect in a nonzero vector, they are definitely not orthogonal. The wall and the floor, if we think of them as subspaces (planes through the origin), are not orthogonal, because they share the line where they meet.
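
To make the wall-and-floor picture concrete (a minimal sketch; the basis vectors are chosen for illustration), the two planes share the direction e1, and e1 is not orthogonal to itself:

```python
import numpy as np

# Floor: the xy-plane, spanned by e1 and e2.
# Wall:  the xz-plane, spanned by e1 and e3.
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])
e3 = np.array([0.0, 0.0, 1.0])

# e1 lies in both planes, but e1 . e1 = 1 != 0,
# so the two planes cannot be orthogonal subspaces.
print(e1 @ e1)   # 1.0
```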

Now why is the row space orthogonal to the nullspace?

Well, the nullspace consists of the solutions to Ax = 0. Writing that out row by row, we have:

(row 1 of A) · x = 0
(row 2 of A) · x = 0
...
(row m of A) · x = 0

Since each row dotted with x gives 0, x is orthogonal to every row of A. And x is also orthogonal to every linear combination of the rows, c1 · row1 + c2 · row2 + ..., because a combination of zero dot products is still zero. So x is orthogonal to the entire row space.

In the same way, the nullspace of A^T (the left nullspace) is orthogonal to the column space of A.

Example: in 3 dimensions, suppose the row space of A is a line.

Its dimension = 1, and therefore the dimension of the nullspace is 3 - 1 = 2. So the row-space line is orthogonal to the plane N(A). We couldn't have had the row space and the nullspace both be lines in R^3, because their dimensions (1 + 1) don't add up to 3.

The main rule is that the row space and the nullspace are orthogonal complements in R^n: their dimensions add up to n (r + (n - r) = n), and the nullspace contains every vector that is orthogonal to the row space.
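
A quick numerical check of both facts (a minimal sketch with a made-up matrix, using the SVD to get a nullspace basis):

```python
import numpy as np

# A has one independent row, so its row space is a line in R^3.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])            # second row is a multiple of the first

n = A.shape[1]
rank = np.linalg.matrix_rank(A)

# Rows of vt beyond the rank form an orthonormal basis for N(A).
_, _, vt = np.linalg.svd(A)
null_basis = vt[rank:]

print(rank, null_basis.shape[0], n)        # 1 2 3 -> dimensions add up to n
print(np.allclose(A @ null_basis.T, 0.0))  # True: nullspace vectors are orthogonal to every row
```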

Solving Ax=b when there are no solutions

In real-life data there is often noise in b, and we have more equations than unknowns.

Pulse rate, for example: you measure it multiple times and the measurements don't quite agree.

We can't expect to solve Ax = b exactly; there are errors and noise in b. But there's still a lot of information in there, so let's try to separate the noise from x. What's the best solution? What do we do when elimination tells us there are no solutions?

It's interesting to look at the matrix A^T A. What do we know about it? It's square (n x n) and symmetric, since (A^T A)^T = A^T (A^T)^T = A^T A.

We also want to know: is it invertible? If not, what's its nullspace?

The central equation we'll want to solve is the normal equation A^T A x̂ = A^T b, with an x̂ (x-hat, our estimate) that we hope will have a solution.

That's why we're interested in A^T A and its invertibility. So when is it invertible?

A^T A is invertible exactly when A has independent columns, which means N(A) = {0}.
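
As an illustration (a minimal sketch; the numbers are made up), here is an overdetermined system solved through the normal equation A^T A x̂ = A^T b, compared against NumPy's built-in least-squares solver:

```python
import numpy as np

# Overdetermined system: 5 noisy measurements, 2 unknowns (intercept and slope).
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
b = np.array([1.1, 1.9, 3.2, 3.9, 5.1])       # roughly b = 1 + t, plus noise
A = np.column_stack([np.ones_like(t), t])      # independent columns, so A^T A is invertible

# Normal equation: A^T A x_hat = A^T b.
AtA = A.T @ A
print(np.allclose(AtA, AtA.T))                 # True: A^T A is symmetric
x_hat = np.linalg.solve(AtA, A.T @ b)

# Same answer from the library routine.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_hat, x_lstsq))             # True
print(x_hat)                                   # approx [1.04, 1.00]
```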