JulienBeaulieu
Books / A Mind For Numbers: How to Excel at Math and Science / Seeing the bigger picture

The value of a Library of Chunks

Last updated 5 years ago


The ability to combine chunks in new and original ways underlies much of historical innovation. Bill Gates and other industry leaders set aside extended, week-long reading periods so that they can hold many varied ideas in mind at one time. This fuels their own innovative thinking by allowing fresh, not-yet-forgotten ideas to network with one another.

Basically, what people do to enhance their knowledge and gain expertise is gradually build the number of chunks in their mind: valuable bits of information they can piece together in new and creative ways. Chess masters, for example, can easily access thousands of different chess patterns. Musicians, linguists, and scientists can each access similar chunks of knowledge in their own disciplines.

The bigger and better practiced your chunked mental library, whatever the subject you're learning, the more easily you'll be able to solve problems and figure out solutions.

Chunks can be transferred from one area to another: concepts from one discipline can be useful in others.

If you have a library of concepts and solutions internalized as chunked patterns, you can think of it as a collection, or library, of neural patterns. When you're trying to figure something out, a good library of these chunks lets you skip more easily to the right solution by, metaphorically speaking, listening to whispers from your diffuse mode. Your diffuse mode can help you connect two or more chunks together in new ways to solve novel problems.

Another way to think of it is this: as you build each chunk, it fills in a part of your larger knowledge picture. But if you don't practice with your growing chunks, they remain faint, and it's harder to put together the big picture of what you're trying to learn. In building a chunked library, you're training your brain to recognize not only a specific concept, but different types and classes of concepts, so that you automatically know how to quickly solve or handle whatever you encounter.

Problem solving

Problems can be solved in two ways: first, through sequential, step-by-step reasoning, and second, through a more holistic intuition. Sequential thinking, where each small step leads deliberately toward a solution, involves the focused mode. Intuition, on the other hand, often seems to require the creative, diffuse-mode linking of several seemingly different focused-mode thoughts.

You may think there are so many problems and concepts in just a single section or chapter of whatever you're studying that there's no way to learn them all. This is where the Law of Serendipity comes into play: Lady Luck favors the one who tries. Just focus on whichever section you're studying. You'll find that once you put the first problem or concept into your mental library, whatever it is, the second concept goes in a little more easily, and the third more easily still.