AI-101

Machine Learning and Artificial Intelligence Foundations · Mechanics-First Laboratory

AI-101 is a structured, book-and-notebook learning environment for studying machine learning and artificial intelligence through mechanism rather than mystique. The goal is not to produce decorative demos or vague “AI familiarity.” The goal is to make models operationally understandable: what they optimize, how they learn, why architectures differ, where performance comes from, and where limitations begin.

This repository treats machine learning systems as explicit computational objects with data, parameters, architectures, losses, optimization dynamics, inference behavior, and failure modes. Instead of hiding that structure behind polished summaries, it makes the structure visible through chapter-aligned notebooks and a companion book that explains the conceptual progression from foundational learning systems to deep architectures, generative systems, probabilistic models, graph learning, and evolutionary search.

The repository is designed for students, MBA and Master of Finance cohorts, professional practitioners, and independent learners who want a serious introduction to AI foundations without marketing language, black-box pedagogy, or performative complexity. The book provides the conceptual spine. The notebooks provide the executable laboratories. Together they create a mechanics-first pathway into machine learning literacy.

Core premise: Mechanism first. Interpretation second. Hype never.
A model is not understood because it has a name. It is understood when its architecture, learning behavior, assumptions, and limits can be explained and observed.

The repository structure is deliberately simple: /book/, /notebooks/, and README.md.

The Book

The book is the conceptual framework of the repository. It explains why each model family matters, what problem it was designed to solve, how its internal mechanism works, how optimization interacts with its structure, what kinds of data it assumes, and why each chapter leads naturally into the next.

Book Folder

The book materials sit in /book/. This is the narrative and theoretical spine of the project: machine learning foundations, deep architectures, generative systems, probabilistic models, relational learning, and search-based optimization.

The book is meant to be read as a cumulative learning journey rather than a disconnected catalog of techniques.

Pedagogical role: the book explains the theory and sequence; the notebooks let the learner observe the mechanism in action.

The Notebooks

The notebooks are the executable laboratories of AI-101. Each notebook corresponds to a chapter and turns that chapter’s concepts into runnable experiments. Instead of merely describing a model family, each notebook defines it, trains it, visualizes it, and exposes its behavior under explicit, controlled settings.

Controlled Experiments

Each notebook is designed to make the structure of learning visible: data generation or preparation, model definition, optimization, evaluation, plotting, and interpretation of results.

The learner sees not only what the model can do, but also how it fails, saturates, overfits, generalizes, or responds to changes in hyperparameters and architecture.

Chapter Alignment

The notebooks are not generic AI demos. They are organized to align directly with the chapter sequence of the book so that conceptual explanation and executable evidence remain tightly connected.

This keeps the learning process cumulative and coherent rather than fragmented across unrelated examples.

Reproducible Learning

Where appropriate, notebooks use explicit setup steps, controlled randomness, synthetic or structured data, and visible diagnostics so results can be rerun, inspected, and compared.

The goal is not theatrical output. The goal is repeatable understanding.

Chapter Sequence

The repository is organized around 10 chapters, each representing a major family of ideas in machine learning and artificial intelligence. The sequence is cumulative: each chapter resolves limitations left partially open by the previous one and expands the learner’s understanding of what learning systems can represent.

Chapter 1 — Foundations of Learning Systems

Introduces the core logic of machine learning: data, parameters, loss, optimization, prediction, and generalization. This is the conceptual floor for the entire repository.
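That core loop can be sketched end to end in a few lines. This is an illustrative toy (synthetic data, a one-weight linear model with bias), not code from the repository's notebooks:

```python
import random
random.seed(0)                     # controlled randomness, as the notebooks recommend

# Synthetic data near the line y = 2x + 1 (a toy target chosen for illustration)
data = [(k / 10, 2 * (k / 10) + 1 + random.gauss(0, 0.01)) for k in range(20)]

w, b = 0.0, 0.0                    # parameters
lr = 0.1                           # learning rate (hyperparameter)

for epoch in range(500):
    # Gradients of mean squared error with respect to w and b
    gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * gw                   # optimization: move parameters against the loss
    b -= lr * gb
# After training, (w, b) should sit near the generating values (2, 1).
```

Every later chapter elaborates some component of this loop: richer models, richer losses, richer optimization.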



Chapter 2 — Dense Neural Networks

Studies fully connected networks, nonlinear activation, backpropagation, and the first major step beyond linear models into flexible parametric function approximation.
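As a minimal sketch of that step, a fully connected network with one tanh hidden layer can learn XOR, a task no linear model can fit. All hyperparameters here are toy choices, not the notebook's actual code:

```python
import math, random
random.seed(1)

# XOR is not linearly separable: no single linear boundary fits it.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [0, 1, 1, 0]

H = 4                              # hidden width (a toy hyperparameter)
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0
lr = 0.5

def forward(x):
    h = [math.tanh(sum(w1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(H)]
    return h, sum(w2[j] * h[j] for j in range(H)) + b2

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in zip(X, Y)) / len(X)

first = mse()
for epoch in range(2000):
    for x, y in zip(X, Y):
        h, out = forward(x)
        d_out = 2 * (out - y)                      # dLoss/dOut
        for j in range(H):
            d_h = d_out * w2[j] * (1 - h[j] ** 2)  # chain rule; tanh' = 1 - tanh^2
            w2[j] -= lr / len(X) * d_out * h[j]
            for i in range(2):
                w1[j][i] -= lr / len(X) * d_h * x[i]
            b1[j] -= lr / len(X) * d_h
        b2 -= lr / len(X) * d_out
final = mse()   # loss should fall well below its value at random initialization
```

The backward pass is written out by hand here precisely because the chapter's point is the mechanism, not the framework call that hides it.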


Chapter 3 — Convolutional Neural Networks

Introduces locality, receptive fields, and weight sharing, showing why architecture matters when the data has spatial structure such as images or grid-like signals.
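Weight sharing can be made concrete in a few lines: one small kernel slides across the whole input, so the same parameters respond to the same local pattern wherever it occurs. The image and kernel below are toy values chosen for illustration:

```python
def conv2d(img, kernel):
    # Valid 2-D cross-correlation: ONE kernel is reused at every location
    # (weight sharing), and each output sees only a small patch (locality).
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(len(img) - kh + 1):
        row = []
        for c in range(len(img[0]) - kw + 1):
            row.append(sum(kernel[i][j] * img[r + i][c + j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

img = [[0, 0, 0, 1, 1, 1]] * 5      # a vertical edge between columns 2 and 3
edge_kernel = [[-1, 0, 1]] * 3      # responds to left-to-right intensity change
feat = conv2d(img, edge_kernel)     # strong responses appear only at the edge
```

A dense layer would need separate weights for every pixel position; the kernel's handful of shared weights detect the edge anywhere in the image.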


Chapter 4 — Recurrent Models and Sequential Structure

Focuses on temporal ordering, hidden state, memory, and the challenge of learning dependencies over sequences.
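The role of hidden state can be seen in a toy recurrence with hand-set weights (purely illustrative; a real RNN would learn them): the hidden state is the only channel through which earlier inputs influence later ones, so order matters.

```python
import math

# Hand-set weights, illustrative only (an RNN would learn these).
W_in, W_rec = 1.0, 0.5

def run(seq):
    h = 0.0                                  # hidden state: the model's only memory
    history = []
    for x in seq:
        h = math.tanh(W_in * x + W_rec * h)  # new state mixes input and past state
        history.append(h)
    return history

a = run([1, 0, 0])
b = run([0, 0, 1])
# Same multiset of inputs, different order -> different final hidden state.
```

The same decaying influence of early inputs previews the chapter's central challenge: long-range dependencies fade as the state is repeatedly squashed.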


Chapter 5 — Autoencoders and Representation Learning

Explores latent structure, compression, reconstruction, and the logic of learning informative internal encodings.
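A minimal sketch of the idea, assuming a tied-weight linear autoencoder with a one-number latent code (a toy choice, not the notebook's model): 2-D points lying on a line can be compressed to one coordinate and reconstructed almost perfectly.

```python
# Points on the line y = 2x: intrinsically one-dimensional data in 2-D space.
data = [(t / 5, 2 * t / 5) for t in range(-5, 6)]

def recon_loss(w):
    total = 0.0
    for x in data:
        z = w[0] * x[0] + w[1] * x[1]      # encode: 2-D point -> one latent number
        xhat = (w[0] * z, w[1] * z)        # decode: latent -> 2-D reconstruction
        total += (x[0] - xhat[0]) ** 2 + (x[1] - xhat[1]) ** 2
    return total / len(data)

# Crude but transparent training: finite-difference gradient descent.
w = [0.3, 0.1]
eps, lr = 1e-5, 0.05
start = recon_loss(w)
for _ in range(300):
    grad = []
    for i in range(2):
        w[i] += eps; hi = recon_loss(w)
        w[i] -= 2 * eps; lo = recon_loss(w)
        w[i] += eps
        grad.append((hi - lo) / (2 * eps))
    w = [wi - lr * gi for wi, gi in zip(w, grad)]
end = recon_loss(w)   # near zero: the bottleneck found the data's true dimension
```

The reconstruction objective alone pushes the encoder toward an informative internal encoding, which is the chapter's core claim.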


Chapter 6 — Transformers and Attention

Explains attention as learned relational weighting across a sequence and develops intuition for contextual interaction without recurrence.
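Attention as relational weighting reduces to a few lines of scaled dot-product arithmetic. The vectors below are toy values chosen so the query matches the second key:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    d = len(query)
    # Score each key against the query, scaled as in dot-product attention.
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)        # scores -> probability distribution
    # Output: values averaged under that distribution.
    out = [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]
    return out, weights

out, weights = attention([0.0, 1.0],
                         [[1.0, 0.0], [0.0, 1.0]],
                         [[10.0, 0.0], [0.0, 10.0]])
# The query matches the second key, so the weighting shifts toward its value.
```

Every position can attend to every other in one step, which is how the mechanism captures contextual interaction without recurrence.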


Chapter 7 — Generative Adversarial Networks

Introduces adversarial learning through generator-discriminator competition and demonstrates how models can learn to generate data-like outputs.
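The competitive loop can be caricatured with one-parameter players; this is a deliberately degenerate sketch of the alternating-update structure, not a working image GAN:

```python
import math, random
random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

real_mean = 3.0        # "real" data lives near 3
theta = 0.0            # generator: G(z) = theta + z  (a single parameter)
a, b = 0.0, 0.0        # discriminator: D(x) = sigmoid(a*x + b)
lr_d, lr_g = 0.1, 0.1

for step in range(500):
    real = real_mean + random.gauss(0, 0.1)
    fake = theta + random.gauss(0, 0.1)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(a * x + b)
        a += lr_d * (label - p) * x
        b += lr_d * (label - p)

    # Generator step: non-saturating objective, ascend log D(fake).
    fake = theta + random.gauss(0, 0.1)
    p = sigmoid(a * fake + b)
    theta += lr_g * (1.0 - p) * a

# The generator's output location is pushed toward where the real data lives.
```

Even in this toy, the defining dynamic is visible: neither player minimizes a fixed loss; each optimizes against a moving opponent.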


Chapter 8 — Variational and Probabilistic Deep Learning

Moves beyond deterministic mappings to uncertainty-aware models, latent distributions, sampling, and probabilistic interpretation.
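One key mechanism the chapter relies on, reparameterizing a Gaussian sample as a deterministic function of parameters plus parameter-free noise, fits in a couple of lines (toy parameter values):

```python
import random, statistics
random.seed(0)

mu, sigma = 1.5, 0.5   # latent distribution parameters (toy values)

# Reparameterization: a draw from N(mu, sigma^2) written as mu + sigma * eps
# with eps ~ N(0, 1). Because mu and sigma enter deterministically, gradients
# can flow through the sampling step, which is what a VAE needs.
samples = [mu + sigma * random.gauss(0, 1) for _ in range(10000)]

m = statistics.mean(samples)   # should estimate mu
s = statistics.stdev(samples)  # should estimate sigma
```

The output is no longer a point prediction but a distribution one can sample from and reason about.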


Chapter 9 — Graph Neural Networks

Shifts the learning setting to relational systems, showing how information propagates across nodes and edges in non-Euclidean structures.
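Message passing can be sketched on a four-node path graph with toy features and a simple averaging rule (chosen for illustration): a node's signal reaches another node only after enough rounds to cross the intervening edges.

```python
# A four-node path graph: 0 - 1 - 2 - 3, one scalar feature per node.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
h0 = {0: 1.0, 1: 0.0, 2: 0.0, 3: 0.0}   # only node 0 starts with signal

def propagate(h, adj):
    # One message-passing round: each node averages its neighbors'
    # features (the "message") and mixes the result with its own state.
    new = {}
    for node, nbrs in adj.items():
        msg = sum(h[n] for n in nbrs) / len(nbrs)
        new[node] = 0.5 * h[node] + 0.5 * msg
    return new

h1 = propagate(h0, adj)   # after one round, the signal reaches node 1
h2 = propagate(h1, adj)   # after two rounds, it reaches node 2: two hops, two rounds
```

A real GNN replaces the fixed averaging rule with learned transformations, but the propagation structure is exactly this.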


Chapter 10 — Evolutionary and Search-Based Learning

Broadens the optimization perspective beyond gradients, exploring mutation, selection, population-based search, and adaptive optimization logic in complex spaces.
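A minimal mutation-plus-selection loop shows the gradient-free logic; the objective and population sizes are toy choices:

```python
import random
random.seed(0)

target = 4.0
def badness(x):
    return abs(x - target)   # objective only; no gradients are used anywhere

pop = [random.uniform(-10, 10) for _ in range(20)]       # random initial population
for gen in range(100):
    # Mutation: each survivor produces perturbed offspring.
    children = [p + random.gauss(0, 0.5) for p in pop for _ in range(2)]
    # Selection: keep the 20 fittest of parents + children (elitist survival).
    pop = sorted(pop + children, key=badness)[:20]
best = pop[0]   # converges near the target without any derivative information
```

Because the loop needs only the ability to evaluate candidates, it applies to non-smooth, discontinuous, or structurally complex search spaces where gradients do not exist.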

Learning invariant: each chapter is meant to show not only what a model family can do, but what assumptions it makes, what structure it encodes, and what limitations it carries.

What This Repository Teaches

Across the full project, AI-101 is designed to teach a particular habit of mind: not “the model gave an answer,” but “under these assumptions, with this architecture, trained on this data, the model behaved in this way.”

Training vs Inference

Learners see clearly that training is parameter adjustment under a loss, while inference is the application of an already learned mapping. This distinction is foundational and often blurred in casual AI discussion.
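The distinction can be made literal in a few lines (toy data; the model and learning rate are illustrative choices):

```python
# Training: parameters change under a loss signal (gradient steps on MSE).
model = {"w": 0.0}
train_data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # y = 2x exactly
for _ in range(200):
    for x, y in train_data:
        model["w"] -= 0.05 * 2 * (model["w"] * x - y) * x

# Inference: the frozen mapping is applied; no parameters change.
def predict(m, x):
    return m["w"] * x
```

Everything above the comment is training; everything below is inference. Conflating the two is a common source of confusion about what a deployed model is doing.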

Architecture Matters

Dense nets, CNNs, recurrent models, transformers, graph networks, and evolutionary methods are not merely different names. They encode different assumptions about data and computation.

Failure Is Part of the Subject

Overfitting, instability, poor inductive bias, optimization difficulty, sensitivity to noise, and structural mismatch are treated as central educational objects rather than side comments.

Recommended Way to Use the Repository


Start with the relevant chapter materials in /book/, then open the corresponding notebook in /notebooks/. Run the notebook once exactly as written. After that, change one thing at a time: learning rate, number of epochs, hidden width, noise level, sample size, or model depth. Observe what changes and what does not. The point is not to collect outputs. The point is to build mechanism-level intuition.
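The one-change-at-a-time habit can itself be sketched (a toy linear model; the three learning rates are illustrative): hold everything fixed, vary one knob, and compare a single diagnostic.

```python
import random

def train(lr, epochs=200, seed=0):
    random.seed(seed)       # controlled randomness: identical data on every run
    data = [(x / 10, 2 * (x / 10) + random.gauss(0, 0.05)) for x in range(1, 21)]
    w = 0.0
    for _ in range(epochs):
        g = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * g
    # Diagnostic: final training loss.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

# One hyperparameter varies; everything else is held fixed.
results = {lr: train(lr) for lr in (0.001, 0.1, 2.0)}  # too small / reasonable / too large
```

Too small a learning rate leaves the model undertrained, too large makes it diverge, and only the comparison under fixed conditions makes that visible.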

This repository works best when treated as a laboratory rather than a slideshow. Understanding grows when the learner sees how the system behaves under controlled perturbation.

Repository Structure

Current Layout

The repository currently follows a deliberately simple structure: /book/ for the conceptual framework, /notebooks/ for chapter-aligned Colab laboratories, and README.md for orientation.

This keeps the public-facing educational pathway clean and easy to navigate.

Book URL Pattern

Book materials live under: https://github.com/alexdibol/ai-101/tree/main/book

If a compiled PDF is added later, it will follow the standard pattern: /book/<BOOK_FILENAME>.pdf

Notebook URL Pattern

Chapter notebooks live under: https://github.com/alexdibol/ai-101/tree/main/notebooks

Individual chapters follow the pattern: /notebooks/CHAPTER%20N.ipynb

Educational Positioning

AI-101 is an educational and research repository. It is not a production platform, not a commercial performance claim, and not a substitute for domain validation.

Nothing in this repository constitutes investment advice, trading advice, legal advice, tax advice, accounting advice, audit advice, regulatory advice, or professional certification of any kind.

Any real-world use of machine learning or artificial intelligence systems requires domain-appropriate validation, professional review, and human accountability.

License: MIT License
Copyright (c) 2026 Alejandro Reynoso