AI Prompt Engineering
AI Prompt Engineering is a governance-first laboratory for designing, testing, and reviewing prompt systems as institutional control mechanisms, not as clever wording tricks. In professional settings, bad prompting is not merely a quality issue. It is a control failure: instructions blur with assumptions, evidence boundaries erode, output schemas drift, and persuasive prose can conceal the absence of verification.
This repository treats prompting as an engineering discipline. A prompt is not decoration. It is a contract: explicit objective, explicit boundary, explicit schema, explicit refusal logic, explicit termination, explicit artifacts. The purpose is not to “get better answers.” The purpose is to create reviewable prompt pipelines that a human can inspect, reproduce, challenge, and improve without relying on model mystique.
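The contract framing above can be made concrete in code. The sketch below is a minimal, hypothetical representation of a prompt contract as a checked data structure; the field names and validator are illustrative assumptions, not the repository's actual schema.

```python
# Hypothetical prompt-contract sketch; field names are our own assumptions,
# not the repository's canonical schema.
REQUIRED_FIELDS = {"objective", "boundary", "output_schema",
                   "refusal_logic", "termination", "artifacts"}

def validate_contract(contract: dict) -> list:
    """Return the contract fields that are missing or empty."""
    return sorted(f for f in REQUIRED_FIELDS if not contract.get(f))

contract = {
    "objective": "Draft a one-page summary of the supplied evidence packet.",
    "boundary": "Use only the evidence packet; no outside knowledge.",
    "output_schema": {"facts": list, "assumptions": list, "open_items": list, "draft": str},
    "refusal_logic": "Refuse if the evidence packet is empty or unlabeled.",
    "termination": "Stop after one draft; do not self-extend scope.",
    "artifacts": ["run_manifest.json", "prompts_log.jsonl"],
}
missing = validate_contract(contract)  # [] when every contract field is present
```

A contract that fails this kind of check is, by the repository's own definition, uncontrolled prompting: the gap itself becomes a review finding.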
Each notebook implements one governed prompting pattern—bounded drafting, schema-first generation, context packing, multi-pass critique and revision, tool-call discipline, and promotion gates. Outputs are marked Not verified unless independently validated by qualified humans. That is not legal wallpaper. It is the operating posture of the entire project.
Better prompting means clearer boundaries, safer failure modes, stricter evidence separation, stronger reproducibility, and cleaner handoff to human review.
Notebook path convention: /notebooks/CHAPTER%20N.ipynb with URL-encoded spaces.
The Book
The book is the governing specification for the repository. It defines what counts as a sound prompt, what counts as a failure mode, and what evidence is required before a prompt can be promoted. The notebooks are operational laboratories: they instantiate the prompt contracts, execute them under bounded conditions, and produce artifacts that make the run auditable.
AI Prompt Engineering Book (PDF)
Covers prompt contracts, role boundaries, schema-first prompting, refusal logic, bounded context, multi-pass review, promotion criteria, regression discipline, and artifact-based accountability.
The emphasis is institutional: prompting is treated as a governed operating layer, not a collection of one-off hacks.
Read Book (PDF)
Governed Notebooks (Prompt Patterns as Control Systems)
These notebooks do not aim to show “what the model can do.” They show what the prompt system can safely force the model to do under constraint. Each chapter operationalizes a prompting pattern, introduces its control objective, exposes its failure modes, and records whether the run should be promoted, revised, or rejected.
Chapter 1 — Why Prompting Needs Governance
Establishes the central claim of the repository: prompting is a control surface. The notebook contrasts loose prompting with contract-style prompting and shows how ambiguity in objective, evidence, and output structure creates institutional risk.
Open Notebook
Chapter 2 — Prompt Contracts and Schema-First Outputs
Converts prompts into explicit contracts with required fields, disallowed behavior, and machine-checkable output schemas. The goal is deterministic reviewability: facts, assumptions, open items, and draft outputs must be structurally separable.
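A machine-checkable output schema can be sketched as a simple structural validator. The key names below (facts, assumptions, open_items, draft) follow the separation described above, but the exact schema is an assumption for illustration.

```python
# Minimal structural validator; key names are illustrative assumptions.
EXPECTED = {"facts": list, "assumptions": list, "open_items": list, "draft": str}

def check_output(payload: dict) -> list:
    """Return structural violations: missing keys, wrong types, unknown keys."""
    errors = []
    for key, typ in EXPECTED.items():
        if key not in payload:
            errors.append(f"missing:{key}")
        elif not isinstance(payload[key], typ):
            errors.append(f"type:{key}")
    errors.extend(f"unknown:{k}" for k in payload if k not in EXPECTED)
    return errors

good = {"facts": ["Revenue fell 4%"], "assumptions": [],
        "open_items": ["Confirm FX impact"], "draft": "..."}
bad = {"facts": "Revenue fell 4%", "draft": "..."}
check_output(good)  # []
check_output(bad)   # ['type:facts', 'missing:assumptions', 'missing:open_items']
```

Because the check is structural rather than stylistic, a reviewer can reject a run for schema drift without reading a single sentence of the draft.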
Open Notebook
Chapter 3 — Context Packing and Evidence Boundaries
Focuses on what enters the context window and what must remain outside it. The notebook treats inclusion and exclusion as first-class controls, with evidence packets, omission logs, and explicit handling of missing or conflicting information.
Open Notebook
Chapter 4 — Multi-Pass Prompting: Draft, Critique, Revise
Implements structured multi-pass prompting so that drafting, critique, and revision are separated into governed stages. The system is evaluated not by fluency, but by whether critique identifies unsupported claims and revision resolves them without hidden drift.
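The staged separation can be sketched as a small control loop with stubbed model calls. The stage functions below are toy placeholders standing in for governed model invocations, not the notebook's implementation.

```python
# Sketch of a draft -> critique -> revise loop; stage functions are stubs,
# not real model calls.
def run_multipass(draft_fn, critique_fn, revise_fn, evidence, max_passes=2):
    draft = draft_fn(evidence)
    history = [("draft", draft)]
    for _ in range(max_passes):
        issues = critique_fn(draft, evidence)   # e.g. unsupported claims
        history.append(("critique", issues))
        if not issues:
            break                               # critique found nothing: stop, don't pad
        draft = revise_fn(draft, issues)
        history.append(("revise", draft))
    return draft, history

# Toy stages: critique flags claims absent from the evidence; revise drops them.
draft_fn = lambda ev: "Claim A. Claim B."
critique_fn = lambda d, ev: [c for c in ("Claim A", "Claim B")
                             if c in d and c not in " ".join(ev)]
revise_fn = lambda d, issues: " ".join(s for s in d.split(". ")
                                       if not any(i in s for i in issues))
final, history = run_multipass(draft_fn, critique_fn, revise_fn,
                               evidence=["Claim A is supported"])
```

The history list is the point: every critique and every revision is recorded, so hidden drift between passes has nowhere to hide.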
Open Notebook
Chapter 5 — Prompt Hardening and Regression Discipline
Treats prompt improvement as change management. Revised prompts must be compared against prior versions under explicit tests, with failure logging, promotion thresholds, and regression-safe acceptance logic.
Open Notebook
Chapter 6 — Tool-Call and Process Discipline
Constrains how prompts instruct the model to use tools, retrieve data, or stop. The notebook emphasizes explicit sequencing, bounded permissions, and clean separation between generation, lookup, validation, and escalation.
Open Notebook
Chapter 7 — Role-Based Prompting and Institutional Voice
Explores how prompts impose role, audience, scope, and tone without collapsing into empty theatrics. The question is not whether the model sounds professional, but whether it respects mandate boundaries and preserves uncertainty under the assigned role.
Open Notebook
Chapter 8 — Evaluation Rubrics and Promotion Gates
Defines what counts as acceptable prompt performance. Rubrics, scorecards, and decision gates convert subjective “this feels better” judgments into explicit criteria for advance, revise, or reject decisions.
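A promotion gate can be sketched as a pure function from scorecard to decision. The rubric keys and thresholds below are illustrative assumptions, not the book's calibrated values.

```python
# Hypothetical promotion gate; thresholds and rubric keys are assumptions.
def gate(scorecard: dict, advance_at: float = 0.8, revise_at: float = 0.5) -> str:
    """Map a rubric scorecard (criterion -> score in [0, 1]) to a decision."""
    if any(v < 0 or v > 1 for v in scorecard.values()):
        raise ValueError("scores must lie in [0, 1]")
    mean = sum(scorecard.values()) / len(scorecard)
    if mean >= advance_at and min(scorecard.values()) >= revise_at:
        return "advance"                 # no single criterion may fail outright
    return "revise" if mean >= revise_at else "reject"

gate({"groundedness": 0.9, "schema_fit": 0.85, "boundary_respect": 0.8})  # 'advance'
gate({"groundedness": 0.9, "schema_fit": 0.2, "boundary_respect": 0.9})   # 'revise'
```

Note the minimum-score clause: a high average cannot rescue a prompt that fails one criterion badly, which is exactly the "this feels better" failure the gate exists to block.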
Open Notebook
Chapter 9 — Failure Modes, Refusal, and Safe Termination
Shows that a strong prompt system must know when to stop. Refusal, escalation, incomplete output, and verification task lists are treated as valid and often preferable outcomes when conditions are not satisfied.
Open Notebook
Chapter 10 — From Craft to Prompt Operations
Integrates the book into an operational model: versioning, testing, promotion, documentation, artifact retention, and disciplined human oversight. Prompting becomes an institutional workflow rather than an individual trick.
Open Notebook
What Every Run Produces (Minimum Artifact Standard)
Every governed notebook produces a reproducibility and review bundle. The point is not merely to save outputs. The point is to make the prompt run inspectable: what prompt was used, what evidence was passed, what constraints were active, what failed, and why the run was accepted or rejected.
Run Manifest
run_manifest.json records run identifiers, configuration,
environment fingerprint, and control settings so the experiment can be reproduced
instead of merely remembered.
Reproducibility is the difference between a prompt anecdote and a prompt system.
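A manifest writer might look like the sketch below. The field names and hashing scheme are assumptions chosen for illustration; only the intent (identifiers, config, environment fingerprint) comes from the description above.

```python
# Sketch of a run-manifest writer; field names are illustrative assumptions.
import hashlib
import json
import platform
import sys
import time

def write_manifest(path: str, run_id: str, config: dict) -> dict:
    manifest = {
        "run_id": run_id,
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "config": config,
        # Hash of the canonicalized config so drift is detectable at a glance.
        "config_hash": hashlib.sha256(
            json.dumps(config, sort_keys=True).encode()).hexdigest(),
        "environment": {"python": sys.version.split()[0],
                        "platform": platform.platform()},
    }
    with open(path, "w") as f:
        json.dump(manifest, f, indent=2)
    return manifest

m = write_manifest("run_manifest.json", "run-0001",
                   {"model": "example-model", "temperature": 0.0})
```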
Prompt Trace
prompts_log.jsonl or equivalent records the governed prompt flow:
system instructions, user packet structure, critique prompts, revision prompts,
and any hashes or redacted copies needed for audit.
A prompt you cannot inspect is a prompt you cannot govern.
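An append-only JSONL trace can be sketched as follows; the record shape (stage, hash, optionally redacted text) is an assumption consistent with the audit goals above.

```python
# Sketch of an append-only prompt trace in JSONL; record shape is an assumption.
import hashlib
import json

def log_prompt(path: str, stage: str, prompt: str, redact: bool = False) -> dict:
    record = {
        "stage": stage,  # e.g. "system", "draft", "critique", "revise"
        # Hash survives redaction, so auditors can still match prompt versions.
        "sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "prompt": "[REDACTED]" if redact else prompt,
    }
    with open(path, "a") as f:  # append-only: one JSON object per line
        f.write(json.dumps(record) + "\n")
    return record

log_prompt("prompts_log.jsonl", "system", "You are a bounded drafting assistant.")
log_prompt("prompts_log.jsonl", "draft", "Summarize only the evidence packet.", redact=True)
```

Hashing before redaction is the key design choice: confidential text can be withheld from the log while the audit trail still proves which prompt version ran.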
Final State + Decision
final_state.json captures the terminal state of the run.
decision.json records whether the prompt version should be
advanced, revised, or rejected.
Acceptance must follow criteria, not enthusiasm.
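Both artifacts can be written in one criteria-bound step, sketched below; the keys are illustrative assumptions.

```python
# Sketch of terminal-state and decision artifacts; keys are assumptions.
import json

def record_decision(final_state: dict, decision: str, rationale: str) -> None:
    # The decision vocabulary is closed: no free-text enthusiasm allowed.
    assert decision in {"advance", "revise", "reject"}
    with open("final_state.json", "w") as f:
        json.dump(final_state, f, indent=2)
    with open("decision.json", "w") as f:
        json.dump({"decision": decision, "rationale": rationale}, f, indent=2)

record_decision({"passes": 2, "open_items": 1}, "revise",
                "One unresolved open item; acceptance criteria not met.")
```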
Risk Log + Deliverables
risk_log.json records observed risks, triggered controls,
and unresolved issues. artifacts/ stores reports,
scorecards, tables, or other outputs created during the run.
Missing evidence should generate review tasks, not fabricated confidence.
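The "review tasks, not fabricated confidence" rule can be sketched directly: every piece of missing evidence becomes an explicit task in the risk log. The entry shape below is an assumption.

```python
# Sketch of a risk-log entry that converts missing evidence into review tasks;
# the entry shape is an illustrative assumption.
import json

def log_risks(path: str, missing_evidence: list, triggered_controls: list) -> dict:
    entry = {
        "triggered_controls": triggered_controls,
        "unresolved": missing_evidence,
        # Tasks for human reviewers, never filled-in claims.
        "review_tasks": [f"Verify: {item}" for item in missing_evidence],
    }
    with open(path, "w") as f:
        json.dump(entry, f, indent=2)
    return entry

entry = log_risks("risk_log.json", ["FX impact figure"],
                  ["refusal: unsupported-claim check"])
```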
Shared Governance Spine
Prompts Are Contracts
Each prompt must define objective, scope, output shape, prohibited behavior, and termination logic. Vague prompting is treated as uncontrolled prompting.
See Governance Spec
Facts vs Assumptions vs Open Items
Governed prompting requires structural separation of provided facts, generated assumptions, and unresolved questions. Silent inference creep is a core failure mode.
See Schema Discipline
Stop is a Success Condition
Strong prompt systems refuse, escalate, or terminate early when evidence, scope, or validation conditions are not satisfied. More text is not always better output.
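The stop-as-success rule reduces to a small decision function; the condition names below are assumptions for illustration.

```python
# Sketch of stop-as-success logic; condition names are illustrative assumptions.
def next_action(evidence_ok: bool, scope_ok: bool, validated: bool) -> str:
    if not evidence_ok:
        return "refuse"      # missing evidence: emit a verification task list
    if not scope_ok:
        return "escalate"    # out of mandate: hand off to a human reviewer
    if not validated:
        return "terminate"   # stop early rather than emit unverified output
    return "proceed"

next_action(evidence_ok=False, scope_ok=True, validated=True)  # 'refuse'
```

Note the ordering: evidence is checked before scope, and scope before validation, so the system always fails toward the safest available exit.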
See Safe Termination
Who This Repository Is For
MBA / Master of Finance Cohorts
For students who need to understand prompting as a professional discipline: explicit contracts, bounded evidence, structured outputs, and reviewable failure modes.
Finance and Corporate Practitioners
For analysts, strategists, and decision-support teams who need prompt systems that produce accountable drafts rather than polished but unverifiable narrative.
Researchers and Builders
For anyone designing prompt pipelines that must survive institutional review, version control, regression testing, and real operational scrutiny.
Licensing, Governance & AI Use Disclosure
Educational / Non-Reliance: All materials are provided for educational and research purposes only. Nothing in this repository constitutes investment, trading, legal, tax, accounting, audit, or compliance advice.
Not verified: Unless explicitly stated otherwise in a specific artifact, treat all outputs, claims, calculations, citations, summaries, classifications, and conclusions as Not verified.
Confidentiality and data hygiene: Do not paste confidential, proprietary, regulated, or personally identifying information into external systems. Use redaction, anonymization, and minimum-necessary input discipline by default.
No fabricated sources or claims: Zero tolerance for invented citations, unsupported numbers, fabricated policies, fictional terms, or ungrounded conclusions. When evidence is missing, the correct output is a verification task list.
License: This project is released under the MIT License. Preserve copyright and license notices.
Copyright (c) 2026 Alejandro Reynoso