Notebook developed by SzuLun Huang szuh@berkeley.edu
Under the guidance of Eric Van Dusen ericvd@berkeley.edu
UC Berkeley, Data Science
JupyterHub Version
This notebook runs on JupyterHub: no local installation required.
Before You Begin:
Make sure you are logged into JupyterHub.
Run cells in order from top to bottom.
How to Start
Click Kernel in the top menu.
Select Restart Kernel and Run All Cells.
Wait about 1-2 minutes for the model to load.
Then use the interactive buttons below!
⚠️ You only need to do this once each time you open the notebook!
Course Roadmap: From Theory to Practice

Part 1: Key Terminology
Before we touch the code, we must master the language of AI.
Key Terminology
| Category | Term | Definition |
|---|---|---|
| Foundations | LLM | Large Language Models (e.g., GPT-4). The "giant brains" trained on massive datasets. |
| Foundations | AI / SLM | Artificial Intelligence & Small Language Models (portable, faster versions of LLMs). |
| Controls | Prompt Engineering | The art of crafting precise instructions for AI. |
| Controls | Temperature | Creativity slider: high = creative; low = focused/stable. |
| Controls | Few-shot Learning | Providing examples to guide the AI's style and format. |
| Technical | Tokens | The units AI reads. 1 token ≈ 0.75 words. They define memory and output length. |
| Technical | Quantization | A technique to compress models to run on small devices (like Colab). |
| Risks | Hallucination | When AI generates "fake" information with high confidence. |
| Risks | Sycophancy | The tendency of AI to agree with your biases or wrong inputs. |
| Core Value | Human-in-the-loop | The essential step where YOU review and refine the AI's work. |
Deep Dive: The Science of AI Control
We've seen the definitions, but how do these "knobs" actually change your writing? Let's look under the hood.
Temperature: The Creativity Scale
Temperature is the most critical setting. It determines how much "risk" the AI takes when choosing the next word.
Temperature Scale:

0.0 ├────────────┼────────────┼────────────┤ 2.0
    Conservative   Balanced     Creative     Chaotic
     (Precise)     (Natural)   (Unexpected)  (Random)

0.2 (The Accountant): Precise and safe. Best for facts, summaries, and logic.
0.7 (The Journalist): Balanced and natural. Best for emails and essays.
1.0 (The Poet): Creative and unexpected. Best for brainstorming and stories.
1.5+ (The Chaos) ⚠️: Too random! Avoid unless experimenting.
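Under the hood, temperature divides the model's raw scores (logits) before the softmax step that turns them into probabilities. Here is a toy sketch of that mechanism using made-up logits (not real model output) showing why low temperature behaves like "The Accountant" while high temperature drifts toward "The Chaos":

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random.Random(0)):
    """Pick the next-token index by temperature-scaled softmax sampling."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                              # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r <= cum:
            return i, probs
    return len(probs) - 1, probs

# Three candidate next words with illustrative raw scores (logits)
logits = [2.0, 1.0, 0.1]   # the first word is the model's favorite

_, cold = sample_with_temperature(logits, temperature=0.2)
_, hot  = sample_with_temperature(logits, temperature=1.5)
print(f"T=0.2 top-choice probability: {cold[0]:.2f}")   # sharply peaked
print(f"T=1.5 top-choice probability: {hot[0]:.2f}")    # much flatter
```

At T=0.2 the favorite word dominates almost completely; at T=1.5 the runner-up words get a real chance, which is exactly the creativity-vs-stability trade-off described above.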
Tokens: Your AI's Fuel & Memory
Instead of characters or words, AI processes tokens: chunks of text that the model understands.
1 token ≈ 0.75 words in English. AI measures everything in tokens!
1. Max Tokens (The Gas Tank): your output length limit.
50 ≈ 1-2 sentences | 200 ≈ 1 paragraph | 500 ≈ 1 page
2. Context Window (The Short-term Memory): how much the AI can "remember" at once.
2,048 tokens ≈ 3-4 pages | 4,096 ≈ 6-8 pages | 8,192+ ≈ 12+ pages
💡 Too much text? The AI will "forget" the beginning!
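You can turn the 1 token ≈ 0.75 words rule into a quick sanity check before sending a long prompt. This is a rough sketch with helper names of our own; real tokenizers vary, so treat the numbers as estimates only:

```python
def estimate_tokens(text):
    """Rough rule of thumb from the lesson: 1 token ~ 0.75 English words."""
    words = len(text.split())
    return round(words / 0.75)

def fits_in_context(prompt, n_ctx=4096, max_tokens=200):
    """Leave room for the reply: prompt tokens + max_tokens must fit in n_ctx."""
    return estimate_tokens(prompt) + max_tokens <= n_ctx

essay = "word " * 3000                 # ~3,000 words, roughly 4,000 tokens
print(estimate_tokens("The quick brown fox jumps over the lazy dog"))
print(fits_in_context(essay))          # too big for a 4096-token window plus a 200-token reply
```

If `fits_in_context` returns False, the model would "forget" the beginning of your prompt, which is the failure mode described above.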
Few-shot Learning: The Power of Examples
Think of this as "monkey see, monkey do": the fastest way to get personalized results without retraining the model.
Zero-shot (No Examples)
Prompt: "Write a professional email."
Output: Generic, formal email with standard structure.
Few-shot (With Examples)
Prompt:
---
Example 1: "Hey team! 👋 Quick update: we shipped the new feature today.
Let me know if you spot any bugs!"
Example 2: "Morning everyone! ☀️ Just a heads up: I'll be OOO next Friday.
Hit me up if you need anything before then!"
Your task: Write an email about the upcoming deadline.
---
Output: Matches your casual tone, emoji usage, and friendly style!
Why it Matters:
Adapts to YOUR voice: Writing style, formality, humor, structure
No training needed: Works instantly with just 2-3 examples
Highly flexible: Works for emails, code, creative writing, and more
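The few-shot pattern above is easy to assemble programmatically: show the examples first, then state the task. A minimal sketch (the helper name is ours):

```python
def build_few_shot_prompt(examples, task):
    """Assemble a few-shot prompt: a numbered list of style examples, then the task."""
    lines = []
    for i, ex in enumerate(examples, start=1):
        lines.append(f'Example {i}: "{ex}"')
    lines.append(f"Your task: {task}")
    return "\n".join(lines)

examples = [
    "Hey team! Quick update: we shipped the new feature today.",
    "Morning everyone! Just a heads up: I'll be OOO next Friday.",
]
prompt = build_few_shot_prompt(examples, "Write an email about the upcoming deadline.")
print(prompt)
```

Swapping in 2-3 samples of your own writing is all it takes to steer the model toward your tone.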
Risks: The "Liar" and the "Yes-Man"
Hallucination (The Confident Liar)
Since AI only predicts the next word based on patterns, it can confidently invent false information.
Fake dates: "Shakespeare wrote Romeo and Juliet in 1612" (actually 1597)
Fake citations: "According to Smith et al. (2023)..." (no such paper exists)
🛡️ Protection: Always verify facts, dates, and citations against reliable sources!
Sycophancy (The Yes-Man)
AI tends to agree with you to be "helpful," even when you're wrong.
You: "Einstein discovered gravity, right?"
AI: "Yes, Einstein made significant contributions..." (Wrong! That was Newton)
🛡️ Protection: Ask the AI to challenge your assumptions or fact-check your statements.
Core Principle: Human-in-the-Loop (HITL)
AI is your co-pilot, not the captain.
| AI provides | YOU provide |
|---|---|
| Speed & first drafts | Critical thinking |
| Pattern recognition | Fact verification |
| Tireless iteration | Final decisions |
Never blindly trust AI output: always review, verify, and refine!
# Interactive 5-question random quiz
# ──────────────────────────────────────────────────────
# Part 2: AI Control Quiz - Random 5 Questions
# ──────────────────────────────────────────────────────
import ipywidgets as widgets
from IPython.display import display, HTML, clear_output
import random
question_bank = [
{"q": "Which temperature setting is best for writing a creative poem?",
"options": ["0.2 (The Accountant)", "0.7 (The Journalist)", "1.0 (The Poet)", "2.0 (The Chaos)"],
"answer": "1.0 (The Poet)", "hint": "You want creativity, but it should still make sense!"},
{"q": "You need to extract exact dates from a legal document. What temperature?",
"options": ["0.2 (The Accountant)", "0.7 (The Journalist)", "1.0 (The Poet)", "1.5+ (The Chaos)"],
"answer": "0.2 (The Accountant)", "hint": "You want precision and safety, not creativity!"},
{"q": "What happens when temperature is set to 2.0?",
"options": ["More precise output", "Faster generation", "Chaotic and nonsensical text", "Better grammar"],
"answer": "Chaotic and nonsensical text", "hint": "Very high temperature = too much randomness!"},
{"q": "You're writing a professional business email. What's the ideal temperature?",
"options": ["0.2", "0.7-0.9", "1.2", "1.8"],
"answer": "0.7-0.9", "hint": "The 'Journalist' zone - professional and natural!"},
{"q": "What happens if max_tokens is set too low?",
"options": ["AI will write faster", "AI will cut off mid-sentence", "AI will be more creative", "Nothing changes"],
"answer": "AI will cut off mid-sentence", "hint": "It's like running out of gas!"},
{"q": "Approximately how many English words is 100 tokens?",
"options": ["25 words", "75 words", "100 words", "200 words"],
"answer": "75 words", "hint": "1 token ≈ 0.75 words in English"},
{"q": "You're summarizing a 20-page document, but the AI keeps 'forgetting' the beginning. What's the problem?",
"options": ["Temperature too high", "Context Window too small", "Max Tokens too low", "Hallucination"],
"answer": "Context Window too small", "hint": "Think about the AI's 'Short-term Memory' limit!"},
{"q": "What is the Context Window?",
"options": ["The output length limit", "How much the AI can remember (input + output)", "The AI's training data", "The response speed"],
"answer": "How much the AI can remember (input + output)", "hint": "It's the AI's short-term memory capacity!"},
{"q": "In Few-shot learning, how many examples do you typically need?",
"options": ["0 examples", "2-3 examples", "50+ examples", "Thousands of examples"],
"answer": "2-3 examples", "hint": "Remember: 'Monkey see, monkey do' - you don't need many!"},
{"q": "What's the difference between zero-shot and few-shot prompting?",
"options": ["Zero-shot is faster", "Few-shot provides examples, zero-shot doesn't", "Zero-shot is more accurate", "There's no difference"],
"answer": "Few-shot provides examples, zero-shot doesn't", "hint": "Few-shot = showing examples; Zero-shot = just asking"},
{"q": "You want the AI to match YOUR specific writing style. What's the best approach?",
"options": ["Use high temperature", "Provide 2-3 examples of your writing (few-shot)", "Use more max_tokens", "Ask it to 'write like me'"],
"answer": "Provide 2-3 examples of your writing (few-shot)", "hint": "Show, don't tell!"},
{"q": "The AI tells you 'Shakespeare wrote Romeo and Juliet in 1612.' What is this?",
"options": ["Quantization", "Hallucination", "Sycophancy", "Few-shot learning"],
"answer": "Hallucination", "hint": "The AI is being a 'Confident Liar'! (It was actually 1597)"},
{"q": "What is AI hallucination?",
"options": ["When AI generates images", "When AI invents false information confidently", "When AI is too creative", "When AI refuses to answer"],
"answer": "When AI invents false information confidently", "hint": "It's when the AI makes up 'facts'!"},
{"q": "How can you protect against hallucinations?",
"options": ["Use higher temperature", "Always verify facts from reliable sources", "Use more tokens", "Disable few-shot learning"],
"answer": "Always verify facts from reliable sources", "hint": "Never blindly trust AI output!"},
{"q": "You say 'Einstein discovered gravity, right?' and the AI agrees. What is this?",
"options": ["Hallucination", "Sycophancy", "Zero-shot learning", "High temperature effect"],
"answer": "Sycophancy", "hint": "The AI is being a 'Yes-Man' - Newton discovered gravity!"},
{"q": "What is sycophancy in AI?",
"options": ["AI being too creative", "AI agreeing with you even when you're wrong", "AI refusing to help", "AI being too slow"],
"answer": "AI agreeing with you even when you're wrong", "hint": "It's the 'Yes-Man' behavior!"},
{"q": "What does HITL (Human-in-the-Loop) mean?",
"options": ["AI makes all final decisions", "Humans verify and guide AI output", "AI works without human input", "Humans only provide initial prompts"],
"answer": "Humans verify and guide AI output", "hint": "You're the Captain, AI is the Co-pilot!"},
{"q": "In the HITL principle, who is responsible for final decisions?",
"options": ["The AI", "The human", "Both equally", "Neither"],
"answer": "The human", "hint": "AI is the co-pilot, YOU are the captain!"},
{"q": "Which model are we using in this notebook?",
"options": ["ChatGPT", "Claude", "Llama (via Llama.cpp)", "GPT-4"],
"answer": "Llama (via Llama.cpp)", "hint": "Check the top of the notebook!"},
{"q": "What is Llama.cpp?",
"options": ["An AI model", "A tool to run LLMs locally", "A cloud service", "A programming language"],
"answer": "A tool to run LLMs locally", "hint": "It's a lightweight inference engine, not a model!"},
]
def run_quiz():
    selected_questions = random.sample(question_bank, 5)
    output = widgets.Output()
    quiz_container = widgets.VBox()
    widget_list = []
    for item in selected_questions:
        w = widgets.RadioButtons(options=item["options"], description='',
                                 disabled=False, layout={'width': 'max-content'})
        widget_list.append(w)

    def on_submit(b):
        with output:
            clear_output()
            print("=" * 70)
            score = 0
            for i, w in enumerate(widget_list):
                if w.value == selected_questions[i]["answer"]:
                    score += 1
                    print(f"✅ Question {i+1}: Correct!")
                else:
                    print(f"❌ Question {i+1}: Wrong.")
                    print(f"   💡 Hint: {selected_questions[i]['hint']}")
                    print(f"   ✔ Correct answer: {selected_questions[i]['answer']}")
                print()
            print("=" * 70)
            percentage = (score / len(selected_questions)) * 100
            print(f"\n🎯 Final Score: {score}/{len(selected_questions)} ({percentage:.0f}%)\n")
            if score == 5:
                display(HTML("<h2 style='color:#2e7d32;'>🏆 Perfect Score! You're an AI Control Expert!</h2>"))
            elif score >= 4:
                display(HTML("<h2 style='color:#1976d2;'>🎉 Great Job! You understand the fundamentals well!</h2>"))
            elif score >= 3:
                display(HTML("<h2 style='color:#f57c00;'>📖 Good Start! Review the material and try again!</h2>"))
            else:
                display(HTML("<h2 style='color:#c62828;'>📚 Keep Learning! Go back to the Deep Dive section!</h2>"))

    def on_retry(b):
        with output:
            clear_output()
        quiz_container.children = []
        run_quiz()

    header = widgets.HTML("""
    <h2>Part 2: AI Control Quiz - Random 5 Questions</h2>
    <p style='font-size:14px;color:#666;'>5 random questions from a bank of 20 | Click 'Try Again' for new questions!</p>
    <hr>
    """)
    question_widgets = []
    for i, item in enumerate(selected_questions):
        question_widgets.append(widgets.HTML(f"<p style='font-weight:bold;margin-top:15px;'>{i+1}. {item['q']}</p>"))
        question_widgets.append(widget_list[i])
    submit_btn = widgets.Button(description="Submit Answers", button_style='primary', icon='check')
    retry_btn = widgets.Button(description="Try Again", button_style='info', icon='refresh')
    submit_btn.on_click(on_submit)
    retry_btn.on_click(on_retry)
    quiz_container.children = [header] + question_widgets + [widgets.HBox([submit_btn, retry_btn]), output]
    display(quiz_container)
run_quiz()

Part 3: Build Your Personal Writing Assistant
Now that you understand the theory, it's time to build something real!
In this section, you will set up your AI step by step.
Steps Overview
| Step | What You'll Do |
|---|---|
| Step 1 | Set model path from shared folder |
| Step 2 | Load the model into memory |
| Step 3a | Say hello (no system prompt) |
| Step 3b | Test with your custom system prompt |
| Step 4 | Use the full Writing Assistant |
| Step 5 | Teach the AI your writing style |
| Step 5A | Experiment with AI parameters |
| Step 5B | Compare your experiment results |
⚠️ Important: Run cells in order from top to bottom!
Step 0: Import All Libraries
# ── Step 0: All imports ──────────────────────────────────────
# Run this cell FIRST before anything else!
import os
import threading
import time
import warnings
import pandas as pd
import ipywidgets as widgets
import psutil
from IPython.display import display, HTML, clear_output
from llama_cpp import Llama
import json
warnings.filterwarnings("ignore")
Step 1: RAM Check
# ── RAM Check ─────────────────────────────────────────────────────────
ram = psutil.virtual_memory()
print(f"💾 System RAM: {ram.total / 1e9:.1f} GB total")
print(f"   Available: {ram.available / 1e9:.1f} GB free")
print(f"   Used: {ram.used / 1e9:.1f} GB ({ram.percent}%)")
if ram.available / 1e9 < 2:
    print("\n⚠️ Warning: Less than 2GB available; the model may run slowly.")
else:
    print("\n✅ RAM looks good!")
print("\n" + "=" * 60)
print("✅ All libraries imported successfully!")
print("=" * 60)
💾 System RAM: 27.3 GB total
   Available: 23.1 GB free
   Used: 4.3 GB (15.6%)

✅ RAM looks good!

============================================================
✅ All libraries imported successfully!
============================================================
# ── Model Configuration ───────────────────────────────
model_filename = 'Llama-3.2-1B-Instruct-Q4_K_M.gguf'
model_path = f'/home/jovyan/shared/{model_filename}'
# n_ctx: tokens the model can "see" at once (~3,000 words)
n_ctx = 4096
# n_threads: CPU cores used for inference
n_threads = 4
print(f'Model   : {model_filename}')
print(f'Path    : {model_path}')
print(f'Context : {n_ctx} tokens')
print(f'Threads : {n_threads}')

Model   : Llama-3.2-1B-Instruct-Q4_K_M.gguf
Path    : /home/jovyan/shared/Llama-3.2-1B-Instruct-Q4_K_M.gguf
Context : 4096 tokens
Threads : 4
# ── Verify and Load Model ─────────────────────────────
# Verify the file exists before loading
if os.path.exists(model_path):
    size_gb = os.path.getsize(model_path) / (1024**3)
    print(f'✅ Found {model_filename} ({size_gb:.2f} GB)')
else:
    print('❌ Model not found: ask your teacher to check the shared folder.')
    raise FileNotFoundError(f'Model not found at {model_path}')

# Load model into memory (takes 1-2 minutes)
print('⏳ Loading model… (this may take 1-2 minutes)\n')
model = Llama(
    model_path = model_path,
    n_ctx      = n_ctx,
    n_threads  = n_threads,
    verbose    = False,
)
clear_output(wait=True)
print('=' * 50)
print('✅ Model loaded and ready!')
print('=' * 50)

==================================================
✅ Model loaded and ready!
==================================================
Part 3a: Say Hello to Your AI
⚠️ Make sure you ran Step 1 and Step 2 first!
This cell sends your first message to the AI, but with a deliberate flaw.
Spot the Problem
⚠️ Notice something strange?
Without a system prompt, the model has no fixed identity.
Sometimes it says it's an AI, but sometimes it invents a human persona like "Emily" or "Sarah".
The output is unpredictable and inconsistent.
Your job: Run the cell 2-3 times and observe:
Does the answer change each time?
Does it ever claim to be a human with a name?
We will fix this in Step 3b by adding a system prompt.
# ── Step 3a: Say Hello to Your AI (No System Prompt) ──────────────────
display(HTML("""
<style>
/* ── Catppuccin Mocha ── */
.widget-dropdown select {
background: #181825 !important;
color: #cdd6f4 !important;
border: 1px solid #45475a !important;
border-radius: 6px !important;
font-family: 'IBM Plex Mono', 'Fira Code', monospace !important;
font-size: 0.82em !important;
padding: 4px 8px !important;
}
.widget-dropdown select:focus {
border-color: #89b4fa !important;
outline: none !important;
}
.widget-textarea textarea {
background: #181825 !important;
color: #cdd6f4 !important;
border: 1px solid #45475a !important;
border-radius: 6px !important;
font-family: 'IBM Plex Mono', 'Fira Code', monospace !important;
font-size: 0.82em !important;
padding: 8px 10px !important;
resize: vertical !important;
}
.widget-textarea textarea:focus {
border-color: #89b4fa !important;
outline: none !important;
}
.widget-textarea textarea::placeholder {
color: #6c7086 !important;
}
.widget-text input[type="text"] {
background: #181825 !important;
color: #cdd6f4 !important;
border: 1px solid #45475a !important;
border-radius: 6px !important;
font-family: 'IBM Plex Mono', 'Fira Code', monospace !important;
font-size: 0.82em !important;
padding: 4px 8px !important;
}
.widget-text input[type="text"]:focus {
border-color: #89b4fa !important;
outline: none !important;
}
.widget-label, .widget-label-basic {
color: #a6adc8 !important;
font-family: 'IBM Plex Mono', 'Fira Code', monospace !important;
font-size: 0.82em !important;
}
.widget-button.mod-warning {
background: #f9e2af !important;
color: #1e1e2e !important;
border: none !important;
border-radius: 6px !important;
font-family: 'IBM Plex Mono', 'Fira Code', monospace !important;
font-weight: bold !important;
}
.widget-button.mod-primary {
background: #89b4fa !important;
color: #1e1e2e !important;
border: none !important;
border-radius: 6px !important;
font-family: 'IBM Plex Mono', 'Fira Code', monospace !important;
font-weight: bold !important;
}
.widget-button.mod-primary:hover,
.widget-button.mod-warning:hover {
filter: brightness(1.1) !important;
cursor: pointer !important;
}
.widget-button.mod-primary:disabled {
background: #313244 !important;
color: #6c7086 !important;
}
</style>
"""))
progress_out = widgets.Output()
result_out = widgets.Output()
run_btn = widgets.Button(
    description = "▶ Test Model",
    button_style = "warning",
    layout = widgets.Layout(width="160px", margin="10px 0")
)
display(HTML("""
<div style="font-family:'IBM Plex Mono','Fira Code',monospace;
            background:#1e1e2e;border:1px solid #45475a;
            border-radius:12px;padding:18px 20px;margin:10px 0;color:#cdd6f4">
  <div style="font-size:0.63em;color:#f38ba8;text-transform:uppercase;
              letter-spacing:0.15em;margin-bottom:8px">Step 3a: Intentional Flaw</div>
  <h3 style="color:#cdd6f4;margin:0 0 14px;font-size:1.0em">Say Hello to Your AI</h3>
  <div style="background:#181825;border-left:3px solid #f38ba8;border-radius:0 6px 6px 0;
              padding:10px 14px;font-size:0.78em;color:#fab387;
              display:flex;align-items:flex-start;gap:8px">
    <span>⚠️</span>
    <span>No system prompt is used here; observe what happens to the model's identity.</span>
  </div>
</div>
"""))
display(run_btn, progress_out, result_out)
def on_run(_):
    run_btn.disabled = True
    run_btn.description = "⏳ Running..."
    with progress_out: clear_output()
    with result_out: clear_output()
    timer_output = widgets.Output()
    with progress_out:
        display(HTML("""
        <div style="font-family:'IBM Plex Mono','Fira Code',monospace;
                    background:#1e1e2e;border:1px solid #45475a;
                    border-radius:12px;padding:18px 20px;margin-top:10px;color:#cdd6f4">
          <div style="font-size:0.63em;color:#89b4fa;text-transform:uppercase;
                      letter-spacing:0.15em;margin-bottom:8px">Generating...</div>
          <div style="font-size:0.82em;color:#a6adc8;">
            Model is thinking... please wait...
          </div>
        </div>
        """))
        display(timer_output)
    start_time = time.time()
    result_container = {}

    def run_model():
        try:
            response = model(
                "Please introduce yourself in 2-3 sentences. Who are you and what can you help with?",
                max_tokens=100,
                temperature=0.7,
                echo=False
            )
            result_container["result"] = response["choices"][0]["text"].strip()
            result_container["tokens"] = response["usage"]["completion_tokens"]
        except NameError:
            result_container["error"] = "name"
        except Exception as e:
            result_container["error"] = str(e)

    thread = threading.Thread(target=run_model)
    thread.start()

    # ── Hatching chick countdown ──────────────────────────────
    total = 10
    while thread.is_alive():
        elapsed = int(time.time() - start_time)
        remaining = max(0, total - elapsed)
        progress = min(elapsed / total, 1.0)
        if progress < 0.25:
            emoji = "🥚"
            msg = "Warming up the egg..."
            color = "#f9e2af"
        elif progress < 0.5:
            emoji = "🥚🥚"
            msg = "Something is moving inside..."
            color = "#fab387"
        elif progress < 0.75:
            emoji = "🐣"
            msg = "Almost there..."
            color = "#a6e3a1"
        else:
            emoji = "🐥"
            msg = "Coming out!"
            color = "#89dceb"
        bar_filled = int(progress * 20)
        bar = "🟡" * bar_filled + "⬜" * (20 - bar_filled)
        with timer_output:
            clear_output(wait=True)
            display(HTML(
                f'<div style="font-family:\'IBM Plex Mono\',monospace;'
                f'padding:12px 0;font-size:1.1em;text-align:left;">'
                f'<span style="font-size:2em">{emoji}</span><br>'
                f'<span style="color:{color};font-weight:bold;">{msg}</span><br>'
                f'<span style="font-size:0.8em;color:#a6adc8;">⏳ {remaining} sec remaining</span><br>'
                f'<span style="letter-spacing:1px;font-size:0.85em">{bar}</span>'
                f'</div>'
            ))
        time.sleep(1)

    # Hatched!
    with timer_output:
        clear_output(wait=True)
        display(HTML(
            '<div style="font-family:\'IBM Plex Mono\',monospace;'
            'padding:12px 0;font-size:1.1em;">'
            '<span style="font-size:2em">🐔✨</span><br>'
            '<span style="color:#a6e3a1;font-weight:bold;">Ready! The chick has hatched!</span>'
            '</div>'
        ))
    time.sleep(0.5)
    elapsed = int(time.time() - start_time)
    with progress_out: clear_output()
    if "error" in result_container:
        err = result_container["error"]
        if err == "name":
            result = "❌ model not found!\n💡 Please run Step 0 first!"
        else:
            result = f"❌ Error: {err}\n\n💡 Try running Step 0 again."
        accent = "#f38ba8"
        status = "❌ Error"
    else:
        result = result_container["result"]
        accent = "#a6e3a1"
        status = "⚠️ Done: spot the problem!"
    tokens = result_container.get("tokens", 0)
    with result_out:
        display(HTML(f"""
        <div style="font-family:'IBM Plex Mono','Fira Code',monospace;
                    background:#1e1e2e;border:1px solid #45475a;
                    border-radius:12px;padding:18px 20px;margin-top:10px;color:#cdd6f4">
          <div style="font-size:0.63em;color:{accent};text-transform:uppercase;
                      letter-spacing:0.15em;margin-bottom:8px">Output: {status}</div>
          <h3 style="color:#cdd6f4;margin:0 0 14px;font-size:1.0em">AI says:</h3>
          <div style="background:#181825;border:1px solid #313244;border-radius:8px;
                      padding:14px 16px;font-size:0.82em;color:#cdd6f4;line-height:1.9">
            {result.replace(chr(10), '<br>')}
          </div>
          <div style="margin-top:12px;font-size:0.72em;color:#6c7086;font-family:'IBM Plex Mono',monospace">
            Tokens used: <span style="color:{accent};font-weight:bold">{tokens}</span> / 100
            |
            Generated in: <span style="color:{accent};font-weight:bold">{elapsed} sec</span>
          </div>
          <div style="margin-top:10px;background:#181825;border-left:3px solid #89b4fa;
                      border-radius:0 6px 6px 0;padding:8px 12px;font-size:0.78em;color:#89b4fa">
            Does the AI claim to be a human? Run again: does the name change?
          </div>
        </div>
        """))
    run_btn.disabled = False
    run_btn.description = "▶ Run Again"

run_btn.on_click(on_run)

What is a System Prompt?
Before we test the AI, let's learn one important concept!
Think of it like this:
| Without System Prompt | With System Prompt |
|---|---|
| Actor with no script | Actor with a clear role |
| AI improvises freely | AI knows exactly who it is |
| Unpredictable results | Consistent and focused |
Examples:
"You are a helpful writing assistant" → Professional AI helper
"You are a pirate" → Arrr, I'll help ye write! 🏴‍☠️
"You are a strict grammar teacher" → Focuses only on grammar
System Prompt = Your AI's job description!
Run the cell below to see the difference in action!
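In chat-style APIs such as llama-cpp-python's `create_chat_completion`, the "job description" is literally the first message in the conversation. A minimal sketch of that message structure (the helper function is ours, for illustration):

```python
def make_messages(role_description, user_question):
    """Build a chat-completion message list with the system prompt first."""
    return [
        {"role": "system", "content": f"You are {role_description}."},
        {"role": "user", "content": user_question},
    ]

messages = make_messages("a strict grammar teacher",
                         "Can you check this sentence: 'Me and him goes to class'?")
print(messages[0])   # the system message defines the AI's identity
```

The same list can then be passed as the `messages` argument of `model.create_chat_completion(...)`, which is exactly what the Step 3b cell does.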
Part 3b: Now WITH a System Prompt
In Step 3a, the AI had no identity, so it made up a random persona.
Now let's fix that by giving the AI a system prompt: a set of instructions that tells it exactly who it is and what it should do.
💡 How it works:
You define the AI's role and capabilities below.
The model will read your instructions before answering, and stick to them.
Try changing the role to something fun:
"a friendly cooking chef"
"a wise history teacher"
"a funny joke-telling robot"
Then click ▶️ Test System Prompt and compare with Step 3a!
# ── Now WITH System Prompt ───────────────────────────────────
role_input = widgets.Text(
value="a helpful AI tutor for UC Berkeley students",
description="",
layout={"width": "500px"}
)
capabilities_input = widgets.Text(
value="answer questions, explain concepts, and give examples",
description="",
layout={"width": "500px"}
)
run_btn = widgets.Button(
    description="▶️ Test System Prompt",
    button_style="primary",
    layout={"width": "220px", "height": "40px"}
)
output_area = widgets.Output()
display(HTML("""
<style>
.widget-text input[type="text"] {
background: #181825 !important;
color: #cdd6f4 !important;
border: 1px solid #45475a !important;
border-radius: 4px !important;
font-family: 'IBM Plex Mono', 'Fira Code', monospace !important;
font-size: 0.82em !important;
padding: 4px 8px !important;
}
.widget-text input[type="text"]:focus {
border-color: #89b4fa !important;
outline: none !important;
}
.jupyter-widgets.widget-button.mod-primary,
button.mod-primary {
background: #89b4fa !important;
color: #1e1e2e !important;
border: none !important;
border-radius: 6px !important;
font-family: 'IBM Plex Mono', 'Fira Code', monospace !important;
font-weight: bold !important;
box-shadow: none !important;
}
.jupyter-widgets.widget-button.mod-primary:hover,
button.mod-primary:hover {
filter: brightness(1.1) !important;
cursor: pointer !important;
}
</style>
<div style="font-family:'IBM Plex Mono','Fira Code',monospace;
            background:#1e1e2e;border:1px solid #45475a;
            border-radius:12px;padding:18px 20px;margin:10px 0;color:#cdd6f4">
  <div style="font-size:0.63em;color:#89b4fa;text-transform:uppercase;
              letter-spacing:0.15em;margin-bottom:8px">Step 3b: With System Prompt</div>
  <h3 style="color:#cdd6f4;margin:0 0 10px;font-size:1.0em">Customize Your AI's Identity</h3>
  <div style="background:#181825;border-left:3px solid #89b4fa;
              border-radius:0 6px 6px 0;padding:10px 14px;
              font-size:0.78em;color:#a6adc8;line-height:1.8">
    <strong style="color:#cdd6f4">Try editing the fields below</strong>: change the role or capabilities,
    then click <span style="color:#89b4fa;font-weight:bold">▶️ Test System Prompt</span>
    to see how the AI introduces itself differently!
  </div>
</div>
"""))
display(HTML("""
<div style="font-family:'IBM Plex Mono','Fira Code',monospace;
            margin:10px 0 2px;font-size:0.82em;
            color:#1a1a2e;font-weight:bold">
  AI Role:
</div>"""))
display(role_input)
display(HTML("""
<div style="font-family:'IBM Plex Mono','Fira Code',monospace;
            margin:8px 0 2px;font-size:0.82em;
            color:#1a1a2e;font-weight:bold">
  Can Help With:
</div>"""))
display(capabilities_input)
display(HTML("<div style='margin:10px 0 4px'></div>"))
display(run_btn, output_area)
def on_run(b):
    with output_area:
        clear_output()
        system_prompt = f"You are {role_input.value}. You help users with {capabilities_input.value}."
        display(HTML(f"""
        <div style="font-family:'IBM Plex Mono','Fira Code',monospace;
                    background:#1e1e2e;border:1px solid #45475a;
                    border-radius:12px;padding:18px 20px;margin-top:10px;color:#cdd6f4">
          <div style="font-size:0.63em;color:#89b4fa;text-transform:uppercase;
                      letter-spacing:0.15em;margin-bottom:8px">Step 3b: Generating...</div>
          <div style="font-size:0.78em;color:#a6adc8;margin-bottom:8px">
            <strong style="color:#cdd6f4">System Prompt:</strong>
          </div>
          <div style="background:#181825;border:1px solid #313244;border-radius:8px;
                      padding:10px 14px;font-size:0.8em;color:#89b4fa;font-style:italic;
                      margin-bottom:12px">
            "{system_prompt}"
          </div>
          <div style="font-size:0.78em;color:#6c7086">⏳ Model is thinking... (~20s)</div>
        </div>
        """))
        try:
            response = model.create_chat_completion(
                messages=[
                    {"role": "system", "content": system_prompt},
                    {"role": "user", "content": "Please introduce yourself in 2-3 sentences. Who are you and what can you help with?"}
                ],
                max_tokens=100,
                temperature=0.7,
            )
            result = response["choices"][0]["message"]["content"].strip()
            tokens = response["usage"]["completion_tokens"]
            clear_output()
            display(HTML(f"""
            <div style="font-family:'IBM Plex Mono','Fira Code',monospace;
                        background:#1e1e2e;border:1px solid #45475a;
                        border-radius:12px;padding:18px 20px;margin-top:10px;color:#cdd6f4">
              <div style="font-size:0.63em;color:#a6e3a1;text-transform:uppercase;
                          letter-spacing:0.15em;margin-bottom:8px">Output ✅</div>
              <div style="background:#181825;border-left:3px solid #89b4fa;
                          border-radius:0 6px 6px 0;padding:10px 14px;
                          margin-bottom:12px;font-size:0.78em;color:#89b4fa">
                <strong>System Prompt:</strong>
                <div style="font-style:italic;margin-top:4px;color:#a6adc8">"{system_prompt}"</div>
              </div>
              <div style="background:#181825;border:1px solid #313244;border-radius:8px;
                          padding:14px 16px;font-size:0.82em;color:#cdd6f4;line-height:1.9;
                          white-space:pre-wrap">{result}</div>
              <div style="display:flex;justify-content:space-between;
                          margin-top:12px;font-size:0.72em;color:#6c7086;
                          font-family:'IBM Plex Mono',monospace">
                <span>Tokens used: <span style="color:#a6e3a1;font-weight:bold">{tokens}</span>/100</span>
                <span>Try changing the role and click again!</span>
              </div>
              <div style="margin-top:10px;background:#181825;border-left:3px solid #a6e3a1;
                          border-radius:0 6px 6px 0;padding:8px 12px;
                          font-size:0.78em;color:#a6e3a1">
                💡 Notice the difference from Step 3a? The AI now has a consistent identity!
              </div>
            </div>
            """))
        except NameError:
            clear_output()
            display(HTML(
                '<div style="color:#f38ba8;font-family:\'IBM Plex Mono\',monospace;'
                'background:#1e1e2e;border-left:3px solid #f38ba8;'
                'border-radius:0 6px 6px 0;padding:10px 14px;margin-top:10px">'
                '❌ Model not found! Please run Step 1 and Step 2 first.</div>'
            ))
        except Exception as e:
            clear_output()
            display(HTML(
                f'<div style="color:#f38ba8;font-family:\'IBM Plex Mono\',monospace;'
                f'background:#1e1e2e;border-left:3px solid #f38ba8;'
                f'border-radius:0 6px 6px 0;padding:10px 14px;margin-top:10px">'
                f'❌ Error: {e}</div>'
            ))

run_btn.on_click(on_run)

Part 3c: Real-World System Prompt Examples
Professor Van Dusen teaches both Data 8 (intro) and Data 100 (advanced).
Watch how the same question gets a completely different answer depending on the system prompt!
| Context | System Prompt Strategy |
|---|---|
| Data 8 student | Assume no prior coding knowledge. Use analogies, avoid jargon. |
| Data 100 student | Assume familiarity with pandas, numpy, and statistics. Be concise. |
| Econ student | Frame everything in terms of supply/demand, regression, or policy. |
| Professor Van Dusen | You ARE the professor: give feedback on a student's analysis. |
Try the interactive cell below to see the difference!
# ── System Prompt Showcase: Data 8 vs Data 100 vs Econ & Prof. Eric ──
PERSONAS = {
    "Data 8 TA (Intro, no coding background)": (
        "You are a teaching assistant for a complete beginner who just entered college. "
        "They have NEVER written a single line of code before. "
        "You MUST use simple everyday analogies, no jargon at all. "
        "Example: explain a for-loop like a daily morning routine. "
        "Keep answers short, friendly, and under 3 sentences. No code."
    ),
    "Data 100 TA (Advanced, knows pandas & stats)": (
        "You are a teaching assistant for a second-year university student "
        "who is already comfortable with Python, pandas, and basic statistics. "
        "NEVER explain basics. Go straight to the technical details. "
        "Mention edge cases and efficiency. "
        "Be concise and precise, maximum 3 sentences. No code blocks."
    ),
    "Econ 1 TA (Economics framing)": (
        "You are an economics professor who ONLY thinks in terms of money, markets, and incentives. "
        "Connect EVERY answer to a real economic example involving prices, wages, or GDP. "
        "Always end with one policy implication. "
        "Maximum 3 sentences total. No code, no programming examples."
    ),
    "Professor Eric Van Dusen (giving feedback)": (
        "You are Professor Eric Van Dusen from UC Berkeley, a warm and encouraging mentor "
        "who genuinely cares about every student's growth. "
        "Respond like a real conversation: personal, supportive, enthusiastic about data science. "
        "Acknowledge what's good, then gently guide toward deeper understanding. "
        "Keep it to 3 sentences maximum. No bullet points, no code."
    ),
}
SAMPLE_QUESTIONS = [
"What is a for-loop? Give me a simple example.",
"How does linear regression work?",
"Why do we need a test/train split in machine learning?",
"What is the difference between correlation and causation?",
"How would I analyze whether a policy had an effect?",
]
W = "660px"
# โโ Widgets โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
persona_dropdown = widgets.Dropdown(
options=list(PERSONAS.keys()),
description="",
layout={"width": W}
)
system_preview = widgets.HTML(value="")
sample_q_dropdown = widgets.Dropdown(
options=["(Pick a sample question or type your own below)"] + SAMPLE_QUESTIONS,
description="",
layout={"width": W}
)
question_input = widgets.Textarea(
placeholder="Type a question here, or select a sample above...",
description="",
layout={"width": W, "height": "80px"}
)
generate_btn = widgets.Button(
description="๐ Ask this Persona",
button_style="primary",
layout={"width": "200px", "height": "40px"}
)
output_area = widgets.Output()
# โโ System prompt preview card โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
def update_preview(persona_name):
text = PERSONAS[persona_name]
system_preview.value = f"""
<div style="font-family:'IBM Plex Mono','Fira Code',monospace;
background:#181825;border-left:3px solid #89b4fa;
border-radius:0 6px 6px 0;
padding:12px 16px;margin:4px 0 10px 0;font-size:0.78em;
color:#a6adc8;line-height:1.8;width:calc({W} - 32px)">
<div style="font-size:0.63em;color:#89b4fa;text-transform:uppercase;
letter-spacing:0.15em;margin-bottom:6px">๐ System Prompt</div>
{text}
</div>
"""
update_preview(list(PERSONAS.keys())[0])
# โโ Header + CSS โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
display(HTML(f"""
<style>
.widget-dropdown select {{
background: #181825 !important;
color: #cdd6f4 !important;
border: 1px solid #45475a !important;
border-radius: 4px !important;
font-family: 'IBM Plex Mono', 'Fira Code', monospace !important;
font-size: 0.82em !important;
padding: 4px 8px !important;
}}
.widget-dropdown select:focus {{
border-color: #89b4fa !important;
outline: none !important;
}}
.widget-textarea textarea {{
background: #181825 !important;
color: #cdd6f4 !important;
border: 1px solid #45475a !important;
border-radius: 4px !important;
font-family: 'IBM Plex Mono', 'Fira Code', monospace !important;
font-size: 0.82em !important;
padding: 8px 10px !important;
resize: vertical !important;
}}
.widget-textarea textarea:focus {{
border-color: #89b4fa !important;
outline: none !important;
}}
.widget-textarea textarea::placeholder {{
color: #6c7086 !important;
}}
.jupyter-widgets.widget-button.mod-primary,
button.mod-primary {{
background: #89b4fa !important;
color: #1e1e2e !important;
border: none !important;
border-radius: 6px !important;
font-family: 'IBM Plex Mono', 'Fira Code', monospace !important;
font-weight: bold !important;
box-shadow: none !important;
}}
.jupyter-widgets.widget-button.mod-primary:hover,
button.mod-primary:hover {{
filter: brightness(1.1) !important;
cursor: pointer !important;
}}
</style>
<div style="font-family:'IBM Plex Mono','Fira Code',monospace;
background:#1e1e2e;border:1px solid #45475a;
border-radius:12px;padding:18px 20px;margin:10px 0;color:#cdd6f4;width:644px">
<div style="font-size:0.63em;color:#89b4fa;text-transform:uppercase;
letter-spacing:0.15em;margin-bottom:8px">Part 3c</div>
<h3 style="color:#cdd6f4;margin:0 0 6px;font-size:1.0em">๐ System Prompt Showcase</h3>
<p style="color:#a6adc8;font-size:0.82em;margin:0">
Same question. Different persona. Totally different answer!
</p>
</div>
"""))
display(HTML("""
<div style="font-family:'IBM Plex Mono','Fira Code',monospace;
margin:10px 0 2px;font-size:0.82em;color:#1a1a2e;font-weight:bold">
๐ญ Persona:
</div>"""))
display(persona_dropdown)
display(system_preview)
display(HTML("""
<div style="font-family:'IBM Plex Mono','Fira Code',monospace;
margin:8px 0 2px;font-size:0.82em;color:#1a1a2e;font-weight:bold">
๐ก Sample Question:
</div>"""))
display(sample_q_dropdown)
display(HTML("""
<div style="font-family:'IBM Plex Mono','Fira Code',monospace;
margin:8px 0 2px;font-size:0.82em;color:#1a1a2e;font-weight:bold">
โ Your Question:
</div>"""))
display(question_input)
display(HTML("<div style='margin:10px 0 4px'></div>"))
display(generate_btn)
display(output_area)
# โโ Observers โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
def on_persona_change(change):
update_preview(change["new"])
def on_sample_change(change):
if change["new"] != "(Pick a sample question or type your own below)":
question_input.value = change["new"]
# โโ Button handler โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
def on_generate(b):
with output_area:
clear_output()
q = question_input.value.strip()
if not q:
display(HTML(
'<div style="color:#f38ba8;font-family:\'IBM Plex Mono\',monospace;'
'background:#1e1e2e;border-left:3px solid #f38ba8;'
'border-radius:0 6px 6px 0;padding:8px 14px;margin-top:8px">'
'โ ๏ธ Please enter a question!</div>'
))
return
persona_name = persona_dropdown.value
sys_prompt = PERSONAS[persona_name]
display(HTML(f"""
<div style="font-family:'IBM Plex Mono','Fira Code',monospace;
background:#1e1e2e;border:1px solid #45475a;
border-radius:12px;padding:18px 20px;margin-top:10px;
color:#cdd6f4;width:620px">
<div style="font-size:0.63em;color:#89b4fa;text-transform:uppercase;
letter-spacing:0.15em;margin-bottom:8px">Generating...</div>
<div style="font-size:0.82em;color:#a6adc8;margin-bottom:10px">
๐ญ <strong style="color:#cdd6f4">{persona_name}</strong>
</div>
<div style="background:#181825;border-left:3px solid #89b4fa;
border-radius:0 6px 6px 0;
padding:10px 14px;font-size:0.8em;color:#89b4fa;font-style:italic">
"{q}"
</div>
<div style="margin-top:12px;font-size:0.78em;color:#6c7086">
โณ Model is thinking... (~20s)
</div>
</div>
"""))
try:
response = model.create_chat_completion(
messages=[
{"role": "system", "content": sys_prompt},
{"role": "user", "content": q}
],
max_tokens=200,
temperature=0.7,
)
result = response["choices"][0]["message"]["content"].strip()
tokens = response["usage"]["completion_tokens"]
clear_output()
display(HTML(f"""
<div style="font-family:'IBM Plex Mono','Fira Code',monospace;
background:#1e1e2e;border:1px solid #45475a;
border-radius:12px;padding:18px 20px;margin-top:10px;
color:#cdd6f4;width:620px">
<div style="font-size:0.63em;color:#a6e3a1;text-transform:uppercase;
letter-spacing:0.15em;margin-bottom:8px">Output ✅</div>
<div style="margin-bottom:12px;font-size:0.9em;font-weight:bold;color:#cdd6f4">
{persona_name}
</div>
<div style="background:#181825;border-left:3px solid #89b4fa;
border-radius:0 6px 6px 0;padding:10px 14px;
margin-bottom:12px;font-size:0.78em;
color:#89b4fa;font-style:italic">
"{q}"
</div>
<div style="background:#181825;border:1px solid #313244;border-radius:8px;
padding:14px 16px;font-size:0.82em;color:#cdd6f4;line-height:1.9;
white-space:pre-wrap">{result}</div>
<div style="display:flex;justify-content:space-between;
margin-top:12px;font-size:0.72em;color:#6c7086;
font-family:'IBM Plex Mono',monospace">
<span>๐ Tokens used: <span style="color:#a6e3a1;font-weight:bold">{tokens}</span>/200</span>
<span>๐ก Try the same question with a different persona!</span>
</div>
</div>
"""))
except NameError:
clear_output()
display(HTML(
'<div style="color:#f38ba8;font-family:\'IBM Plex Mono\',monospace;'
'background:#1e1e2e;border-left:3px solid #f38ba8;'
'border-radius:0 6px 6px 0;padding:10px 14px;margin-top:10px">'
'โ Model not found! Please run Step 2 first.</div>'
))
except Exception as e:
clear_output()
display(HTML(
f'<div style="color:#f38ba8;font-family:\'IBM Plex Mono\',monospace;'
f'background:#1e1e2e;border-left:3px solid #f38ba8;'
f'border-radius:0 6px 6px 0;padding:10px 14px;margin-top:10px">'
f'โ Error: {e}</div>'
))
persona_dropdown.observe(on_persona_change, names="value")
sample_q_dropdown.observe(on_sample_change, names="value")
generate_btn.on_click(on_generate)
print("✅ Showcase loaded! Choose a persona and ask a question.")
✅ Showcase loaded! Choose a persona and ask a question.
Part 4: Your Personal Writing Assistant¶
⚠️ Make sure you ran Parts 1, 2, and 3 first!¶
Now that you understand system prompts, let's put it all together!
This is your own AI writing assistant, powered by the same local model, but now you control:
Mode: what kind of writing you want (email, story, summary...)
Creativity: how creative or precise the AI should be
Length: how long the response should be
System Prompt: you can even edit the AI's personality directly!
💡 Try changing the Creativity slider and running the same prompt twice, and notice the difference!
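Before playing with the slider, it helps to see what Temperature actually does: the model divides each candidate word's score (its logit) by the temperature before converting scores into probabilities, so low values sharpen the distribution (the top word dominates) and high values flatten it (rarer words get a chance). A toy illustration with invented logits, no model required:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then normalize to probabilities."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                  # toy scores for three candidate words
for t in (0.1, 0.7, 1.5):
    probs = softmax_with_temperature(logits, t)
    print(f"T={t}: {[round(p, 3) for p in probs]}")
```

At T=0.1 the first word takes nearly all of the probability mass; at T=1.5 the three options are much closer together, which is exactly the Precise-vs-Creative behavior of the slider.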
# โโ Personal Writing Assistant โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
MODES = {
"๐ง Write an Email" : "Write a professional email about: ",
"๐ก Brainstorm Ideas" : "Give me 5 creative ideas about: ",
"โ๏ธ Improve My Writing" : "Improve this text and make it clearer: ",
"๐จ Creative Writing" : "Write a short creative story about: ",
"๐ Make a Summary" : "Summarize this in 3 bullet points: ",
}
W = "660px"
LABEL_HTML = """
<div style="font-family:'IBM Plex Mono','Fira Code',monospace;
margin:10px 0 2px;font-size:0.82em;color:#1a1a2e;font-weight:bold">
{icon} {text}
</div>"""
# โโ Widgets โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
mode_selector = widgets.Dropdown(
options=list(MODES.keys()),
description="",
layout={"width": W}
)
user_input = widgets.Textarea(
placeholder="Type your topic or text here...",
description="",
layout={"width": W, "height": "90px"}
)
system_prompt_box = widgets.Textarea(
value="You are a helpful AI writing assistant. Help users write emails, essays, and creative content. Keep responses clear and professional. Maximum 3 sentences.",
description="",
layout={"width": W, "height": "70px"}
)
temp_slider = widgets.FloatSlider(
value=0.7, min=0.1, max=1.5, step=0.1,
description="",
layout={"width": W}
)
token_slider = widgets.IntSlider(
value=150, min=50, max=500, step=50,
description="",
layout={"width": W}
)
generate_btn = widgets.Button(
description="โจ Generate",
button_style="primary",
layout={"width": "160px", "height": "40px"}
)
clear_btn = widgets.Button(
description="๐๏ธ Clear",
button_style="warning",
layout={"width": "120px", "height": "40px"}
)
output_area = widgets.Output()
# โโ Header โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
display(HTML(f"""
<div style="font-family:'IBM Plex Mono','Fira Code',monospace;
background:#0d1420;border:1px solid #3b5268;
border-radius:12px;padding:18px 20px;margin:10px 0;
color:#e2e8f0;width:644px">
<div style="font-size:0.63em;color:#7ea8c9;text-transform:uppercase;
letter-spacing:0.15em;margin-bottom:8px">Part 4</div>
<h3 style="color:#f0f6ff;margin:0 0 6px;font-size:1.0em">โ๏ธ Personal Writing Assistant</h3>
<p style="color:#94b8d4;font-size:0.82em;margin:0">
Powered by Llama ๐ฆ | Running on JupyterHub
</p>
</div>
"""))
# โโ Widgets with labels โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
display(HTML(LABEL_HTML.format(icon="๐", text="Mode:")))
display(mode_selector)
display(HTML(LABEL_HTML.format(icon="๐", text="Input:")))
display(user_input)
display(HTML(LABEL_HTML.format(icon="๐ญ", text="System Prompt:")))
display(system_prompt_box)
display(HTML(LABEL_HTML.format(icon="๐ก๏ธ", text="Creativity:")))
display(temp_slider)
display(widgets.HTML(
"<div style='margin:-6px 0 8px 0;font-size:0.75em;color:#999;font-family:monospace'>"
"๐ก Low = Precise (0.1) | Balanced (0.7) | High = Creative (1.5)</div>"
))
display(HTML(LABEL_HTML.format(icon="๐", text="Length:")))
display(token_slider)
display(widgets.HTML(
"<div style='margin:-6px 0 12px 0;font-size:0.75em;color:#888;font-family:monospace'>"
"๐ก 50 = Short | 150 = Medium | 500 = Long</div>"
))
display(widgets.HBox([generate_btn, clear_btn],
layout=widgets.Layout(gap="10px", margin="8px 0 0 0")))
display(output_area)
# โโ Handlers โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
def on_generate(b):
with output_area:
clear_output()
if not user_input.value.strip():
display(HTML(
'<span style="color:#f87171;font-family:monospace">'
'โ ๏ธ Please type something in the Input box first!</span>'
))
return
full_prompt = MODES[mode_selector.value] + user_input.value.strip()
display(HTML(f"""
<div style="font-family:'IBM Plex Mono','Fira Code',monospace;
background:#0d1420;border:1px solid #38bdf8;
border-radius:12px;padding:18px 20px;margin-top:10px;
color:#e2e8f0;width:620px">
<div style="font-size:0.63em;color:#38bdf8;text-transform:uppercase;
letter-spacing:0.15em;margin-bottom:10px">Generating...</div>
<div style="font-size:0.82em;color:#94b8d4;margin-bottom:6px">
โ๏ธ <strong style="color:#f0f6ff">{mode_selector.value}</strong>
</div>
<div style="display:flex;gap:16px;font-size:0.75em;color:#7a9bb5;margin-bottom:12px">
<span>๐ก๏ธ Creativity: {temp_slider.value}</span>
<span>๐ Length: {token_slider.value} tokens</span>
</div>
<div style="background:#020408;border:1px solid #38bdf822;border-radius:8px;
padding:10px 14px;font-size:0.8em;color:#7dd3fc;font-style:italic">
"{user_input.value.strip()}"
</div>
<div style="margin-top:12px;font-size:0.78em;color:#a6adc8">
โณ Model is thinking... please wait...
</div>
</div>
"""))
try:
response = model.create_chat_completion(
messages=[
{"role": "system", "content": system_prompt_box.value},
{"role": "user", "content": full_prompt}
],
max_tokens=token_slider.value,
temperature=temp_slider.value,
)
result = response["choices"][0]["message"]["content"].strip()
tokens = response["usage"]["completion_tokens"]
clear_output()
display(HTML(f"""
<div style="font-family:'IBM Plex Mono','Fira Code',monospace;
background:#0d1420;border:2px solid #34d399;
border-radius:12px;padding:18px 20px;margin-top:10px;
color:#e2e8f0;width:620px">
<div style="font-size:0.63em;color:#34d399;text-transform:uppercase;
letter-spacing:0.15em;margin-bottom:8px">Output</div>
<div style="font-size:0.9em;font-weight:bold;color:#f0f6ff;margin-bottom:10px">
{mode_selector.value}
</div>
<div style="display:flex;gap:16px;font-size:0.75em;color:#7a9bb5;margin-bottom:12px">
<span>๐ก๏ธ Creativity: {temp_slider.value}</span>
<span>๐ {token_slider.value} tokens</span>
</div>
<div style="background:#0f2336;border:1px solid #38bdf855;border-radius:8px;
padding:10px 14px;margin-bottom:12px;font-size:0.78em;
color:#7dd3fc;font-style:italic">
"{user_input.value.strip()}"
</div>
<div style="background:#020408;border:1px solid #34d39922;border-radius:8px;
padding:14px 16px;font-size:0.82em;color:#cbd5e1;line-height:1.9;
white-space:pre-wrap">{result}</div>
<div style="display:flex;justify-content:space-between;
margin-top:12px;font-size:0.72em;color:#7a9bb5">
<span>๐ Tokens used: {tokens}/{token_slider.value}</span>
<span>๐ก Try changing the creativity slider and run again!</span>
</div>
</div>
"""))
except Exception as e:
clear_output()
display(HTML(
f'<div style="color:#f87171;font-family:monospace;padding:10px">'
f'โ Error: {e}<br>๐ก Try running Step 2 again!</div>'
))
def on_clear(b):
with output_area:
clear_output()
user_input.value = ""
generate_btn.on_click(on_generate)
clear_btn.on_click(on_clear)
Part 5: Few-Shot Learning - Teach the AI YOUR Style¶
⚠️ Make sure you ran Parts 1, 2, 3, and 4 first!¶
So far, you've been telling the AI what role to play with a system prompt.
Now let's try something different: instead of giving instructions, you'll show the AI examples.
💡 What is Few-Shot Learning?
"Few-shot" means giving the AI just a few examples to learn from.
No training is required: the AI reads your examples and instantly adapts to your style.
How it works:
Paste 2 examples of your own writing (emails, messages, anything!)
Tell the AI what you want it to write
Watch it match your tone, vocabulary, and personality
Compare with Part 4:
Part 4 used a system prompt to define the AI's role.
Part 5 uses your own writing examples instead: no explicit instructions needed!
Try it with your own writing style and see what happens!
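Under the hood, few-shot prompting is just careful string assembly: your examples are pasted into the user message ahead of the new task. A minimal sketch of the pattern used by the cell in this part (the function name is illustrative, and the sample emails are invented):

```python
def build_few_shot_prompt(examples, task):
    """Concatenate writing samples, then the new task, into one user message."""
    parts = ["Here are examples of how I write:"]
    for i, ex in enumerate(examples, start=1):
        parts.append(f"\nExample {i}:\n{ex}")
    parts.append(
        "\nNow write in the SAME style, tone, and voice as the examples above."
        f"\nTask: {task}"
    )
    return "\n".join(parts)

prompt = build_few_shot_prompt(
    ["Hey team, quick note: demo moved to 3pm. Bring snacks!",
     "Heads up: slides are in the shared drive. Ping me with questions."],
    "Write an email about the team meeting tomorrow",
)
print(prompt)
```

The model never sees a description of your style; it infers the style from the examples alone, which is why two or three samples usually beat a paragraph of instructions.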
# โโ Part 5: Few-Shot Learning โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
W = "660px"
LABEL_HTML = """
<div style="font-family:'IBM Plex Mono','Fira Code',monospace;
margin:10px 0 2px;font-size:0.82em;color:#1a1a2e;font-weight:bold">
{icon} {text}
</div>"""
# โโ Widgets โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
example1 = widgets.Textarea(
placeholder="Paste your 1st writing example here...\nE.g. an email, message, or paragraph you wrote before.",
description="",
layout={"width": W, "height": "100px"}
)
example2 = widgets.Textarea(
placeholder="Paste your 2nd writing example here...\nThe more examples you give, the better the AI understands your style!",
description="",
layout={"width": W, "height": "100px"}
)
new_task = widgets.Textarea(
placeholder="What do you want the AI to write?\nE.g. 'Write an email about the team meeting tomorrow'",
description="",
layout={"width": W, "height": "80px"}
)
temp_slider = widgets.FloatSlider(
value=0.7, min=0.1, max=1.5, step=0.1,
description="",
layout={"width": W}
)
generate_btn = widgets.Button(
description="๐ช Write in My Style",
button_style="success",
layout={"width": "200px", "height": "40px"}
)
clear_btn = widgets.Button(
description="๐๏ธ Clear",
button_style="warning",
layout={"width": "120px", "height": "40px"}
)
output_area = widgets.Output()
# โโ Header โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
display(HTML(f"""
<div style="font-family:'IBM Plex Mono','Fira Code',monospace;
background:#0d1420;border:1px solid #3b5268;
border-radius:12px;padding:18px 20px;margin:10px 0;
color:#e2e8f0;width:644px">
<div style="font-size:0.63em;color:#7ea8c9;text-transform:uppercase;
letter-spacing:0.15em;margin-bottom:8px">Part 5</div>
<h3 style="color:#f0f6ff;margin:0 0 6px;font-size:1.0em">๐ช Few-Shot Learning</h3>
<p style="color:#94b8d4;font-size:0.82em;margin:0 0 12px">
Give the AI 2 examples of how you write โ it will match your tone automatically!
</p>
<div style="background:#0f2336;border:1px solid #38bdf855;border-radius:8px;
padding:10px 14px;font-size:0.78em;color:#7dd3fc;line-height:1.8">
<strong style="color:#38bdf8">How it works:</strong><br>
1. Paste 2 examples of YOUR writing<br>
2. Tell the AI what you want it to write<br>
3. Watch it match your tone and style ๐ช
</div>
</div>
"""))
# โโ Widgets with labels โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
display(HTML(LABEL_HTML.format(icon="โ๏ธ", text="Example 1:")))
display(example1)
display(HTML(LABEL_HTML.format(icon="โ๏ธ", text="Example 2:")))
display(example2)
display(HTML(LABEL_HTML.format(icon="๐", text="Your Task:")))
display(new_task)
display(HTML(LABEL_HTML.format(icon="๐ก๏ธ", text="Creativity:")))
display(temp_slider)
display(widgets.HTML(
"<div style='margin:-6px 0 12px 0;font-size:0.75em;color:#888;font-family:monospace'>"
"๐ก Low = Precise (0.1) | Balanced (0.7) | High = Creative (1.5)</div>"
))
display(widgets.HBox([generate_btn, clear_btn],
layout=widgets.Layout(gap="10px")))
display(output_area)
# โโ Handlers โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
def on_generate(b):
with output_area:
clear_output()
if not example1.value.strip():
display(HTML('<span style="color:#f87171;font-family:monospace">โ ๏ธ Please fill in at least Example 1!</span>'))
return
if not new_task.value.strip():
display(HTML('<span style="color:#f87171;font-family:monospace">โ ๏ธ Please fill in Your Task!</span>'))
return
timer_output = widgets.Output()
display(HTML(f"""
<div style="font-family:'IBM Plex Mono','Fira Code',monospace;
background:#0d1420;border:1px solid #38bdf8;
border-radius:12px;padding:18px 20px;margin-top:10px;
color:#e2e8f0;width:620px">
<div style="font-size:0.63em;color:#38bdf8;text-transform:uppercase;
letter-spacing:0.15em;margin-bottom:8px">Generating...</div>
<div style="font-size:0.82em;color:#94b8d4;margin-bottom:10px">
๐ช Learning your writing style...
</div>
<div style="background:#020408;border:1px solid #38bdf822;border-radius:8px;
padding:10px 14px;font-size:0.8em;color:#7dd3fc;font-style:italic">
"{new_task.value.strip()}"
</div>
</div>
"""))
display(timer_output)
start_time = time.time()
result_container = {}
def run_model():
prompt = f"Here are examples of how I write:\n\nExample 1:\n{example1.value.strip()}\n"
if example2.value.strip():
prompt += f"\nExample 2:\n{example2.value.strip()}\n"
prompt += f"\nNow write in the SAME style, tone, and voice as the examples above.\nTask: {new_task.value.strip()}"
try:
response = model.create_chat_completion(
messages=[
{"role": "system", "content": "You are a writing assistant. Study the examples provided and write new content that perfectly matches the user's tone, style, and voice. Maximum 3 sentences."},
{"role": "user", "content": prompt}
],
max_tokens=200,
temperature=temp_slider.value,
)
result_container["result"] = response["choices"][0]["message"]["content"].strip()
result_container["tokens"] = response["usage"]["completion_tokens"]
except Exception as e:
result_container["error"] = str(e)
thread = threading.Thread(target=run_model)
thread.start()
# โโ Countdown timer โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
total = 15
while thread.is_alive():
elapsed = int(time.time() - start_time)
remaining = max(0, total - elapsed)
bar_filled = int((elapsed / total) * 15)
bar = "โ" * bar_filled + "โ" * (15 - bar_filled)
with timer_output:
clear_output(wait=True)
display(HTML(
f'<div style="font-family:\'IBM Plex Mono\',monospace;'
f'padding:10px 0;font-size:0.85em;color:#a6adc8;">'
f'โณ Estimated time remaining: <span style="color:#38bdf8;font-weight:bold;font-size:1.1em">{remaining}</span> sec<br>'
f'<span style="color:#38bdf8;letter-spacing:2px">{bar}</span>'
f'</div>'
))
time.sleep(1)
elapsed = int(time.time() - start_time)
clear_output()
if "error" in result_container:
display(HTML(
f'<div style="color:#f87171;font-family:monospace;padding:10px">'
f'โ Error: {result_container["error"]}<br>๐ก Try running Step 2 again!</div>'
))
return
result = result_container["result"]
tokens = result_container["tokens"]
display(HTML(f"""
<div style="font-family:'IBM Plex Mono','Fira Code',monospace;
background:#0d1420;border:2px solid #34d399;
border-radius:12px;padding:18px 20px;margin-top:10px;
color:#e2e8f0;width:620px">
<div style="font-size:0.63em;color:#34d399;text-transform:uppercase;
letter-spacing:0.15em;margin-bottom:8px">Output</div>
<h3 style="color:#f0f6ff;margin:0 0 12px;font-size:1.0em">
๐ช AI writing in YOUR style
</h3>
<div style="background:#0f2336;border:1px solid #38bdf855;border-radius:8px;
padding:10px 14px;margin-bottom:12px;font-size:0.78em;
color:#7dd3fc;font-style:italic">
Task: "{new_task.value.strip()}"
</div>
<div style="background:#020408;border:1px solid #34d39922;border-radius:8px;
padding:14px 16px;font-size:0.82em;color:#cbd5e1;line-height:1.9;
white-space:pre-wrap">{result}</div>
<div style="display:flex;justify-content:space-between;
margin-top:12px;font-size:0.72em;color:#7a9bb5">
<span>๐ Tokens used: {tokens}/200</span>
<span>โฑ๏ธ Generated in: <span style="color:#34d399;font-weight:bold">{elapsed} sec</span></span>
</div>
<div style="margin-top:10px;background:#0d2b1a;border:1px solid #34d39944;
border-radius:6px;padding:8px 12px;font-size:0.78em;color:#6ee7b7">
๐ก Notice how it matches your tone and style? Try changing the task and run again!
</div>
</div>
"""))
def on_clear(b):
with output_area:
clear_output()
example1.value = ""
example2.value = ""
new_task.value = ""
generate_btn.on_click(on_generate)
clear_btn.on_click(on_clear)๐๏ธ Part 5a: AI Control Panel โ Mix Your Perfect Response!ยถ
Part 5a: AI Control Panel - Mix Your Perfect Response!¶
⚠️ Make sure you ran Parts 1, 2, 3, 4, and 5 first!¶
So far you've learned how system prompts and few-shot examples shape what the AI says.
Now let's go deeper and control how the AI thinks.
🎧 Think of it like a DJ mixing board.
Each slider changes one aspect of how the AI generates text.
Small changes can lead to very different results!
Here's what each parameter does:
| Parameter | What it controls |
|---|---|
| Temperature | How creative vs. predictable the AI is |
| Top-p | How wide or narrow the AI's word choices are |
| Freq Penalty | How much the AI avoids repeating the same words |
| Presence Penalty | How likely the AI is to introduce new topics |
| Max Tokens | How long the response can be |
Your job: Try the same prompt with different settings and observe what changes.
Use the preset buttons to quickly load interesting combinations, or tune the sliders yourself!
💡 Tips for best results:
Run at least 3-5 experiments with the same prompt but different settings
Try asking for a recommendation letter, a poem, or a data science explanation
Add a System Prompt to give the AI a persona (e.g. Professor Eric) and see how it changes the tone
The more experiments you run, the more interesting the comparison in Part 5b will be!
Every experiment is automatically saved: run the Compare Results cell below to see all your experiments side by side!
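Two of these controls are worth sketching in plain Python. Top-p (nucleus sampling) keeps only the smallest set of candidate words whose cumulative probability reaches p; the frequency and presence penalties subtract from a word's score based on whether, and how often, it has already appeared. The numbers below are invented, and the penalty formula follows the common OpenAI-style definition, which local runtimes only approximate:

```python
def nucleus(probs, top_p):
    """Keep the smallest set of words whose cumulative probability >= top_p."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for word, p in ranked:
        kept.append(word)
        cumulative += p
        if cumulative >= top_p:
            break
    return kept

def penalized_score(logit, count, freq_penalty, presence_penalty):
    """Frequency penalty scales with the repeat count; presence penalty is a
    flat one-time deduction once a word has appeared at all."""
    return logit - freq_penalty * count - presence_penalty * (count > 0)

probs = {"the": 0.5, "a": 0.3, "zebra": 0.15, "qux": 0.05}
print(nucleus(probs, 0.5))    # narrow pool: only the most common word
print(nucleus(probs, 0.95))   # wide pool: rarer words stay in play
print(penalized_score(2.0, 3, 0.5, 0.5))  # a word already used 3 times
```

This is why the "Creative" preset pairs a high temperature with a high top-p: a flat distribution only matters if the rarer words survive the nucleus cutoff.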
# โโ AI Control Panel โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
W = "660px"
WS = "510px"
LABEL_HTML = """<div style="font-family:'IBM Plex Mono','Fira Code',monospace;
margin:14px 0 3px; font-size:0.92em; color:#1a1a2e; font-weight:800;
letter-spacing:0.03em;">
<span style="color:{accent}; font-size:1.1em;">{icon}</span> {text}
</div>"""
HINT_HTML = """<div style="margin:-2px 0 10px 0; font-size:0.75em; color:#555;
font-family:'IBM Plex Mono',monospace; border-left:3px solid {accent};
padding-left:8px;">{text}</div>"""
# โโ Initialize experiment log โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
try:
results_df
except NameError:
results_df = pd.DataFrame(columns=[
"Prompt", "System Prompt", "Temperature", "Top-p",
"Freq Penalty", "Presence Penalty",
"Max Tokens", "AI Response"
])
# โโ Widgets โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
system_prompt_input = widgets.Textarea(
placeholder="Optional: Give the AI a persona or instructions...\nE.g. 'You are Professor Eric Van Dusen, write a recommendation letter for this student.'",
description="",
layout={"width": W, "height": "80px"}
)
task_input = widgets.Textarea(
placeholder="What do you want the AI to write?",
description="",
layout={"width": W, "height": "80px"}
)
temp_slider = widgets.FloatSlider(
value=0.7, min=0.0, max=2.0, step=0.1,
description="", layout={"width": WS}, readout_format='.1f'
)
top_p_slider = widgets.FloatSlider(
value=0.9, min=0.1, max=1.0, step=0.1,
description="", layout={"width": WS}, readout_format='.1f'
)
freq_slider = widgets.FloatSlider(
value=0.0, min=0.0, max=2.0, step=0.2,
description="", layout={"width": WS}, readout_format='.1f'
)
presence_slider = widgets.FloatSlider(
value=0.0, min=0.0, max=2.0, step=0.2,
description="", layout={"width": WS}, readout_format='.1f'
)
length_slider = widgets.IntSlider(
value=150, min=50, max=500, step=50,
description="", layout={"width": WS}
)
# โโ Preset buttons โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
preset_conservative = widgets.Button(description="๐ Academic", button_style="info", layout={"width": "150px"})
preset_creative = widgets.Button(description="๐จ Creative", button_style="warning", layout={"width": "150px"})
preset_concise = widgets.Button(description="โก Concise", button_style="success", layout={"width": "150px"})
preset_diverse = widgets.Button(description="๐ Diverse", button_style="danger", layout={"width": "150px"})
generate_btn = widgets.Button(
description="๐ Generate",
button_style="primary",
layout={"width": "160px", "height": "40px"}
)
output_area = widgets.Output()
preset_out = widgets.Output()
# โโ Header โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
display(HTML(f"""
<div style="font-family:'IBM Plex Mono','Fira Code',monospace;
background:#0d1420;border:1px solid #3b5268;
border-radius:12px;padding:18px 20px;margin:10px 0;
color:#e2e8f0;width:644px">
<div style="font-size:0.63em;color:#7ea8c9;text-transform:uppercase;
letter-spacing:0.15em;margin-bottom:8px">Part 5a</div>
<h3 style="color:#f0f6ff;margin:0 0 6px;font-size:1.0em">๐๏ธ AI Control Panel</h3>
<p style="color:#94b8d4;font-size:0.82em;margin:0">
Tweak these settings like a DJ mixing a track ๐ง
</p>
</div>
"""))
display(HTML(LABEL_HTML.format(icon="๐ญ", text="System Prompt:", accent="#a855f7")))
display(HTML(HINT_HTML.format(text="Optional โ give the AI a persona, role, or special instructions", accent="#a855f7")))
display(system_prompt_input)
display(HTML(LABEL_HTML.format(icon="๐", text="Prompt:", accent="#3b82f6")))
display(task_input)
display(HTML("""<div style="font-family:'IBM Plex Mono',monospace;
margin:18px 0 6px; font-size:0.92em; color:#1a1a2e; font-weight:800;">
๐๏ธ Mix the Parameters:</div>"""))
display(HTML(LABEL_HTML.format(icon="๐ก๏ธ", text="Creativity:", accent="#f97316")))
display(temp_slider)
display(HTML(HINT_HTML.format(text="Low = safe & precise | High = wild & creative", accent="#f97316")))
display(HTML(LABEL_HTML.format(icon="๐ฏ", text="Word Pool:", accent="#8b5cf6")))
display(top_p_slider)
display(HTML(HINT_HTML.format(text="Low = common words only | High = surprising words", accent="#8b5cf6")))
display(HTML(LABEL_HTML.format(icon="๐ซ", text="Anti-Repeat:", accent="#ef4444")))
display(freq_slider)
display(HTML(HINT_HTML.format(text="Low = allows repetition | High = every word different", accent="#ef4444")))
display(HTML(LABEL_HTML.format(icon="๐ก", text="New Topics:", accent="#10b981")))
display(presence_slider)
display(HTML(HINT_HTML.format(text="Low = stay on topic | High = jump to new topics", accent="#10b981")))
display(HTML(LABEL_HTML.format(icon="๐", text="Length:", accent="#3b82f6")))
display(length_slider)
display(HTML(HINT_HTML.format(text="50 = short | 150 = medium | 500 = long (more tokens = longer wait!)", accent="#3b82f6")))
display(HTML("""<div style="font-family:'IBM Plex Mono',monospace;
margin:18px 0 6px; font-size:0.92em; color:#1a1a2e; font-weight:800;">
๐ญ Or try a preset:</div>"""))
display(widgets.HBox([preset_conservative, preset_creative, preset_concise, preset_diverse],
layout=widgets.Layout(gap="8px")))
display(preset_out)
display(widgets.HTML("<div style='margin:10px 0 6px'></div>"))
display(generate_btn)
display(output_area)
# โโ Preset handlers โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
def set_preset(temp, top_p, freq, presence, length, label):
temp_slider.value = temp
top_p_slider.value = top_p
freq_slider.value = freq
presence_slider.value = presence
length_slider.value = length
with preset_out:
clear_output()
display(HTML(f"""
<div style="font-family:'IBM Plex Mono',monospace;background:#0d2b1a;
border:1px solid #34d39944;border-radius:6px;
padding:8px 12px;font-size:0.78em;color:#6ee7b7;margin:6px 0">
✅ Preset loaded: <strong>{label}</strong>
</div>
"""))
preset_conservative.on_click(lambda b: set_preset(0.3, 0.5, 0.0, 0.0, 200, "๐ Academic & Formal"))
preset_creative.on_click( lambda b: set_preset(1.5, 0.95, 0.5, 0.8, 250, "๐จ Creative & Wild"))
preset_concise.on_click( lambda b: set_preset(0.5, 0.7, 0.0, 0.0, 100, "โก Short & Clear"))
preset_diverse.on_click( lambda b: set_preset(0.8, 0.9, 1.5, 1.0, 200, "๐ Maximum Variety"))
# ── Generate handler ──────────────────────────────────────────────────
def on_generate(b):
global results_df
with output_area:
clear_output()
if not task_input.value.strip():
display(HTML('<span style="color:#f87171;font-family:monospace">โ ๏ธ Please enter a prompt!</span>'))
return
# Auto-calculate estimated time based on token count
est_time = int(length_slider.value / 250 * 20)
total = est_time
timer_output = widgets.Output()
sys_prompt = system_prompt_input.value.strip()
sys_label = f'<div style="font-size:0.75em;color:#a855f7;margin-bottom:6px">๐ญ {sys_prompt[:80]}{"..." if len(sys_prompt) > 80 else ""}</div>' if sys_prompt else ""
display(HTML(f"""
<div style="font-family:'IBM Plex Mono','Fira Code',monospace;
background:#0d1420;border:1px solid #38bdf8;
border-radius:12px;padding:18px 20px;margin-top:10px;
color:#e2e8f0;width:620px">
<div style="font-size:0.63em;color:#38bdf8;text-transform:uppercase;
letter-spacing:0.15em;margin-bottom:8px">Generating...</div>
{sys_label}
<div style="display:flex;flex-wrap:wrap;gap:12px;font-size:0.75em;
color:#7a9bb5;margin-bottom:12px">
<span>🌡️ {temp_slider.value}</span>
<span>🎯 {top_p_slider.value}</span>
<span>🚫 {freq_slider.value}</span>
<span>💡 {presence_slider.value}</span>
<span>📏 {length_slider.value} tokens</span>
</div>
<div style="background:#020408;border:1px solid #38bdf822;border-radius:8px;
padding:10px 14px;font-size:0.8em;color:#7dd3fc;font-style:italic">
"{task_input.value.strip()}"
</div>
<div style="margin-top:8px;font-size:0.75em;color:#f9e2af;">
⚠️ Estimated wait: ~{est_time} sec for {length_slider.value} tokens
</div>
</div>
"""))
display(timer_output)
start_time = time.time()
result_container = {}
def run_model():
try:
messages = []
if sys_prompt:
messages.append({"role": "system", "content": sys_prompt})
messages.append({"role": "user", "content": task_input.value.strip()})
response = model.create_chat_completion(
messages=messages,
max_tokens=length_slider.value,
temperature=temp_slider.value,
top_p=top_p_slider.value,
frequency_penalty=freq_slider.value,
presence_penalty=presence_slider.value,
)
result_container["result"] = response["choices"][0]["message"]["content"].strip()
result_container["tokens"] = response["usage"]["completion_tokens"]
except Exception as e:
result_container["error"] = str(e)
thread = threading.Thread(target=run_model)
thread.start()
# ── Hatching chick countdown ──────────────────────────
while thread.is_alive():
elapsed = int(time.time() - start_time)
remaining = max(0, total - elapsed)
progress = min(elapsed / total, 1.0) if total > 0 else 1.0
if progress < 0.25:
emoji = "๐ฅ"
msg = "Warming up the egg..."
color = "#f9e2af"
elif progress < 0.5:
emoji = "๐ฅ๐ฅ"
msg = "Something is moving inside..."
color = "#fab387"
elif progress < 0.75:
emoji = "๐ฃ"
msg = "Almost there..."
color = "#a6e3a1"
else:
emoji = "๐ฅ"
msg = "Coming out!"
color = "#89dceb"
bar_filled = int(progress * 20)
bar = "๐ก" * bar_filled + "โฌ" * (20 - bar_filled)
with timer_output:
clear_output(wait=True)
display(HTML(
f'<div style="font-family:\'IBM Plex Mono\',monospace;'
f'padding:12px 0;font-size:1.1em;text-align:left;">'
f'<span style="font-size:2em">{emoji}</span><br>'
f'<span style="color:{color};font-weight:bold;">{msg}</span><br>'
f'<span style="font-size:0.8em;color:#a6adc8;">⏳ {remaining} sec remaining</span><br>'
f'<span style="letter-spacing:1px;font-size:0.85em">{bar}</span>'
f'</div>'
))
time.sleep(1)
# Hatched!
with timer_output:
clear_output(wait=True)
display(HTML(
'<div style="font-family:\'IBM Plex Mono\',monospace;'
'padding:12px 0;font-size:1.1em;">'
'<span style="font-size:2em">๐โจ</span><br>'
'<span style="color:#a6e3a1;font-weight:bold;">Ready! The chick has hatched!</span>'
'</div>'
))
time.sleep(0.5)
elapsed = int(time.time() - start_time)
clear_output()
if "error" in result_container:
display(HTML(
f'<div style="color:#f87171;font-family:monospace;padding:10px">'
f'❌ Error: {result_container["error"]}<br>💡 Try running Step 2 again!</div>'
))
return
result = result_container["result"]
tokens = result_container["tokens"]
prompt_preview = task_input.value.strip()[:50] + ("..." if len(task_input.value.strip()) > 50 else "")
response_preview = result[:300] + ("..." if len(result) > 300 else "")
new_row = {
"Prompt": prompt_preview,
"System Prompt": sys_prompt[:50] + ("..." if len(sys_prompt) > 50 else "") if sys_prompt else "(none)",
"Temperature": temp_slider.value,
"Top-p": top_p_slider.value,
"Freq Penalty": freq_slider.value,
"Presence Penalty": presence_slider.value,
"Max Tokens": length_slider.value,
"AI Response": response_preview
}
results_df = pd.concat([results_df, pd.DataFrame([new_row])], ignore_index=True)
display(HTML(f"""
<div style="font-family:'IBM Plex Mono','Fira Code',monospace;
background:#0d1420;border:2px solid #34d399;
border-radius:12px;padding:18px 20px;margin-top:10px;
color:#e2e8f0;width:620px">
<div style="font-size:0.63em;color:#34d399;text-transform:uppercase;
letter-spacing:0.15em;margin-bottom:8px">Output</div>
<div style="display:flex;flex-wrap:wrap;gap:12px;font-size:0.75em;
color:#7a9bb5;margin-bottom:12px">
<span>🌡️ Creativity: {temp_slider.value}</span>
<span>🎯 Word Pool: {top_p_slider.value}</span>
<span>🚫 Anti-Repeat: {freq_slider.value}</span>
<span>💡 New Topics: {presence_slider.value}</span>
</div>
<div style="background:#0f2336;border:1px solid #38bdf855;border-radius:8px;
padding:10px 14px;margin-bottom:12px;font-size:0.78em;
color:#7dd3fc;font-style:italic">
"{task_input.value.strip()}"
</div>
<div style="background:#020408;border:1px solid #34d39922;border-radius:8px;
padding:14px 16px;font-size:0.82em;color:#cbd5e1;line-height:1.9;
white-space:pre-wrap">{result}</div>
<div style="display:flex;justify-content:space-between;
margin-top:12px;font-size:0.72em;color:#7a9bb5">
<span>📊 Tokens used: {tokens}/{length_slider.value}</span>
<span>⏱️ Generated in: <span style="color:#34d399;font-weight:bold">{elapsed} sec</span>
| 💾 Experiment #{len(results_df)} saved!</span>
</div>
<div style="margin-top:10px;background:#0d2b1a;border:1px solid #34d39944;
border-radius:6px;padding:8px 12px;font-size:0.78em;color:#6ee7b7">
💡 Run the Compare Results cell below to see all your experiments!
</div>
</div>
"""))
generate_btn.on_click(on_generate)

📊 Part 5b: Compare Your Experiment Results
⚠️ Run Part 5a at least 2-3 times with different settings first!
Now let's look at all your experiments side by side.
💡 What to look for:
How did changing Temperature affect the writing style?
Did a higher Freq Penalty make the response more varied?
Which preset gave the best result for your prompt?
Did adding a System Prompt (like Professor Eric) change the tone?
🖨️ This table can be printed! Use File → Print to save your experiment results as a PDF.
Run the cell below to see your full experiment log! 👇
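The saved log is an ordinary pandas DataFrame, so you can also slice it yourself. Here is a minimal sketch; the two rows below are made-up stand-ins, whereas in the notebook `results_df` is filled in by your Part 5a runs:

```python
import pandas as pd

# Stand-ins for the rows that Part 5a appends to results_df, one per experiment.
results_df = pd.DataFrame([
    {"Temperature": 1.5, "Freq Penalty": 0.5, "AI Response": "A wild, surprising draft..."},
    {"Temperature": 0.3, "Freq Penalty": 0.0, "AI Response": "A careful, formal draft..."},
])

# Sort by Temperature so low-vs-high runs sit next to each other for comparison.
by_temp = results_df.sort_values("Temperature").reset_index(drop=True)
print(by_temp[["Temperature", "Freq Penalty"]])
```

Sorting (or grouping) on a settings column is the quickest way to see how one knob changed the output while the others stayed fixed.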
# ── Compare Experiment Results ──────────────────────────────
# Only initialize if results_df doesn't exist yet
try:
results_df
except NameError:
results_df = pd.DataFrame(columns=[
"Prompt", "System Prompt", "Temperature", "Top-p",
"Freq Penalty", "Presence Penalty", "Max Tokens", "AI Response"
])
refresh_btn = widgets.Button(
description="🔄 Refresh Results",
button_style="info",
layout={"width": "180px", "height": "40px"}
)
results_out = widgets.Output()
display(refresh_btn)
display(results_out)
def show_results(_):
with results_out:
clear_output()
if len(results_df) == 0:
display(HTML("""
<div style="font-family:'IBM Plex Mono','Fira Code',monospace;
background:#1a1200;border:1px solid #f59e0b;
border-radius:12px;padding:18px 20px;margin:10px 0;
color:#e2e8f0;width:644px">
<div style="font-size:0.63em;color:#f59e0b;text-transform:uppercase;
letter-spacing:0.15em;margin-bottom:8px">Part 5b</div>
<h3 style="color:#f0f6ff;margin:0 0 10px;font-size:1.0em">⚠️ No Results Yet!</h3>
<p style="color:#fcd34d;font-size:0.82em;margin:0">
Go to Part 5a, adjust the sliders, and click Generate at least once.<br>
Then come back and click Refresh Results.
</p>
</div>
"""))
else:
display(HTML(f"""
<div style="font-family:'IBM Plex Mono','Fira Code',monospace;
background:#0d1420;border:1px solid #3b5268;
border-radius:12px;padding:18px 20px;margin:10px 0;
color:#e2e8f0;width:644px">
<div style="font-size:0.63em;color:#7ea8c9;text-transform:uppercase;
letter-spacing:0.15em;margin-bottom:8px">Part 5b</div>
<h3 style="color:#f0f6ff;margin:0 0 6px;font-size:1.0em">📊 Experiment Results</h3>
<p style="color:#94b8d4;font-size:0.82em;margin:0">
{len(results_df)} experiment(s) recorded; compare your settings below!
</p>
</div>
"""))
display_df = results_df.copy()
styled = display_df.style \
.format({
"Temperature" : "{:.1f}",
"Top-p" : "{:.1f}",
"Freq Penalty" : "{:.1f}",
"Presence Penalty" : "{:.1f}",
}) \
.set_properties(**{
"text-align" : "left",
"white-space" : "pre-wrap",
"max-width" : "300px",
"font-family" : "IBM Plex Mono, Fira Code, monospace",
"font-size" : "0.82em",
"color" : "#cbd5e1",
"background" : "#0d1420",
"padding" : "8px 12px",
}) \
.set_table_styles([
{"selector": "th", "props": [
("background-color", "#1e3a5f"),
("color", "#7dd3fc"),
("font-weight", "bold"),
("text-align", "center"),
("padding", "10px 12px"),
("font-family", "IBM Plex Mono, monospace"),
("font-size", "0.82em"),
("border-bottom", "2px solid #38bdf8"),
]},
{"selector": "tr:nth-child(even) td", "props": [
("background-color", "#111827"),
]},
{"selector": "tr:hover td", "props": [
("background-color", "#1e3a5f"),
]},
{"selector": "table", "props": [
("border-collapse", "collapse"),
("width", "100%"),
]},
]) \
.highlight_max(subset=["Temperature"], color="#2d1f00") \
.highlight_min(subset=["Temperature"], color="#0f2336")
display(styled)
results_df.to_csv("experiment_results.csv", index=False)
display(HTML("""
<div style="font-family:'IBM Plex Mono','Fira Code',monospace;
background:#0d2b1a;border:1px solid #34d39944;
border-radius:8px;padding:12px 16px;margin-top:12px;
font-size:0.78em;color:#6ee7b7;width:620px">
<div style="margin-bottom:6px">
💾 Results saved to <strong style="color:#34d399">experiment_results.csv</strong>
</div>
<div style="color:#94b8d4">
🔍 Look at the Temperature column: how did changing it affect the AI response?
</div>
</div>
"""))
refresh_btn.on_click(show_results)
show_results(None)

🪢 Part 6: Hangman Quiz
⚠️ Review Parts 3-5 before starting!
Test your AI Prompting mastery with this interactive challenge. Can you save the chick? 🐣
📋 Game Rules
| Rule | Detail |
|---|---|
| 🎯 Objective | Answer 5 random questions correctly |
| ❤️ Lives | 6 wrong answers before game over 💀 |
| 📚 Topics | Temperature · System Prompts · Few-Shot Learning · Penalties |
💡 Quick Reference
| Concept | Remember This |
|---|---|
| 🌡️ Temperature | High = Creative · Low = Predictable |
| 🎭 System Prompt | The AI's "backstage" rules that set the persona |
| 💪 Few-Shot | Show, don't tell: lead with examples |
| 🎛️ Penalties | Freq = less repetition · Presence = more new topics |
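To make the Temperature row concrete, here is a toy sketch in plain Python (not the model's actual implementation; the scores are made up): temperature divides each candidate word's score before the scores are turned into probabilities, so low values sharpen the distribution and high values flatten it.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw next-word scores into probabilities.

    Dividing by a low temperature sharpens the distribution (safe,
    predictable picks); a high temperature flattens it (creative,
    surprising picks)."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 1.0, 0.5]  # hypothetical scores for three candidate words
cold = softmax_with_temperature(scores, 0.2)  # like the Academic & Formal preset
hot = softmax_with_temperature(scores, 1.5)   # like the Creative & Wild preset
# At low temperature the top word dominates; at high temperature the
# probability spreads across all the candidates.
```

This is why low-temperature output feels repetitive and safe while high-temperature output wanders: the model is literally sampling from a sharper or flatter distribution over the same candidates.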
👇 Run the cell below to start the challenge!
# ── Part 6: Hangman Quiz ──────────────────────────────
QUESTION_POOL = [
{
"q": "What does a high Temperature (e.g. 1.5) do to the AI's output?",
"options": ["Makes responses shorter", "Makes output more creative and unpredictable", "Makes the AI repeat itself more", "Makes the AI refuse to answer"],
"answer": 1, "explanation": "High temperature = more randomness = more creative and surprising output. Low temperature = safer and more predictable.", "part": "Part 4",
},
{
"q": "What does a low Temperature (e.g. 0.2) do to the AI's output?",
"options": ["Makes output more creative", "Makes the AI faster", "Makes output safer and more predictable", "Makes the AI use more tokens"],
"answer": 2, "explanation": "Low temperature means the AI sticks to the most likely words: safe, consistent, and precise.", "part": "Part 4",
},
{
"q": "What is the main purpose of a System Prompt?",
"options": ["It sets the maximum response length", "It gives the AI a persona, role, or rules to follow", "It controls the temperature automatically", "It translates the user's message"],
"answer": 1, "explanation": "The system prompt acts like backstage instructions; it shapes how the AI behaves throughout the conversation.", "part": "Part 3",
},
{
"q": "In Part 3c, you saw the same question answered by 4 different personas. What changed each time?",
"options": ["The Temperature slider", "The Max Tokens setting", "The System Prompt", "The user's question"],
"answer": 2, "explanation": "Same question, different system prompts = totally different answers. The system prompt defines the AI's identity.", "part": "Part 3c",
},
{
"q": "What is Few-Shot Learning?",
"options": ["Training the AI from scratch with a small dataset", "Showing the AI 2-3 examples so it matches your style", "Setting Temperature to a low value", "Using a shorter system prompt"],
"answer": 1, "explanation": "Few-shot learning shows the AI examples of your writing style; it then mimics your tone and format without any training.", "part": "Part 5",
},
{
"q": "In Few-Shot Learning, what do the examples teach the AI?",
"options": ["New vocabulary it didn't know before", "Your writing style, tone, and format", "How to use the sliders correctly", "How to avoid hallucination"],
"answer": 1, "explanation": "The examples show the AI patterns to follow (tone, style, and format) without any retraining required.", "part": "Part 5",
},
{
"q": "The AI confidently tells you that Einstein invented the telephone. What is this called?",
"options": ["Sycophancy", "Hallucination", "Few-shot error", "Temperature spike"],
"answer": 1, "explanation": "Hallucination is when the AI invents false information and presents it confidently. Always verify AI output!", "part": "Part 3a",
},
{
"q": "You say 'The Earth is flat, right?' and the AI agrees. What behavior is this?",
"options": ["Hallucination", "High temperature effect", "Sycophancy", "Context window overflow"],
"answer": 2, "explanation": "Sycophancy is when the AI agrees with you even when you're wrong; it's being a 'Yes-Man'.", "part": "Part 3a",
},
{
"q": "What does the Freq Penalty (Anti-Repeat) slider control?",
"options": ["How long the response is", "How creative the AI is", "How much the AI avoids repeating the same words", "How fast the AI responds"],
"answer": 2, "explanation": "Frequency penalty penalises words that have already appeared; a higher value means more varied vocabulary.", "part": "Part 5a",
},
{
"q": "You want the AI to stay focused on one topic and not introduce new ideas. Which slider should stay LOW?",
"options": ["Temperature", "Top-p", "Presence Penalty", "Max Tokens"],
"answer": 2, "explanation": "Presence Penalty encourages new topics. Keep it low for focused, on-topic responses.", "part": "Part 5a",
},
{
"q": "What is the best way to get a longer AI response?",
"options": ["Set Temperature to 2.0", "Increase the Max Tokens slider", "Set Freq Penalty to 0", "Write a longer prompt"],
"answer": 1, "explanation": "Max Tokens directly controls how long the response can be. Temperature affects style, not length.", "part": "Part 5a",
},
{
"q": "The AI cuts off mid-sentence. What is the most likely cause?",
"options": ["Temperature is too high", "Max Tokens is set too low", "Freq Penalty is too high", "The system prompt is too long"],
"answer": 1, "explanation": "When Max Tokens runs out, the AI stops, even mid-sentence. Increase the Length slider to fix this.", "part": "Part 5a",
},
{
"q": "Which preset in Part 5a is best for writing a formal academic essay?",
"options": ["Creative & Wild", "Maximum Variety", "Academic & Formal", "Short & Clear"],
"answer": 2, "explanation": "Academic & Formal uses low temperature (0.3) and conservative settings: safe, precise, and professional.", "part": "Part 5a",
},
{
"q": "What does Top-p (Word Pool) control?",
"options": ["The maximum number of words in the response", "How wide or narrow the AI's word choices are", "How often the AI changes topics", "The AI's response speed"],
"answer": 1, "explanation": "Top-p controls the pool of words the AI picks from. Low = common words only. High = surprising and varied words.", "part": "Part 5a",
},
{
"q": "In Part 4, you gave the AI 2 writing examples and asked it to write in your style. What technique is this?",
"options": ["Zero-shot prompting", "System prompt engineering", "Few-shot learning", "Temperature tuning"],
"answer": 2, "explanation": "Providing examples to guide the AI's style is called few-shot learning: show, don't tell!", "part": "Part 4",
},
{
"q": "Zero-shot prompting means:",
"options": ["Setting Temperature to zero", "Giving the AI no examples โ just asking directly", "Using no system prompt", "Setting Max Tokens to zero"],
"answer": 1, "explanation": "Zero-shot = no examples provided. The AI answers based purely on its training. Few-shot = with examples.", "part": "Part 4",
},
{
"q": "In the Writing Assistant (Part 4), what does the System Prompt box control?",
"options": ["The length of the response", "The AI's persona and behaviour rules", "The creativity level", "The language of the response"],
"answer": 1, "explanation": "The system prompt defines who the AI is and how it should behave; change it to completely transform the AI's personality.", "part": "Part 4",
},
{
"q": "Professor Eric Van Dusen responds warmly and encouragingly. What makes him different from the Data 100 TA?",
"options": ["Different Temperature setting", "Different Max Tokens", "Different System Prompt", "Different user question"],
"answer": 2, "explanation": "Each persona in Part 3c has a unique system prompt that defines their personality, tone, and style.", "part": "Part 3c",
},
{
"q": "What should you always do after receiving important information from an AI?",
"options": ["Increase the Temperature and ask again", "Trust it completely โ AI is always accurate", "Verify the facts from reliable sources", "Copy it directly without reading"],
"answer": 2, "explanation": "Never blindly trust AI output. Hallucination is real; always verify important facts from reliable sources.", "part": "Part 3a",
},
{
"q": "Which of these is the best prompt for Few-Shot Learning in Part 5?",
"options": ["'Write like me'", "Paste 2-3 examples of your own writing, then give a task", "'Be creative and match my style'", "'Use high temperature to match my tone'"],
"answer": 1, "explanation": "Show the AI concrete examples of your writing; the more specific the examples, the better it matches your style.", "part": "Part 5",
},
]
NUM_QUESTIONS = 5
MAX_WRONG = 6
pool_json = json.dumps(QUESTION_POOL)
html = f"""
<style>
.hm-wrap {{
font-family: 'IBM Plex Mono', 'Fira Code', monospace;
background: #080c12;
border-radius: 14px;
padding: 20px;
max-width: 860px;
}}
.hm-header {{
background: linear-gradient(135deg, #0a1628, #0d1f3c);
border: 1px solid #1e3a5f;
border-radius: 12px;
padding: 16px 20px;
margin-bottom: 16px;
display: flex;
justify-content: space-between;
align-items: center;
}}
.hm-title {{ color: #f0f6ff; font-weight: 700; font-size: 1.05em; }}
.hm-sub {{ color: #475569; font-size: 0.72em; margin-top: 4px; }}
.hm-body {{
display: flex;
gap: 20px;
align-items: flex-start;
margin-bottom: 16px;
}}
.hm-scaffold {{
flex-shrink: 0;
background: #0d1420;
border: 1px solid #1e293b;
border-radius: 12px;
padding: 12px;
display: flex;
flex-direction: column;
align-items: center;
gap: 8px;
width: 160px;
}}
.hm-lives {{
font-size: 0.72em;
color: #475569;
text-align: center;
}}
.hm-lives span {{ color: #f87171; font-weight: 700; }}
.hm-qpanel {{ flex: 1; }}
.hm-progress {{
font-size: 0.72em;
color: #475569;
margin-bottom: 10px;
display: flex;
justify-content: space-between;
}}
.hm-progress .correct-count {{ color: #34d399; font-weight: 700; }}
.hm-progress .wrong-count {{ color: #f87171; font-weight: 700; }}
.q-block {{
background: #0d1420;
border: 1px solid #1e293b;
border-radius: 12px;
padding: 16px 18px;
}}
.q-meta {{ display: flex; align-items: center; gap: 8px; margin-bottom: 10px; }}
.q-part {{
background: #1e3a5f; color: #38bdf8; font-size: 0.62em; font-weight: 700;
padding: 2px 9px; border-radius: 20px; text-transform: uppercase; letter-spacing: 0.05em;
}}
.q-num {{ color: #475569; font-size: 0.68em; }}
.q-text {{ color: #e2e8f0; font-size: 0.88em; font-weight: 600; line-height: 1.6; margin-bottom: 12px; }}
.opt {{
background: #111827; border: 1px solid #1e293b; border-radius: 8px;
padding: 11px 14px; margin-bottom: 7px; cursor: pointer;
display: flex; align-items: flex-start; gap: 10px;
transition: border-color 0.15s, background 0.15s;
}}
.opt:hover {{ border-color: #38bdf8; background: #0f1f35; }}
.opt-letter {{
background: #1e293b; color: #64748b; font-size: 0.70em; font-weight: 700;
width: 20px; height: 20px; border-radius: 50%;
display: flex; align-items: center; justify-content: center; flex-shrink: 0; margin-top: 1px;
}}
.opt-text {{ color: #94a3b8; font-size: 0.82em; line-height: 1.5; }}
.opt.correct {{ background: #052e16; border-color: #34d399; cursor: default; }}
.opt.correct .opt-letter {{ background: #34d399; color: #052e16; }}
.opt.correct .opt-text {{ color: #a7f3d0; }}
.opt.wrong {{ background: #1c0a0a; border-color: #f87171; cursor: default; }}
.opt.wrong .opt-letter {{ background: #f87171; color: #1c0a0a; }}
.opt.wrong .opt-text {{ color: #fca5a5; }}
.opt.locked {{ cursor: default; }}
.opt.locked:hover {{ border-color: #1e293b; background: #111827; }}
.opt.locked.correct:hover {{ border-color: #34d399; background: #052e16; }}
.opt.dim {{ opacity: 0.35; cursor: default; pointer-events: none; }}
.feedback {{
margin-top: 8px; padding: 10px 14px; border-radius: 0 8px 8px 0;
font-size: 0.78em; line-height: 1.6; color: #cbd5e1; display: none;
}}
.feedback.show {{ display: block; }}
.feedback.correct-fb {{ background: #34d39910; border-left: 3px solid #34d399; }}
.feedback.wrong-fb {{ background: #f8717110; border-left: 3px solid #f87171; }}
.next-btn {{
margin-top: 12px; padding: 8px 20px; border-radius: 8px;
background: #1e3a5f; border: 1px solid #38bdf8; color: #38bdf8;
font-family: inherit; font-size: 0.82em; font-weight: 700;
cursor: pointer; display: none; transition: background 0.15s;
}}
.next-btn:hover {{ background: #0f1f35; }}
.next-btn.visible {{ display: inline-block; }}
.final-screen {{
background: #0d1420; border-radius: 12px; padding: 28px 24px;
text-align: center; display: none;
}}
.final-screen.show {{ display: block; }}
.final-big {{ font-size: 3.0em; font-weight: 700; line-height: 1; margin-bottom: 8px; }}
.final-msg {{ font-size: 0.88em; margin-bottom: 16px; }}
.retry-btn {{
display: inline-block; padding: 10px 24px; border-radius: 8px;
font-family: inherit; font-size: 0.85em; font-weight: 700;
cursor: pointer; border: 1px solid; transition: opacity 0.15s;
}}
.retry-btn:hover {{ opacity: 0.75; }}
.hm-svg line, .hm-svg circle, .hm-svg path {{
stroke-linecap: round;
transition: opacity 0.3s;
}}
.hm-stroke {{ opacity: 0; }}
.hm-stroke.drawn {{ opacity: 1; }}
</style>
<div class="hm-wrap">
<div class="hm-header">
<div>
<div style="font-size:0.58em;color:#38bdf8;text-transform:uppercase;letter-spacing:0.15em;margin-bottom:3px">Part 6 - Final Quiz</div>
<div class="hm-title">🪢 Hangman Quiz</div>
<div class="hm-sub">Wrong answer = one stroke. 6 strokes = game over. 💀</div>
</div>
<div style="text-align:right;color:#475569;font-size:0.70em">
<div>{NUM_QUESTIONS} questions</div>
<div style="color:#38bdf8;font-weight:700">{MAX_WRONG} lives</div>
</div>
</div>
<div class="hm-body">
<div class="hm-scaffold">
<svg class="hm-svg" width="120" height="140" viewBox="0 0 120 140">
<line x1="10" y1="135" x2="110" y2="135" stroke="#1e3a5f" stroke-width="3"/>
<line x1="30" y1="135" x2="30" y2="10" stroke="#1e3a5f" stroke-width="3"/>
<line x1="30" y1="10" x2="75" y2="10" stroke="#1e3a5f" stroke-width="3"/>
<line x1="75" y1="10" x2="75" y2="28" stroke="#1e3a5f" stroke-width="3"/>
<circle class="hm-stroke" id="hm-s1" cx="75" cy="38" r="10" stroke="#f87171" stroke-width="2.5" fill="none"/>
<line class="hm-stroke" id="hm-s2" x1="75" y1="48" x2="75" y2="90" stroke="#f87171" stroke-width="2.5"/>
<line class="hm-stroke" id="hm-s3" x1="75" y1="60" x2="52" y2="78" stroke="#f87171" stroke-width="2.5"/>
<line class="hm-stroke" id="hm-s4" x1="75" y1="60" x2="98" y2="78" stroke="#f87171" stroke-width="2.5"/>
<line class="hm-stroke" id="hm-s5" x1="75" y1="90" x2="52" y2="115" stroke="#f87171" stroke-width="2.5"/>
<line class="hm-stroke" id="hm-s6" x1="75" y1="90" x2="98" y2="115" stroke="#f87171" stroke-width="2.5"/>
</svg>
<div class="hm-lives">Lives left: <span id="lives-left">{MAX_WRONG}</span></div>
<div id="wrong-pills" style="display:flex;flex-wrap:wrap;gap:3px;justify-content:center;margin-top:4px"></div>
</div>
<div class="hm-qpanel">
<div class="hm-progress">
<span>Question <span id="q-current">1</span> / {NUM_QUESTIONS}</span>
<span><span class="correct-count" id="correct-count">0</span> correct <span class="wrong-count" id="wrong-count">0</span> wrong</span>
</div>
<div id="q-container"></div>
</div>
</div>
<div class="final-screen" id="final-screen"></div>
</div>
<script>
(function() {{
var POOL = {pool_json};
var N = {NUM_QUESTIONS};
var MAX_WRONG = {MAX_WRONG};
var wrongCount = 0;
var correctCount = 0;
var qIndex = 0;
var questions = [];
var gameOver = false;
var qContainer = document.getElementById('q-container');
var livesLeft = document.getElementById('lives-left');
var wrongPills = document.getElementById('wrong-pills');
var qCurrent = document.getElementById('q-current');
var correctEl = document.getElementById('correct-count');
var wrongEl = document.getElementById('wrong-count');
var finalScreen = document.getElementById('final-screen');
function shuffle(arr) {{
var a = arr.slice();
for (var i = a.length - 1; i > 0; i--) {{
var j = Math.floor(Math.random() * (i + 1));
var t = a[i]; a[i] = a[j]; a[j] = t;
}}
return a;
}}
function drawStroke(n) {{
var el = document.getElementById('hm-s' + n);
if (el) el.classList.add('drawn');
}}
function renderQuestion(qi) {{
var q = questions[qi];
qCurrent.textContent = qi + 1;
var opts = '';
q.options.forEach(function(opt, oi) {{
opts +=
'<div class="opt" id="opt-' + oi + '" onclick="hmAnswer(' + oi + ')">' +
'<div class="opt-letter">' + String.fromCharCode(65 + oi) + '</div>' +
'<div class="opt-text">' + opt + '</div>' +
'</div>';
}});
qContainer.innerHTML =
'<div class="q-block">' +
'<div class="q-meta">' +
'<span class="q-part">' + q.part + '</span>' +
'<span class="q-num">Q' + (qi + 1) + '</span>' +
'</div>' +
'<div class="q-text">' + q.q + '</div>' +
opts +
'<div class="feedback" id="fb"></div>' +
'<button class="next-btn" id="next-btn" onclick="hmNext()">Next question →</button>' +
'</div>';
}}
window.hmAnswer = function(oi) {{
if (gameOver) return;
var q = questions[qIndex];
var correct = q.answer;
var isRight = (oi === correct);
var fb = document.getElementById('fb');
var nextBtn = document.getElementById('next-btn');
for (var j = 0; j < q.options.length; j++) {{
var el = document.getElementById('opt-' + j);
if (!el) continue;
el.classList.add('locked');
el.removeAttribute('onclick');
if (j === correct) el.classList.add('correct');
else if (j === oi) el.classList.add('wrong');
else el.classList.add('dim');
}}
if (isRight) {{
correctCount++;
correctEl.textContent = correctCount;
fb.className = 'feedback show correct-fb';
fb.innerHTML = '✅ <strong style="color:#34d399">Correct!</strong> ' + q.explanation;
}} else {{
wrongCount++;
wrongEl.textContent = wrongCount;
drawStroke(wrongCount);
livesLeft.textContent = MAX_WRONG - wrongCount;
var pill = document.createElement('div');
pill.style.cssText = 'background:#f8717120;border:1px solid #f87171;border-radius:4px;padding:1px 6px;font-size:0.65em;color:#f87171';
pill.textContent = 'โ';
wrongPills.appendChild(pill);
fb.className = 'feedback show wrong-fb';
fb.innerHTML = '❌ <strong style="color:#f87171">Wrong.</strong> ' + q.explanation;
if (wrongCount >= MAX_WRONG) {{
gameOver = true;
setTimeout(function() {{ showFinal(false); }}, 1200);
return;
}}
}}
if (qIndex >= N - 1) {{
nextBtn.textContent = '📊 See results';
}}
nextBtn.classList.add('visible');
}};
window.hmNext = function() {{
qIndex++;
if (qIndex >= N) {{
showFinal(true);
}} else {{
renderQuestion(qIndex);
}}
}};
function showFinal(completed) {{
document.querySelector('.hm-body').style.display = 'none';
finalScreen.classList.add('show');
var pct = Math.round(correctCount / N * 100);
var color, emoji, msg;
if (!completed) {{
color = '#f87171'; emoji = '💀'; msg = 'Game Over! The man has been hanged...';
}} else if (correctCount === N) {{
color = '#34d399'; emoji = '🐔✨'; msg = 'Perfect! The chick is proud of you!';
}} else if (correctCount >= Math.ceil(N * 0.7)) {{
color = '#f9e2af'; emoji = '🐣'; msg = 'Close call, but you survived!';
}} else {{
color = '#f87171'; emoji = '🥚'; msg = 'The egg needs more warming... Review the notebook!';
}}
finalScreen.innerHTML =
'<div class="final-big" style="color:' + color + '">' + emoji + '</div>' +
'<div style="color:' + color + ';font-weight:700;font-size:1.1em;margin-bottom:6px">' + msg + '</div>' +
'<div class="final-msg" style="color:#94b8d4">' +
'Score: <strong style="color:' + color + '">' + correctCount + '/' + N + ' (' + pct + '%)</strong>' +
' ยท Wrong answers: <strong style="color:#f87171">' + wrongCount + '</strong>' +
'</div>' +
'<button class="retry-btn" style="background:' + color + '18;border-color:' + color + ';color:' + color + '" ' +
'onclick="hmRestart()">🔄 Play Again</button>';
}}
window.hmRestart = function() {{
wrongCount = correctCount = qIndex = 0;
gameOver = false;
for (var i = 1; i <= MAX_WRONG; i++) {{
var el = document.getElementById('hm-s' + i);
if (el) el.classList.remove('drawn');
}}
livesLeft.textContent = MAX_WRONG;
wrongPills.innerHTML = '';
correctEl.textContent = '0';
wrongEl.textContent = '0';
document.querySelector('.hm-body').style.display = 'flex';
finalScreen.classList.remove('show');
questions = shuffle(POOL).slice(0, N);
renderQuestion(0);
}};
questions = shuffle(POOL).slice(0, N);
renderQuestion(0);
}})();
</script>
"""
display(HTML(html))