Generative models learn to produce new data that resembles their training data. Unlike discriminative models, which learn the boundary between classes, generative models learn the underlying distribution of the data itself.
| Aspect | Discriminative Models | Generative Models |
|---|---|---|
| Goal | Learn P(y \| x): the probability of a label given an input | Learn P(x): the distribution of the data itself |
| Output | A class label or value | New data samples (images, text, audio) |
| Examples | Logistic regression, CNNs, SVMs | GANs, VAEs, diffusion models |
| Analogy | A critic who judges art | An artist who creates art |
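To make the distinction concrete, here is a minimal illustrative sketch (not part of the lesson): the "generative" model is just a Gaussian fitted to 1-D training data, so it models P(x) and can sample new points, while the "discriminative" model is only a threshold that assigns labels and cannot generate anything.

```python
import torch

torch.manual_seed(0)

# Training data: draws from an unknown distribution (true mean 5, std 2).
data = torch.randn(1000) * 2.0 + 5.0

# Generative view: model P(x) itself -- here, a Gaussian parameterized
# by the sample mean and standard deviation.
mu, sigma = data.mean(), data.std()

# Because we modeled the distribution, we can generate new samples.
new_samples = mu + sigma * torch.randn(5)

# Discriminative view: learn only a decision rule (a threshold) that
# assigns labels. It separates classes but cannot produce new data.
threshold = data.median()
labels = (data > threshold).long()

print(new_samples)
print(labels[:10])
```

The asymmetry is the point: the fitted Gaussian can keep sampling new data forever, while the threshold can only judge data it is handed.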
An autoencoder learns to compress data into a low-dimensional representation (the latent space) and then reconstruct it. While not directly generative, autoencoders are the foundation for variational autoencoders.
Input → [Encoder] → Latent Code (z) → [Decoder] → Reconstructed Input
```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        # Encoder: compress the input down to a latent code z
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 256),
            nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        # Decoder: reconstruct the input from z
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.ReLU(),
            nn.Linear(256, input_dim),
            nn.Sigmoid(),  # squash outputs to [0, 1] for pixel values
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z)
```