This lesson covers the foundational building blocks of neural networks — from the single neuron (perceptron) through multi-layer architectures, activation functions, and the backpropagation algorithm that makes learning possible.
The perceptron is the simplest form of a neural network — a single artificial neuron that takes multiple inputs, multiplies each by a weight, sums them, adds a bias, and passes the result through an activation function.
output = activation(w1*x1 + w2*x2 + ... + wn*xn + b)
Where:
- x1, x2, ..., xn are the input features
- w1, w2, ..., wn are the learnable weights
- b is the bias term
- activation is the activation function

import numpy as np

class Perceptron:
    def __init__(self, n_features, lr=0.01):
        # Weights start at zero; lr is the learning rate.
        self.weights = np.zeros(n_features)
        self.bias = 0.0
        self.lr = lr

    def predict(self, x):
        # Step activation: fire (1) when the weighted sum is non-negative.
        z = np.dot(x, self.weights) + self.bias
        return 1 if z >= 0 else 0

    def train(self, X, y, epochs=100):
        # Perceptron learning rule: nudge weights toward each misclassified example.
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                pred = self.predict(xi)
                error = yi - pred
                self.weights += self.lr * error * xi
                self.bias += self.lr * error
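A quick way to see the learning rule in action is to train the class on the AND truth table, a linearly separable problem the perceptron convergence theorem guarantees it can learn. The class is restated here so the snippet runs on its own; the data and epoch count are illustrative choices, not part of the lesson.

```python
import numpy as np

class Perceptron:
    # Same class as above, restated so this example is self-contained.
    def __init__(self, n_features, lr=0.01):
        self.weights = np.zeros(n_features)
        self.bias = 0.0
        self.lr = lr

    def predict(self, x):
        z = np.dot(x, self.weights) + self.bias
        return 1 if z >= 0 else 0

    def train(self, X, y, epochs=100):
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                error = yi - self.predict(xi)
                self.weights += self.lr * error * xi
                self.bias += self.lr * error

# AND truth table: output is 1 only when both inputs are 1.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])

p = Perceptron(n_features=2)
p.train(X, y, epochs=100)
print([p.predict(xi) for xi in X])  # [0, 0, 0, 1]
```

Each misclassification shifts the decision boundary slightly toward the offending example; because AND is linearly separable, the updates eventually stop and the weights settle on a separating line.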