Decision Boundary Visualizer

[Interactive plot: decision-boundary regions for Class A and Class B, with Class A and Class B training points; algorithm parameters are adjustable.]
How Decision Boundaries Work

K-Nearest Neighbors (KNN) is a non-parametric method used for classification.

1. Distance calculation - For each point in the grid, calculate its distance to all training points.

2. K selection - Find the K nearest training points to the grid point.

3. Majority vote - Classify the grid point based on the most frequent class among its K nearest neighbors.

4. Decision boundary - The boundary forms where the majority class changes.
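The four steps above can be sketched in a few lines of Python. The training set below matches the demo's six points, and k = 3 is an illustrative choice, not a value fixed by the visualizer.

```python
from collections import Counter
import math

# Training set matching the demo's data table
train = [((1, 1), "A"), ((2, 2), "A"), ((3, 3), "A"),
         ((1, 3), "B"), ((2, 1), "B"), ((3, 1), "B")]

def knn_classify(point, k=3):
    # 1. Distance calculation: distance from the grid point to every training point
    dists = [(math.dist(point, p), label) for p, label in train]
    # 2. K selection: keep the k nearest training points
    dists.sort(key=lambda t: t[0])
    nearest = dists[:k]
    # 3. Majority vote among the k neighbors
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# 4. The decision boundary forms where the predicted class flips:
print(knn_classify((2.5, 2.5)))  # region dominated by Class A
print(knn_classify((2.5, 0.5)))  # region dominated by Class B
```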

Data Points

X   Y   Class
1   1   A
2   2   A
3   3   A
1   3   B
2   1   B
3   1   B
Classification Algorithms Explained

K-Nearest Neighbors (KNN)

K-Nearest Neighbors (KNN) is a non-parametric method that makes decisions based on the proximity of examples.

Distance calculation

For a test point p = (x, y), calculate its distance to all training points p_i = (x_i, y_i):

Euclidean distance: d(p, p_i) = √[(x - x_i)² + (y - y_i)²]
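As a quick numeric check of the formula (the points here are illustrative):

```python
import math

# d(p, p_i) = sqrt((x - x_i)^2 + (y - y_i)^2)
def euclidean(p, q):
    return math.sqrt((p[0] - q[0])**2 + (p[1] - q[1])**2)

print(euclidean((0, 0), (3, 4)))  # 5.0
```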

Neighborhood selection

Sort all training points by distance and select the k closest points.

Majority voting

Count the frequency of each class among the k nearest neighbors:

For each class c, count N_c = number of neighbors belonging to class c

Predict class = argmax_c(N_c)

Decision boundary formation

The boundary appears where the majority class changes, creating complex, non-linear decision regions that adapt to local data patterns.
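To see boundary formation concretely, one can sweep a line across the plane and note where the predicted class flips. The training set matches the demo's six points; the sweep line x = 2 and k = 3 are illustrative choices.

```python
from collections import Counter
import math

train = [((1, 1), "A"), ((2, 2), "A"), ((3, 3), "A"),
         ((1, 3), "B"), ((2, 1), "B"), ((3, 1), "B")]

def predict(point, k=3):
    # Sort training points by distance and majority-vote among the k closest
    nearest = sorted(train, key=lambda t: math.dist(point, t[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Sweep the vertical line x = 2 and report where the class changes
prev = None
for i in range(21):
    y = i * 0.2  # y from 0.0 to 4.0
    cls = predict((2.0, y))
    if prev is not None and cls != prev:
        print(f"boundary crossed near y = {y:.1f}")
    prev = cls
```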

Support Vector Machine (Linear)

Support Vector Machine (SVM) finds the optimal hyperplane that maximizes the margin between classes.

Linear decision function

For each point (x,y), compute:

f(x,y) = w₁x + w₂y + b

Where w = [w₁, w₂] is the weight vector and b is the bias term

Classification rule

If f(x,y) ≥ 0, classify as Class A

If f(x,y) < 0, classify as Class B

Decision boundary equation

The boundary is defined where f(x,y) = 0

This creates the line equation: w₁x + w₂y + b = 0

Geometric interpretation

  • Vector w is perpendicular to the decision boundary
  • The distance from origin to boundary is |b|/||w||
  • Margin width = 2/||w|| (optimized by SVM training)
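A minimal sketch of the linear decision rule. The weights w = [1, 1] and bias b = -4 are arbitrary values chosen for illustration, not the result of SVM training.

```python
import math

# f(x, y) = w1*x + w2*y + b, with assumed (untrained) weights
w1, w2, b = 1.0, 1.0, -4.0

def classify(x, y):
    f = w1 * x + w2 * y + b
    return "A" if f >= 0 else "B"

print(classify(3, 3))  # f = 2  -> Class A
print(classify(1, 1))  # f = -2 -> Class B

# The boundary w1*x + w2*y + b = 0 is the line x + y = 4;
# its distance from the origin is |b| / ||w|| = 4 / sqrt(2)
print(abs(b) / math.hypot(w1, w2))
```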

Logistic Regression (Polynomial)

Logistic Regression with polynomial features creates flexible non-linear decision boundaries.

Feature transformation

Convert (x,y) into polynomial features:

For degree 2: [1, x, y, xy, x², y²]

For degree 3: Add [x³, y³, x²y, xy², ...]
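The degree-2 feature map can be sketched directly:

```python
def poly_features(x, y):
    # Degree-2 polynomial expansion: [1, x, y, xy, x^2, y^2]
    return [1.0, x, y, x * y, x**2, y**2]

print(poly_features(2, 3))  # [1.0, 2, 3, 6, 4, 9]
```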

Linear combination

Calculate the logit:

z = w₀ + w₁x + w₂y + w₃xy + w₄x² + w₅y² + ...

Each w_i represents the coefficient for feature i

Sigmoid transformation

Convert to probability:

P(class A) = σ(z) = 1/(1+e^(-z))

The output lies strictly between 0 and 1, representing the probability of belonging to Class A
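The sigmoid as a function (a small sketch; the sample inputs are illustrative):

```python
import math

def sigmoid(z):
    # sigma(z) = 1 / (1 + e^(-z)); output strictly between 0 and 1
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(0))   # 0.5: the decision threshold
print(sigmoid(4))   # ~0.98: confidently Class A
print(sigmoid(-4))  # ~0.02: confidently Class B
```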

Decision rule

If P(class A) ≥ 0.5, classify as Class A

If P(class A) < 0.5, classify as Class B

Equivalent to: classify as A if z ≥ 0, otherwise B

Boundary characteristics

Decision boundary is where P = 0.5, or equivalently z = 0

With polynomial features, the boundary becomes a curve satisfying:

w₀ + w₁x + w₂y + w₃xy + w₄x² + w₅y² + ... = 0
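Putting the pieces together: a polynomial logistic classifier with hand-picked (not fitted) coefficients. Setting w₀ = 4, w₄ = w₅ = -1, and all other w_i = 0 gives z = 4 - x² - y², so the P = 0.5 boundary is the circle x² + y² = 4.

```python
import math

# Assumed coefficients, not fitted: z = 4 - x^2 - y^2
def predict_proba(x, y):
    z = 4.0 - x**2 - y**2
    return 1.0 / (1.0 + math.exp(-z))  # P(class A) = sigma(z)

def predict(x, y):
    return "A" if predict_proba(x, y) >= 0.5 else "B"

# The boundary P = 0.5 (z = 0) is the circle x^2 + y^2 = 4
print(predict(0, 0))  # inside the circle  -> A
print(predict(3, 0))  # outside the circle -> B
```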