Programming Paradigms and Language Internals

Mar 18, 2026
Computer Science

For years I wrote JavaScript without thinking about what kind of programming I was actually doing. I'd mix objects with callbacks with pure functions, and it all worked. But when I started reading framework source code—React, Express, Redux—I noticed that the best codebases weren't just "writing JavaScript." They were making deliberate choices about how to structure logic, manage state, and compose behavior. Those choices have names, and understanding them changed how I write and reason about code.

This post is about the three major programming paradigms, how they show up in the languages I use every day, and what actually happens under the hood when your code runs.


The Three Paradigms

Every mainstream language supports some combination of three paradigms: procedural, object-oriented, and functional. They're not mutually exclusive—JavaScript, Python, and TypeScript let you mix all three—but each one has a different philosophy about how to organize a program.

Procedural Programming

Procedural programming is the most intuitive style. You write a sequence of instructions, and the computer executes them top to bottom. Variables hold state. Functions group related instructions. Control flow uses if, for, while.

let total = 0
const prices = [10, 20, 30, 15]
 
for (const price of prices) {
  if (price > 12) {
    total += price
  }
}
 
console.log(total) // 65

This is how most people start programming. It's direct, easy to trace, and works well for scripts and small programs. The problem is that as programs grow, procedural code becomes hard to organize. State is scattered across variables, and it's unclear which functions depend on which data.

Object-Oriented Programming

OOP organizes code around objects—bundles of data and the methods that operate on that data. The core ideas are encapsulation, inheritance, and polymorphism.

class BankAccount {
  private balance: number
 
  constructor(initial: number) {
    this.balance = initial
  }
 
  deposit(amount: number) {
    if (amount <= 0) throw new Error("Amount must be positive")
    this.balance += amount
  }
 
  withdraw(amount: number) {
    if (amount > this.balance) throw new Error("Insufficient funds")
    this.balance -= amount
  }
 
  getBalance() {
    return this.balance
  }
}
 
const account = new BankAccount(100)
account.deposit(50)
account.withdraw(30)
console.log(account.getBalance()) // 120

Encapsulation hides the balance field behind methods that enforce rules. You can't set the balance to -1000 from outside. The object protects its own invariants.

Inheritance lets you create specialized versions:

class SavingsAccount extends BankAccount {
  private interestRate: number
 
  constructor(initial: number, rate: number) {
    super(initial)
    this.interestRate = rate
  }
 
  applyInterest() {
    this.deposit(this.getBalance() * this.interestRate)
  }
}

Polymorphism means you can treat different objects uniformly if they share an interface. A function that accepts BankAccount also works with SavingsAccount.
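A quick sketch of that in action. The classes below condense the earlier BankAccount and SavingsAccount (using a protected field instead of private so the subclass stays short), and auditBalance is a hypothetical helper written against the base type:

```typescript
// Condensed sketch of the classes above (protected instead of private, for brevity)
class BankAccount {
  constructor(protected balance: number) {}

  deposit(amount: number) {
    if (amount <= 0) throw new Error("Amount must be positive")
    this.balance += amount
  }

  getBalance() {
    return this.balance
  }
}

class SavingsAccount extends BankAccount {
  constructor(initial: number, private rate: number) {
    super(initial)
  }

  applyInterest() {
    this.deposit(this.getBalance() * this.rate)
  }
}

// Written against BankAccount, but accepts any subtype with the same interface
function auditBalance(account: BankAccount): string {
  return `Balance: ${account.getBalance()}`
}

console.log(auditBalance(new BankAccount(100))) // "Balance: 100"

const savings = new SavingsAccount(100, 0.5)
savings.applyInterest() // deposits 100 * 0.5 = 50
console.log(auditBalance(savings)) // "Balance: 150"
```

auditBalance never needs to know which concrete class it received — that's the whole point.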

OOP works well when you have entities with clear state and behavior—users, orders, connections. It struggles when behavior doesn't map neatly to a single object, or when inheritance hierarchies get deep and rigid.

Functional Programming

Functional programming treats computation as the evaluation of pure functions—functions that always return the same output for the same input and produce no side effects. Instead of mutating state, you create new values.

const prices = [10, 20, 30, 15]
 
const expensiveTotal = prices
  .filter((price) => price > 12)
  .reduce((sum, price) => sum + price, 0)
 
console.log(expensiveTotal) // 65

No mutation, no loop counter, no intermediate variable being reassigned. Each step takes data in and returns data out. The result is the same as the procedural version, but the code is more composable and easier to reason about in isolation.

Key functional concepts:

  • Pure functions: No side effects, deterministic output.
  • Immutability: Don't mutate data—create new copies.
  • Higher-order functions: Functions that take or return other functions (map, filter, reduce).
  • Composition: Build complex behavior by combining simple functions.

const double = (x: number) => x * 2
const addOne = (x: number) => x + 1
 
const doubleThenAdd = (x: number) => addOne(double(x))
doubleThenAdd(5) // 11

React embraced functional programming deliberately. Components are functions. State is managed through hooks that return new values rather than mutating objects. The entire rendering model is based on the idea that UI is a pure function of state: UI = f(state).
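You don't need React to see the shape of that idea. A toy sketch — this is not real React; render here just returns a string so the purity is visible on its own:

```typescript
// Toy illustration of UI = f(state): same state in, same markup out
type State = { count: number }

const render = (state: State): string =>
  `<button>Clicked ${state.count} times</button>`

render({ count: 0 }) // "<button>Clicked 0 times</button>"
render({ count: 3 }) // "<button>Clicked 3 times</button>"

// Rendering twice with the same state always yields the same output —
// that determinism is what lets React diff old and new output safely.
```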


How Paradigms Show Up in Practice

In real codebases, you mix paradigms constantly. The skill is knowing when each one fits.

Situation                                    | Best fit   | Why
Modeling entities with state (users, orders) | OOP        | Encapsulation protects invariants
Data transformations (filtering, mapping)    | Functional | Pure functions are composable and testable
Scripts, CLIs, one-off tasks                 | Procedural | Direct, low overhead
React components                             | Functional | UI as a function of state
Express middleware                           | Functional | (req, res, next) => {} is function composition
Database models / ORMs                       | OOP        | Objects map to rows with behavior
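The Express row deserves a closer look. Here's a hedged sketch of the middleware pattern — the Req, Res, and compose names are made up for illustration and are not Express's actual internals:

```typescript
// Sketch of the middleware pattern: each function can act, then pass control on
type Req = { url: string; user?: string }
type Res = { body?: string }
type Middleware = (req: Req, res: Res, next: () => void) => void

// compose chains middlewares: calling next() invokes the next one in line
const compose =
  (...middlewares: Middleware[]) =>
  (req: Req, res: Res) => {
    let i = 0
    const next = () => {
      const mw = middlewares[i++]
      if (mw) mw(req, res, next)
    }
    next()
  }

const auth: Middleware = (req, _res, next) => {
  req.user = "alice" // pretend we authenticated the request
  next()
}

const handler: Middleware = (req, res) => {
  res.body = `Hello, ${req.user}`
}

const app = compose(auth, handler)
const res: Res = {}
app({ url: "/" }, res)
console.log(res.body) // "Hello, alice"
```

Each middleware is just a function, and the pipeline is just function composition with an explicit continuation — which is why the table files it under functional.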

The worst codebases I've seen force everything into one paradigm. OOP zealots create UserManagerFactoryService classes for what should be a function. Functional purists avoid all mutation even when an in-place sort is the right call. Pragmatic code uses the right tool for the context.


Compilation vs Interpretation

Understanding how your code actually runs helps you debug, optimize, and choose the right tools.

Compiled languages (C, Go, Rust) are transformed entirely into machine code before execution. The compiler reads your source, checks types, optimizes, and produces a binary. At runtime, the CPU executes that binary directly. Compilation is slow; execution is fast.

Interpreted languages (Python, early JavaScript) are executed by an interpreter at runtime. There's no separate compilation step—you write code and run it immediately. The classic description is "line by line," though most modern interpreters actually parse the whole source into bytecode or an AST first. Execution is slower because the interpreter does work at runtime that a compiler does ahead of time.

Modern reality is more nuanced. Most languages today sit somewhere in between:

JavaScript engines (V8, JavaScriptCore, SpiderMonkey) all use Just-In-Time (JIT) compilation. The engine first parses your code into an AST, compiles it to bytecode, and interprets the bytecode. As functions get called repeatedly ("hot" functions), the JIT compiler kicks in and compiles them to optimized machine code.

V8's pipeline:

  1. Ignition (interpreter) — executes bytecode immediately. Fast startup.
  2. TurboFan (optimizing compiler) — compiles hot functions to machine code using type feedback. If assumptions are violated (e.g., a variable that was always a number becomes a string), TurboFan deoptimizes back to bytecode.

This is why consistent types matter in JavaScript performance:

// V8 can optimize this — add() always receives numbers
function add(a, b) {
  return a + b
}
add(1, 2)
add(3, 4)
add(5, 6)
 
// This can trigger deoptimization — the arguments are suddenly strings
add("hello", "world")

Python uses a similar but simpler model: CPython compiles to bytecode (cached in .pyc files) and interprets it. Standard CPython has no JIT by default (an experimental one landed in 3.13, behind a build flag), which is why Python is slower for compute-heavy tasks. PyPy, an alternative Python runtime, adds JIT compilation and can be significantly faster.


Stack vs Heap

Every program uses two regions of memory: the stack and the heap. Understanding the difference explains why some operations are fast and some aren't, why certain bugs happen, and how garbage collection works.

The stack is a LIFO (last-in, first-out) structure that stores function call frames. When a function is called, its local variables and return address are pushed onto the stack. When it returns, they're popped off. Stack allocation is essentially free—it's just moving a pointer.

The heap is a large, unstructured pool of memory for objects that outlive a single function call. When you create an object, array, or closure, it's allocated on the heap. Heap allocation is more expensive because the runtime must find free space and track what's still in use.

function calculate() {
  const x = 42 // stack — primitive, local scope
  const y = x + 8 // stack — primitive, local scope
  const user = {
    // heap — object, may be referenced elsewhere
    name: "Alice",
    score: y,
  }
  return user // reference to heap object survives the function
}

When calculate() returns, x and y are gone (popped off the stack). But user lives on the heap because the caller still has a reference to it.


Garbage Collection

In languages like JavaScript, Python, and Java, you don't manually free memory. The garbage collector (GC) automatically reclaims heap memory that's no longer reachable.

V8 uses a generational garbage collector based on the observation that most objects die young:

  • Young generation (nursery): New objects are allocated here. A fast "minor GC" (Scavenge) runs frequently, copying surviving objects to an older space.
  • Old generation: Objects that survive multiple minor GCs are promoted here. A slower "major GC" (Mark-Sweep-Compact) runs less frequently.

The practical implications:

  1. Short-lived objects are cheap. Creating temporary objects in a function is fine—the young generation GC handles them efficiently.
  2. Long-lived objects should be stable. Objects that persist (caches, global state) end up in the old generation. Constantly creating and discarding long-lived objects triggers expensive major GCs.
  3. Memory leaks happen when references persist unintentionally. A forgotten event listener, a closure that captures a large scope, a growing array that's never trimmed—these keep objects reachable and prevent collection.

// Memory leak: listener is never removed, keeps handler (and its closure) alive
function setup() {
  const hugeData = new Array(1_000_000).fill("x")
 
  window.addEventListener("scroll", () => {
    console.log(hugeData.length)
  })
}

The closure captures hugeData. As long as the scroll listener exists, hugeData can't be garbage collected. Multiply this across components that mount and unmount, and you get the slow memory creep that plagues long-running SPAs.


Type Systems

A type system is a set of rules that assigns types to values and checks that operations are valid for those types.

Dynamically typed (JavaScript, Python): Types are checked at runtime. A variable can hold any type at any time. Errors surface when the code runs.

let x = 42
x = "hello" // fine in JavaScript
x.toFixed(2) // TypeError at runtime — string doesn't have toFixed

Statically typed (TypeScript, Go, Rust, Java): Types are checked at compile time. The compiler rejects programs with type errors before they run.

let x: number = 42
x = "hello" // compile error — Type 'string' is not assignable to type 'number'

TypeScript is interesting because it's a statically typed layer on top of a dynamically typed language. The types exist only at compile time—they're erased before the code runs in the JavaScript engine. This means TypeScript catches errors early without changing the runtime behavior.
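Erasure is easy to see in a small example: the annotations below exist only for the compiler, and the emitted JavaScript is the same code minus the types.

```typescript
// TypeScript source — the annotations are compile-time only
const greet = (name: string): string => `Hello, ${name}`

greet("Alice") // "Hello, Alice"

// After compilation, the emitted JavaScript is simply:
//   const greet = (name) => `Hello, ${name}`
// No type information survives to the runtime, and no runtime cost is added.
```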

The tradeoff:

Aspect             | Dynamic typing                | Static typing
Speed of writing   | Faster initially              | Slower initially
Refactoring safety | Low — errors found at runtime | High — compiler catches mismatches
IDE support        | Limited                       | Autocomplete, go-to-definition, inline errors
Runtime overhead   | Type checks at runtime        | Zero — types are erased

For large codebases with multiple contributors, static typing pays for itself quickly. The upfront cost of annotating types is small compared to the cost of debugging a TypeError: Cannot read properties of undefined in production.


The Pragmatic Takeaway

Programming paradigms aren't religions—they're tools. OOP works when you're modeling entities with state and behavior. Functional works when you're transforming data. Procedural works when you need directness and simplicity. The best code uses all three where they fit.

Understanding what happens below your code—JIT compilation, stack vs heap, garbage collection, type systems—isn't about writing "closer to the metal." It's about making informed decisions. When you know that V8 deoptimizes on type changes, you write more consistent code. When you know that closures capture their enclosing scope, you think twice about what you close over. When you know that the heap is where memory leaks hide, you know where to look.

The engineers I learn from most don't identify as "OOP developers" or "functional programmers." They identify as problem solvers who pick the right paradigm for the problem. That flexibility comes from understanding the foundations, not from loyalty to a style.