Computer Networks and the Web Stack

Mar 18, 2026
Computer Science

Every web application I've built sits on top of networking, and for years I treated that layer as magic. HTTP requests went out, responses came back, and I didn't think about what happened in between. Then I started debugging CORS errors, latency spikes, TLS certificate issues, and WebSocket disconnections—and realized that understanding the network layer isn't optional. It's the difference between cargo-culting configuration and actually knowing what you're configuring.

This post covers the networking fundamentals that matter for web engineers: the protocols, the layers, and the practical knowledge that helps you debug, optimize, and build.


The Layered Model

Network communication is organized in layers. Each layer handles one concern and provides a service to the layer above it. The two models you'll encounter are the OSI model (7 layers, academic) and the TCP/IP model (4 layers, practical). The TCP/IP model is what the internet actually uses.

When you make an HTTP request, your data passes down through these layers. Each layer wraps the data in its own header (encapsulation). On the receiving end, each layer unwraps its header and passes the payload up.
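Encapsulation can be sketched in a few lines. This is a toy illustration, not real wire formats; the bracketed header strings are placeholders:

```python
# Encapsulation in miniature: each layer prepends its own header to the
# payload handed down from the layer above. Headers here are fake stand-ins.
def wrap(header: bytes, payload: bytes) -> bytes:
    return header + payload

http_msg  = b"GET / HTTP/1.1\r\n\r\n"       # application layer
tcp_seg   = wrap(b"[tcp hdr]", http_msg)    # transport layer
ip_packet = wrap(b"[ip hdr]", tcp_seg)      # internet layer
frame     = wrap(b"[eth hdr]", ip_packet)   # link layer

# The receiver strips each header in reverse order and passes the payload up.
print(frame)
```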

Why layers matter: They let you reason about problems at the right level. A "connection refused" error is at the transport layer (TCP). A "404 Not Found" is at the application layer (HTTP). A "DNS resolution failed" is at the application layer (DNS). Knowing which layer to look at saves hours of debugging.


IP — The Internet Layer

IP (Internet Protocol) is responsible for addressing and routing. Every device on the internet has an IP address, and IP figures out how to get a packet from one address to another through a series of routers.

IPv4 addresses are 32-bit numbers written as four octets: 192.168.1.1. That allows roughly 4.3 billion addresses, and the free pool was exhausted years ago. IPv6 uses 128-bit addresses (2001:0db8:85a3::8a2e:0370:7334), about 3.4 × 10^38 of them, which is effectively inexhaustible.

Key properties of IP:

  • Best-effort delivery: Packets may be lost, duplicated, or arrive out of order. IP doesn't guarantee anything.
  • Routing: Each router examines the destination IP and forwards the packet toward its destination. The path may change between packets.
  • No connection state: IP is stateless. Each packet is independent.

DNS — Translating Names to Addresses

Humans use domain names (google.com). Machines use IP addresses. DNS (Domain Name System) bridges the gap.

When you type example.com in a browser:

  1. The browser checks its DNS cache.
  2. If not cached, the OS resolver checks its cache.
  3. If not cached, a query goes to the configured DNS resolver (your ISP's, or 8.8.8.8 for Google's).
  4. The resolver walks the DNS hierarchy: root servers → .com TLD servers → example.com authoritative servers.
  5. The authoritative server returns the IP address.
  6. The result is cached at every level (with a TTL).
You can watch this chain with dig:

# See the full DNS resolution chain from the root
dig example.com +trace

# Query the A record via your configured resolver
dig example.com A
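From application code, all of this is hidden behind a single resolver call. A minimal sketch using the OS resolver (which may answer from its cache):

```python
import socket

def resolve(host: str) -> list:
    """Resolve a hostname via the OS resolver, the way most HTTP clients do."""
    # Each result is (family, type, proto, canonname, sockaddr);
    # sockaddr[0] is the IP address string.
    infos = socket.getaddrinfo(host, None, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

print(resolve("localhost"))  # typically ['127.0.0.1', '::1'] or just ['127.0.0.1']
```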

DNS failures are common. TTL-based caching means changes propagate slowly. A DNS misconfiguration can make your site unreachable even though the server is running fine. Always check DNS first when debugging "site is down" reports.


TCP — Reliable Transport

TCP (Transmission Control Protocol) provides reliable, ordered, connection-oriented communication on top of IP's unreliable delivery.

The Three-Way Handshake

Before data flows, TCP establishes a connection:

Client → Server:  SYN (sequence=100)
Server → Client:  SYN-ACK (sequence=300, ack=101)
Client → Server:  ACK (ack=301)
→ Connection established

Fully establishing the connection takes one and a half round trips, but the client can send data immediately after its ACK, so the practical cost is one full round trip before any application data flows. On a connection with a 50ms round-trip time, that's 50ms spent before the first byte of the request. For HTTPS, add another 1-2 round trips for the TLS handshake. This is why connection reuse (HTTP keep-alive, connection pooling) matters.
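The handshake cost is visible from plain sockets: connect() returns only after SYN/SYN-ACK/ACK completes. A self-contained sketch against a local listener (loopback, so the measured time is tiny; real round trips are far longer):

```python
import socket
import time

def connect_time(host, port):
    """Time a TCP connect; the cost is roughly one handshake round trip."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass
    return time.perf_counter() - start

# Local listener so the example runs anywhere.
server = socket.socket()
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen()
port = server.getsockname()[1]

elapsed = connect_time("127.0.0.1", port)
print(f"connect took {elapsed * 1000:.3f} ms")
server.close()
```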

How TCP Guarantees Reliability

  • Sequence numbers: Each byte of data is numbered. The receiver knows if something is missing or out of order.
  • Acknowledgments: The receiver confirms what it's received. If the sender doesn't get an ACK within a timeout, it retransmits.
  • Flow control: The receiver advertises a window size—how much data it can buffer. The sender won't overwhelm a slow receiver.
  • Congestion control: TCP starts slow (slow start) and increases the sending rate until it detects congestion (packet loss), then backs off. This is why new connections are slow initially—they need to "ramp up."

TCP Performance Implications

  • Head-of-line blocking: If packet 3 is lost, packets 4 and 5 (which arrived fine) must wait until packet 3 is retransmitted and received. Everything stalls behind the lost packet.
  • Slow start: New connections start with a small congestion window. It takes several round trips to reach full throughput. This is why HTTP/2 multiplexing (one connection, many streams) outperforms HTTP/1.1's multiple connections.

UDP — Fast, Unreliable Transport

UDP (User Datagram Protocol) is the opposite of TCP: no connection, no reliability, no ordering. You send a packet and hope it arrives.

             TCP                                     UDP
Connection   Yes (handshake)                         No
Reliability  Guaranteed delivery                     Best-effort
Ordering     Guaranteed                              Not guaranteed
Overhead     High (headers, state, retransmission)   Low
Use case     HTTP, email, file transfer              Video streaming, gaming, DNS

Why use UDP? When speed matters more than perfection. In a video call, a dropped frame is better than a delayed one—by the time TCP retransmits the lost packet, the moment has passed. DNS uses UDP because queries are small (one packet) and latency matters more than guaranteed delivery (just retry).
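The no-handshake model shows up directly in socket code: a sender can fire a datagram at any address with no prior setup. A sketch over loopback (where the datagram does arrive, though UDP never promised it would):

```python
import socket

# Receiver: bind a UDP socket to a free local port.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
recv.settimeout(5)
port = recv.getsockname()[1]

# Sender: no connect(), no handshake -- just send a datagram and move on.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"hello", ("127.0.0.1", port))

data, addr = recv.recvfrom(1024)
print(data)   # arrived this time; loopback doesn't drop packets
```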

QUIC, the protocol behind HTTP/3, is built on UDP. It implements its own reliability and congestion control in userspace, avoiding TCP's head-of-line blocking by allowing independent streams within a single connection.


HTTP — The Application Protocol

HTTP (Hypertext Transfer Protocol) is the language browsers and servers speak. Understanding it thoroughly is probably the single most useful networking skill for a web engineer.

Request/Response

GET /api/users?page=1 HTTP/1.1
Host: example.com
Accept: application/json
Authorization: Bearer eyJhbGciOi...

HTTP/1.1 200 OK
Content-Type: application/json
Cache-Control: max-age=3600
Content-Length: 245

[{"id": 1, "name": "Alice"}, ...]

Methods

Method   Purpose                                     Idempotent   Safe
GET      Retrieve data                               Yes          Yes
POST     Create resource / trigger action            No           No
PUT      Replace resource entirely                   Yes          No
PATCH    Partial update                              No*          No
DELETE   Remove resource                             Yes          No
HEAD     Like GET but no body                        Yes          Yes
OPTIONS  Discover allowed methods (CORS preflight)   Yes          Yes

*PATCH can be idempotent depending on implementation.

Idempotent means calling it multiple times has the same effect as calling it once. GET, PUT, and DELETE are idempotent—retrying them is safe. POST is not—retrying a payment POST could charge twice, which is why idempotency keys exist.

Status Codes

The ranges tell you where to look for the problem:

  • 2xx: Success. 200 OK, 201 Created, 204 No Content.
  • 3xx: Redirection. 301 Moved Permanently, 304 Not Modified (cached).
  • 4xx: Client error. 400 Bad Request, 401 Unauthorized, 403 Forbidden, 404 Not Found, 429 Too Many Requests.
  • 5xx: Server error. 500 Internal Server Error, 502 Bad Gateway, 503 Service Unavailable, 504 Gateway Timeout.

Caching

HTTP caching is the most impactful performance optimization for web applications.

  • Cache-Control: max-age=3600 — the response is fresh for 3600 seconds. No request is made during that time.
  • ETag — a hash of the content. On subsequent requests, the client sends If-None-Match: <etag>. If the content hasn't changed, the server responds with 304 Not Modified (no body).
  • Vary — tells caches that the response varies by certain request headers (e.g., Vary: Accept-Encoding means the gzipped and uncompressed versions are different cache entries).
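The ETag revalidation flow fits in a few lines. A sketch of the server side (the hashing scheme is illustrative; any stable content hash works):

```python
import hashlib
from typing import Optional

def etag_for(body: bytes) -> str:
    # The surrounding quotes are part of the ETag header syntax.
    return '"' + hashlib.sha256(body).hexdigest()[:16] + '"'

def respond(body: bytes, if_none_match: Optional[str]):
    tag = etag_for(body)
    if if_none_match == tag:
        return 304, {"ETag": tag}, b""     # unchanged: headers only, no body
    return 200, {"ETag": tag}, body

status1, headers, body1 = respond(b"<html>...</html>", None)        # first fetch
status2, _, body2 = respond(b"<html>...</html>", headers["ETag"])   # revalidation
print(status1, status2, len(body2))   # 200 304 0
```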

CDNs are HTTP caches deployed globally. When you put a CDN in front of your API, it caches responses based on Cache-Control headers and serves them from edge locations near the user. A properly cached response is served in <50ms regardless of where your origin server is.


TLS/HTTPS

TLS (Transport Layer Security) encrypts the communication between client and server. HTTPS is HTTP over TLS.

The TLS Handshake

In TLS 1.3, the client opens with a ClientHello carrying its supported ciphers and a key share; the server answers with its ServerHello, its own key share, and its certificate, and both sides derive a shared secret. The certificate proves the server's identity. It's signed by a Certificate Authority (CA) that your browser trusts. The key exchange establishes a shared secret for symmetric encryption (the actual data encryption uses AES, which is fast, not RSA, which is slow).

TLS adds latency: 1-2 extra round trips for the handshake. TLS 1.3 reduces this to 1 round trip, and supports 0-RTT resumption for repeat connections (the client sends encrypted data with its first message).
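On the client side, Python's ssl module can refuse anything older than TLS 1.3, guaranteeing the 1-RTT handshake. A small sketch:

```python
import ssl

# Build a client context that only negotiates TLS 1.3, so every
# handshake is the 1-round-trip variant.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

print(ctx.minimum_version)
```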


CORS

Cross-Origin Resource Sharing is the browser's mechanism for controlling which origins can access your API. It's not a server security feature—it's a browser enforcement mechanism.

When your frontend at app.example.com makes a request to api.example.com, the browser enforces the same-origin policy. The server must explicitly allow the cross-origin request via response headers:

Access-Control-Allow-Origin: https://app.example.com
Access-Control-Allow-Methods: GET, POST, PUT, DELETE
Access-Control-Allow-Headers: Content-Type, Authorization
Access-Control-Max-Age: 86400

Preflight requests: For "non-simple" requests (those with custom headers, methods other than GET, HEAD, or POST, or content types beyond the simple form types), the browser sends an OPTIONS request first to check permissions. This adds a round trip. Access-Control-Max-Age caches the preflight result to avoid repeating it.

The most common CORS mistake: setting Access-Control-Allow-Origin: * in development and wondering why cookies don't work. Wildcard origins don't allow credentials. You must specify the exact origin.
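Server-side, CORS reduces to a small decision about which headers to emit. A sketch (the origin allowlist is illustrative):

```python
ALLOWED_ORIGINS = {"https://app.example.com"}

def cors_headers(origin, with_credentials=False):
    """Return the CORS response headers for a request's Origin, if allowed."""
    if origin not in ALLOWED_ORIGINS:
        return {}    # no CORS headers: the browser blocks the cross-origin read
    headers = {
        "Access-Control-Allow-Origin": origin,   # exact origin, never '*'
        "Vary": "Origin",                        # caches must key on Origin
    }
    if with_credentials:
        # Credentialed requests require an exact origin; '*' is rejected.
        headers["Access-Control-Allow-Credentials"] = "true"
    return headers

print(cors_headers("https://app.example.com", with_credentials=True))
print(cors_headers("https://evil.example"))   # {}
```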


WebSockets

HTTP is request-response: the client asks, the server answers. WebSockets provide a persistent, bidirectional channel. Either side can send messages at any time after the connection is established.

The WebSocket connection starts as an HTTP request with an Upgrade header:

GET /chat HTTP/1.1
Host: example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=

After the upgrade, the connection is no longer HTTP—it's a raw bidirectional stream.
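The Sec-WebSocket-Accept value is derived from the client's key using a fixed GUID from RFC 6455, which proves the server actually understood the WebSocket upgrade:

```python
import base64
import hashlib

# RFC 6455: append this GUID to the client's key, SHA-1 it, base64-encode it.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def accept_key(client_key: str) -> str:
    digest = hashlib.sha1((client_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# The sample key from the RFC yields the well-known accept value.
print(accept_key("dGhlIHNhbXBsZSBub25jZQ=="))  # s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```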

Use WebSockets for: real-time features (chat, live notifications, collaborative editing, live dashboards, multiplayer games). Don't use them for everything—HTTP with polling or Server-Sent Events (SSE) is simpler and sufficient for most "near-real-time" needs.

SSE (Server-Sent Events) is a simpler alternative for one-way server-to-client streaming. It uses a regular HTTP connection that stays open, with the server pushing events. SSE is what most AI chat interfaces use for streaming responses.
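The SSE wire format is simple enough to frame by hand: each event is a few `field: value` lines, and a blank line terminates it. A sketch:

```python
def sse_event(data: str, event=None) -> str:
    """Frame one Server-Sent Event; the trailing blank line ends the event."""
    lines = []
    if event:
        lines.append(f"event: {event}")
    # Multi-line payloads become repeated data: fields.
    lines.extend(f"data: {line}" for line in data.splitlines())
    return "\n".join(lines) + "\n\n"

print(repr(sse_event("hello")))                 # 'data: hello\n\n'
print(sse_event("line1\nline2", event="token"))
```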


What Happens When You Type a URL

Putting it all together:

  1. DNS resolution: Browser resolves example.com to an IP address.
  2. TCP handshake: Client and server establish a TCP connection (SYN, SYN-ACK, ACK).
  3. TLS handshake: Client and server negotiate encryption (if HTTPS).
  4. HTTP request: Client sends GET / HTTP/1.1 with headers.
  5. Server processing: Server routes the request, queries databases, renders HTML.
  6. HTTP response: Server sends back status code, headers, and body.
  7. Rendering: Browser parses HTML, discovers CSS/JS/images, makes additional requests (steps 1-6 again for each), builds the DOM and CSSOM, paints pixels.
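Steps 2 through 6 can be exercised end-to-end against a local server (DNS and TLS are skipped here, but the TCP connect and the HTTP request/response exchange are real):

```python
import http.server
import threading
import urllib.request

class Hello(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<h1>hello</h1>"
        self.send_response(200)                       # status line
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                        # response body

    def log_message(self, *args):                     # silence request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Hello)
threading.Thread(target=server.serve_forever, daemon=True).start()

# urlopen performs the TCP handshake, sends the GET, and parses the response.
with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/") as resp:
    status, body = resp.status, resp.read()

print(status, body)   # 200 b'<h1>hello</h1>'
server.shutdown()
```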

Each step has latency. Each step can fail. Understanding the full chain helps you optimize and debug:

  • Slow DNS? → Use DNS prefetching (<link rel="dns-prefetch">).
  • Slow TLS? → Ensure TLS 1.3, use session resumption.
  • Slow server? → Add caching, optimize queries.
  • Too many requests? → Bundle assets, use HTTP/2 multiplexing, lazy load.
  • Large payloads? → Compress (gzip/brotli), paginate, use CDN.

The Pragmatic Takeaway

You don't need to implement TCP from scratch. But you do need to understand that every fetch() call in your code triggers a cascade of protocol negotiations, packet transmissions, and potential failure points.

When a user reports "the site is slow," the answer might be at any layer: DNS is slow, the TCP connection is being re-established on every request, TLS negotiation is using outdated ciphers, the server is missing cache headers, the response payload is too large, or the CDN isn't configured for that route.

The engineers who resolve these issues quickly aren't the ones who memorize RFCs. They're the ones who have a mental model of the network stack and know which layer to investigate based on the symptoms. That mental model starts with understanding what each layer does and how they interact.