Analysis of Small-World Networks

I recently watched Veritasium's video on small-world networks. This notebook explores the concepts it covers, divided into three main sections:

  1. Degrees of Separation: Demonstrating the impact of shortcuts on network path length (Random vs. Ring vs. Watts-Strogatz).
  2. SIR Models: Simulating disease spread on these networks.
  3. Prisoner's Dilemma: Simulating game theory dynamics on these networks.
In [481]:
# ##############################################################################
# ## 0. Imports and Setup
# ##############################################################################

import matplotlib.pyplot as plt
import numpy as np
import networkx as nx
import time
from collections import Counter, defaultdict
from typing import Callable, List, Tuple
import random
import itertools
from matplotlib.animation import FuncAnimation
from IPython.display import HTML

# Set a consistent plot style
plt.style.use("seaborn-v0_8-darkgrid")
In [482]:
# ##############################################################################
# ## General Helper Functions
# ##############################################################################


def create_ws_graph(N: int, k: int, p: float, seed: int = 42) -> nx.Graph:
    """
    Creates a Watts-Strogatz small-world graph.
    Ensures k is even as required by the nx.watts_strogatz_graph function.
    """
    k = int(k)
    if k % 2 != 0:
        k += 1  # Ensure k is even
    return nx.watts_strogatz_graph(n=int(N), k=k, p=float(p), seed=seed)


def get_local_and_shortcut_edges(
    G: nx.Graph, shortcut_prob: float, seed: int = 42
) -> Tuple[List[Tuple[int, int]], List[Tuple[int, int]]]:
    random.seed(seed)
    local_edges = []
    shortcut_edges = []
    for u, v in G.edges():
        if abs(u - v) == 1 or abs(u - v) == G.number_of_nodes() - 1:
            local_edges.append((u, v))
        else:
            if random.random() < shortcut_prob:
                shortcut_edges.append((u, v))
            else:
                local_edges.append((u, v))
    return local_edges, shortcut_edges


def draw_network_states(
    G, state_vector, ax, title="", local_edges=None, shortcut_edges=None
):
    """
    Helper function to draw a single frame of a network animation.
    """
    ax.clear()

    pos = nx.circular_layout(G)
    all_edges = list(G.edges())

    # Map states to colors
    # Map states to colors: 0=blue (susceptible), 1=red (infected), 2=gray (recovered)
    color_map = {0: "#1f77b4", 1: "#d62728", 2: "#7f7f7f"}
    colors = [color_map.get(s, "#000000") for s in state_vector]

    # Draw edges
    if local_edges is not None:
        nx.draw_networkx_edges(
            G,
            pos,
            edgelist=local_edges,
            width=0.5,
            alpha=0.7,
            edge_color="#9aa0a6",
            ax=ax,
            # connectionstyle="arc3,rad=0.1",
            # arrows=True,  # <--- FORCE FancyArrowPatch
            # arrowstyle="-",  # <--- HIDE the arrowhead
        )
    if shortcut_edges is not None:
        nx.draw_networkx_edges(
            G,
            pos,
            edgelist=shortcut_edges,
            width=0.9,
            alpha=0.9,
            edge_color="#ff9900",
            ax=ax,
        )

    # Draw all edges if specific types aren't provided
    if local_edges is None and shortcut_edges is None:
        nx.draw_networkx_edges(G, pos, alpha=0.2, ax=ax, edgelist=all_edges)

    # Draw nodes
    nx.draw_networkx_nodes(
        G,
        pos,
        node_size=30,
        node_color=colors,
        alpha=0.8,
        linewidths=0.0,
        ax=ax,
    )

    ax.set_title(title)
    ax.axis("off")
    ax.set_xlim(-1.1, 1.1)
    ax.set_ylim(-1.1, 1.1)


def draw_graph_with_all_neighbors(G):
    # Use circular layout for the graph
    pos = nx.circular_layout(G)
    fig, ax = plt.subplots(figsize=(6, 6))  # Create a figure and axes
    nx.draw(
        G,
        pos,
        with_labels=True,
        node_color="#1f77b4",
        edge_color="#9aa0a6",
        node_size=100,
        font_color="white",
        ax=ax,  # Pass the axes object to nx.draw
    )
    ax.set_title("Graph with All Neighbors")  # Add a title to the plot
    ax.axis("off")  # Turn off the axis
    plt.show()

1. Degrees of Separation

1.1. Naive Model (Random Connections)

This model assumes everyone is equally likely to be connected with everyone else, randomly.

In [483]:
# %%
# number of people you could be connected to via n degrees of separation
degrees = np.arange(1, 6)
world_population = 1_000_000  # Using 1M for this model, not 8B


def degrees_of_separation(n, avg_connections=150):
    if n < 1:
        return 0
    return int(avg_connections) ** int(n)  # Python big ints


connections = []
for i in degrees:
    con = degrees_of_separation(i, avg_connections=100)
    print(f"Degrees of Separation: {i}, Connections: {con}")
    connections.append(con)

plt.figure()
plt.plot(degrees, connections, marker="o")
plt.axhline(
    y=world_population,
    color="r",
    linestyle="--",
    label=f"World Population (~{world_population:,})",
)
plt.legend()
plt.yscale("log")
plt.xlabel("Degrees of Separation (n)")
plt.ylabel("Number of Connections (log scale)")
plt.title("Naive Connections vs Degrees of Separation")
plt.show()
Degrees of Separation: 1, Connections: 100
Degrees of Separation: 2, Connections: 10000
Degrees of Separation: 3, Connections: 1000000
Degrees of Separation: 4, Connections: 100000000
Degrees of Separation: 5, Connections: 10000000000

This is why the six degrees of separation theory seems to work. But it assumes everyone is equally likely to be connected to everyone else, at random. In reality, some people have far more connections than others, and social networks are geographically clustered.
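The back-of-envelope version of this argument can be run directly. With a hypothetical ~150 acquaintances per person (Dunbar's number) and ~8 billion people (illustrative figures, not the ones used in the plot above), the smallest n with 150^n ≥ population is:

```python
import math

# Illustrative figures: ~150 acquaintances each, ~8 billion people.
avg_connections = 150
population = 8_000_000_000

# Smallest n with avg_connections**n >= population
n = math.ceil(math.log(population) / math.log(avg_connections))
print(n)  # 5
```

Under the random-mixing assumption, five hops already cover the planet; the rest of the notebook is about why that assumption is wrong and what rescues the result anyway.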

1.2. Ring Lattice Model (High Clustering, No Shortcuts)

Let's assume everyone is arranged in a straight line (or ring), with each person connected to their 100 nearest neighbours. How many people are within n degrees of separation?

The world is very far from random. People naturally cluster geographically. Most of the people you know live close to you, and they also have a higher probability of knowing each other. If you calculate the fraction of people you know who also know each other, that is a measure of the clustering in the network.
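That fraction is exactly the (local) clustering coefficient, which networkx can measure directly. A quick sketch with illustrative sizes (not the values used later in this notebook), comparing a ring lattice against a random graph of the same density:

```python
import networkx as nx

N, k = 1000, 10  # illustrative sizes
ring = nx.watts_strogatz_graph(N, k, p=0.0, seed=1)  # pure ring lattice
rand = nx.gnm_random_graph(N, ring.number_of_edges(), seed=1)

# Fraction of a node's neighbours who also know each other, averaged:
print(f"ring lattice clustering: {nx.average_clustering(ring):.3f}")
print(f"random graph clustering: {nx.average_clustering(rand):.3f}")
```

For the p = 0 ring the coefficient is exactly 3(k-2)/(4(k-1)), i.e. 2/3 at k = 10, while the random graph's is near k/N = 0.01. Clustering is the property the naive model completely misses.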

So let's try a model with a high degree of clustering. Imagine all people are arranged into a circle, and say each person knows the 100 people closest to them (50 to the left and 50 to the right). In this case, the furthest person you can connect to in one step is 50 people away. Connecting to someone on the other side of the planet would take millions of steps.

In [484]:
def linear_arrangement_degrees(n_degrees: int, N: int, k: int = 100) -> int:
    """
    Nodes are on a ring. Each node connects to k nearest neighbors (k/2 each side).
    Unique nodes reachable within n steps from one start node.
    """
    if n_degrees < 0:
        return 0
    # Each step, you can reach k *more* people (k/2 on each side)
    # Step 0: 1 (self)
    # Step 1: 1 + k
    # Step 2: 1 + k + k = 1 + 2k
    # Step n: 1 + n*k
    return min(1 + int(k) * int(n_degrees), int(N))


# Example compare vs naive branching
k = 100
ring_reach = np.array(
    [linear_arrangement_degrees(d, world_population, k) for d in degrees]
)

plt.figure()
plt.plot(degrees, ring_reach, marker="o", label="Ring lattice reach (k=100)")
plt.yscale("log")
plt.xlabel("Degrees of Separation (n)")
plt.ylabel("Number of Reachable People (log scale)")
plt.axhline(
    world_population, ls="--", label=f"World pop (~{world_population:,})"
)
plt.legend()
plt.title("Reach vs Degrees of Separation (Ring Model)")
plt.show()

1.3. Watts-Strogatz Model (Clustering + Shortcuts)

This is the paradox of 6 degrees of separation. In a random network, everyone is connected through short chains. In a highly clustered network (like the ring), chains are very long.


Real social networks are both highly clustered and have short paths.


The trick is shortcuts. In real life, people do not just know their nearest neighbours. Some connections span long distances, acting as shortcuts that dramatically reduce the number of steps needed.
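Before the full reach simulation below, the shortcut effect can be checked in a few lines: rewiring even 1% of edges collapses the average path length while clustering barely moves. This sketch uses a small illustrative graph, not the notebook's N and k:

```python
import networkx as nx

N, k, seed = 500, 10, 42  # illustrative sizes
for p in (0.0, 0.01, 0.1):
    # connected_ variant retries until the sampled graph is connected
    G = nx.connected_watts_strogatz_graph(N, k, p, seed=seed)
    L = nx.average_shortest_path_length(G)
    C = nx.average_clustering(G)
    print(f"p={p:<5g}  avg path length={L:6.2f}  clustering={C:.3f}")
```

This is the Watts-Strogatz result in miniature: path length drops toward random-graph levels long before clustering does.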

In [485]:

# Build one WS graph per shortcut fraction, then compute cumulative reach by step via BFS shells.
# BFS stands for breadth-first search.
def _ws_reach_curve(
    max_degrees,
    connections_per_person=100,
    shortcut_fraction=0.01,
    N=100_000,
    seeds=(0, 1, 2),
):
    """
    Return list of average cumulative reach within 1..max_degrees steps
    for a Watts–Strogatz graph.
    """
    # Use the helper function to build the graph
    G = create_ws_graph(
        N=N, k=connections_per_person, p=shortcut_fraction, seed=42
    )

    curves = []
    L_runs = []
    for s in seeds:
        dist = nx.single_source_shortest_path_length(
            G, source=(s % N), cutoff=max_degrees
        )
        # count nodes at each exact distance
        shell = Counter(dist.values())  # {0:1, 1:..., 2:..., ...}
        cum = []
        total = 0
        for n in range(1, max_degrees + 1):
            total += shell.get(n, 0)
            cum.append(min(total, N))  # Cumulative count
        curves.append(np.array(cum, dtype=np.int64))

        # Calculate avg shortest path length for this seed's component
        if nx.is_connected(G):
            L = nx.average_shortest_path_length(G)
        else:
            C = max(nx.connected_components(G), key=len)
            if s in C:
                L = nx.average_shortest_path_length(G.subgraph(C))
            else:
                L = float("inf")  # Seed not in giant component
        L_runs.append(L)

    return np.mean(curves, axis=0), np.mean(L_runs)


# --- Run Simulation for Section 1.3 ---
N_model = 10_000  # Toy population for the network model
k_model = 100
shortcut_fractions = [0.0001, 0.001, 0.01, 0.1]
degrees_model = np.arange(1, 6)  # Max degrees to check

time0 = time.time()
connections_with_shortcuts_output = {}
avg_path_length_output = {}

for p in shortcut_fractions:
    print(f"Calculating for shortcut fraction p={p}...")
    connections_with_shortcuts, avg_path_length = _ws_reach_curve(
        max_degrees=int(degrees_model.max()),
        connections_per_person=k_model,
        shortcut_fraction=p,
        N=N_model,
        seeds=(0, 1, 2),  # Average over a few starting nodes
    )
    connections_with_shortcuts_output[p] = connections_with_shortcuts
    avg_path_length_output[p] = avg_path_length
print(f"Total time: {time.time() - time0:.2f} seconds")
Calculating for shortcut fraction p=0.0001...
Calculating for shortcut fraction p=0.001...
Calculating for shortcut fraction p=0.01...
Calculating for shortcut fraction p=0.1...
Total time: 9478.62 seconds
In [497]:
# --- Plotting for Section 1.3 ---

# 1. Plot Reach vs. Degrees
plt.figure(figsize=(10, 5))
for p in shortcut_fractions:
    plt.plot(
        degrees_model,
        connections_with_shortcuts_output[p],
        marker="o",
        label=f"Shortcut Fraction: {p:g}",
    )

# Add the ring model for comparison (scaled to this model's N)
ring_model_reach = np.array(
    [linear_arrangement_degrees(d, N_model, k_model) for d in degrees_model]
)
plt.plot(
    degrees_model,
    ring_model_reach,
    marker="s",
    color="gray",
    ls=":",
    label="Ring lattice (p=0)",
)

plt.axhline(
    y=N_model,
    color="r",
    linestyle="--",
    label=f"Model Population (~{N_model:,})",  # Corrected label
)
plt.legend()
plt.yscale("log")
plt.xlabel("Degrees of Separation (n)")
plt.ylabel("Number of Connections (log scale)")
plt.title("Connections vs Degrees with Shortcuts (Watts–Strogatz)")
plt.show()


# 2. Plot Avg Path Length vs. Shortcut Fraction
plt.figure(figsize=(10, 5))
ps_array = np.array(shortcut_fractions, dtype=float)
avg_lengths = np.array(
    [avg_path_length_output[p] for p in ps_array], dtype=float
)

plt.plot(ps_array, avg_lengths, marker="o")

# # Optional: Power law fit
# try:
#     log_ps = np.log(ps_array)
#     log_lengths = np.log(avg_lengths)
#     B_power, log_A = np.polyfit(log_ps, log_lengths, 1)
#     A_power = np.exp(log_A)

#     x_fit = np.logspace(
#         np.log10(ps_array.min()), np.log10(ps_array.max()), 100
#     )
#     y_fit = A_power * x_fit**B_power
#     plt.plot(
#         x_fit,
#         y_fit,
#         color="black",
#         linestyle="--",
#         label=f"Power law fit: {A_power:.1f}·x^{B_power:.2f}",
#     )
# except Exception as e:
#     print(f"Could not compute power law fit: {e}")


plt.xscale("log")
plt.xlabel("% of shortcuts (log scale)")
plt.ylabel("Degrees of separation (avg shortest path)")
plt.title("Small-world effect: path length vs shortcuts")
plt.grid(True, which="both", ls="--")
plt.legend()
plt.show()

2. SIR Disease Spread Models

Now let's move on to the 'strength of weak ties' concept, which can be demonstrated by how diseases spread: shortcuts (weak ties) dramatically change the dynamics.
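As a baseline for the network runs below, it helps to keep the well-mixed (mean-field) SIR dynamics in mind, where network structure is ignored entirely. A minimal discrete-time sketch; the parameter values are illustrative, not the ones used in the simulation below:

```python
def sir_well_mixed(beta_eff, gamma, N, I0, steps):
    """Mean-field SIR: everyone contacts everyone. beta_eff is the effective
    transmission rate per step (roughly beta * k for average degree k)."""
    S, I, R = N - I0, float(I0), 0.0
    history = [(S, I, R)]
    for _ in range(steps):
        new_inf = min(S, beta_eff * S * I / N)  # dS = -beta_eff * S * I / N
        new_rec = gamma * I                     # dR = +gamma * I
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        history.append((S, I, R))
    return history

hist = sir_well_mixed(beta_eff=0.5, gamma=0.1, N=5000, I0=10, steps=200)
print(f"peak infected: {max(i for _, i, _ in hist):.0f}")
print(f"final recovered: {hist[-1][2]:.0f}")
```

The network simulations below differ from this baseline precisely because contacts are clustered; heavily rewired networks behave roughly like the well-mixed case, while the ring does not.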

In [487]:
# --- Disease spread on small-world networks (SIR) ---


def sir_simulation(
    G,
    beta: float,
    gamma: float,
    steps: int,
    initial_infected: int = 10,
    rng_seed: int = 0,
):
    """
    Run a discrete-time SIR simulation on a given graph G.

    beta: per-contact infection prob per step
    gamma: per-step recovery prob
    returns dict with arrays S,I,R and final attack rate
    """
    rng = np.random.default_rng(rng_seed)
    nodes = np.array(G.nodes())
    N = len(nodes)
    if N == 0:
        return {
            "S": [0],
            "I": [0],
            "R": [0],
            "attack_rate": 0,
            "peak_I": 0,
            "time_to_peak": 0,
        }

    # states: 0=S, 1=I, 2=R
    state = np.zeros(N, dtype=np.int8)

    # Ensure initial_infected is not larger than N
    initial_infected = min(initial_infected, N)
    if initial_infected > 0:
        infected_idx = rng.choice(N, size=initial_infected, replace=False)
        state[infected_idx] = 1

    S, I, R = (
        [np.count_nonzero(state == 0)],
        [np.count_nonzero(state == 1)],
        [np.count_nonzero(state == 2)],
    )

    # adjacency list for speed
    nbrs = {
        u: np.fromiter(G.neighbors(u), dtype=int, count=G.degree(u))
        for u in G.nodes()
    }

    for t in range(steps):
        if I[-1] == 0:  # epidemic ended
            break

        infected_nodes = np.where(state == 1)[0]

        # 1. Infections
        new_infected = []
        if infected_nodes.size > 0:
            for u in infected_nodes:
                if u not in nbrs:
                    continue  # Node might have been removed
                sus_nbrs = nbrs[u][state[nbrs[u]] == 0]
                if sus_nbrs.size:
                    # each contact independent Bernoulli(beta)
                    trials = rng.random(sus_nbrs.size) < beta
                    if trials.any():
                        new_infected.append(sus_nbrs[trials])

        if new_infected:
            new_infected_flat = np.unique(np.concatenate(new_infected))
            state[new_infected_flat] = 1

        # 2. Recoveries
        if infected_nodes.size > 0:
            rec_trials = rng.random(infected_nodes.size) < gamma
            if rec_trials.any():
                state[infected_nodes[rec_trials]] = 2

        S.append(np.count_nonzero(state == 0))
        I.append(np.count_nonzero(state == 1))
        R.append(np.count_nonzero(state == 2))

    # Pad results if epidemic ended early
    while len(I) < steps + 1:
        S.append(S[-1])
        I.append(I[-1])
        R.append(R[-1])

    return {
        "S": np.array(S),
        "I": np.array(I),
        "R": np.array(R),
        "attack_rate": R[-1] / N,
        "peak_I": np.max(I),
        "time_to_peak": int(np.argmax(I)),
    }


# %%
# --- Run Simulation for Section 2 ---
N_sir = 5_000  # Smaller N for faster simulation
k_sir = 20  # average local degree
steps_sir = 100  # time steps
beta_sir = 0.1  # infection probability per contact per step
gamma_sir = 0.1  # recovery probability per step (R0 approx beta/gamma * k)
probs_sir = [0.0, 0.0001, 0.001, 0.01, 0.05, 0.1]  # shortcut fractions

print("Running SIR simulations...")
sir_results = {}
for p in probs_sir:
    print(f"  p={p:g}")
    # Use the helper function to build the graph
    G = create_ws_graph(N=N_sir, k=k_sir, p=p, seed=42)
    out = sir_simulation(
        G,
        beta=beta_sir,
        gamma=gamma_sir,
        steps=steps_sir,
        initial_infected=10,
        rng_seed=1,
    )
    sir_results[p] = out

print("SIR simulations complete.")

# For intuition:
R0_approx = (beta_sir / gamma_sir) * k_sir
print(f"Approx R0 ≈ {R0_approx:.2f}")
Running SIR simulations...
  p=0
  p=0.0001
  p=0.001
  p=0.01
  p=0.05
  p=0.1
SIR simulations complete.
Approx R0 ≈ 20.00
In [502]:
# --- Plotting for Section 2 ---

# 1. Plot infections over time
plt.figure(figsize=(10, 5))
for p in probs_sir:
    out = sir_results[p]
    t = np.arange(len(out["I"]))
    plt.plot(t, out["I"], label=f"p={p:g}")
plt.xlabel("Time step")
plt.ylabel("Infected")
plt.title(
    f"SIR: Infected vs Time (N={N_sir}, k={k_sir}, β={beta_sir}, γ={gamma_sir})"
)
plt.legend()
plt.grid(True, which="both", ls="--")
# plt.tight_layout()
plt.show()

# 2. Plot final attack rate vs shortcut fraction
attack_rates = [sir_results[p]["attack_rate"] for p in probs_sir]
peak_I = [sir_results[p]["peak_I"] for p in probs_sir]
time_to_peak = [sir_results[p]["time_to_peak"] for p in probs_sir]
x_probs = np.array(probs_sir)

# Avoid p=0 in log scale plot
x_plot = np.array([max(p, 1e-6) for p in probs_sir])  # 1e-6 as a proxy for 0

plt.figure(figsize=(10, 5))
plt.plot(x_plot, attack_rates, marker="o")
plt.xscale("log")
plt.ylim(0.5, 1.05)
plt.xlabel("% of shortcuts (log scale, 0 plotted at 1e-6)")
plt.ylabel("Final attack rate (Fraction infected)")
plt.title("SIR: Final Attack Rate vs Shortcuts")
plt.grid(True, which="both", ls="--")
# plt.tight_layout()
plt.show()

# 3. Optional: peak size and time to peak
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(7, 7), sharex=True)

ax1.plot(x_plot, peak_I, marker="o", color="tab:red")
ax1.set_xscale("log")
ax1.set_ylabel("Peak Infected")
ax1.set_title("SIR: Peak Infections and Time to Peak vs Shortcuts")
ax1.grid(True, which="both", ls="--")

ax2.plot(x_plot, time_to_peak, marker="o", color="tab:blue")
ax2.set_xscale("log")
ax2.set_xlabel("% of shortcuts (log scale, 0 plotted at 1e-6)")
ax2.set_ylabel("Time to Peak")
ax2.grid(True, which="both", ls="--")

plt.tight_layout()
plt.show()
In [489]:
# --- SIR Snapshot Visualization ---


def sir_with_snapshots(
    G,
    beta: float,
    gamma: float,
    steps: int,
    initial_infected: int = 10,
    rng_seed: int = 0,
    snapshot_every: int = 10,
    clustered_initial_infection: bool = False,
):
    """
    Run SIR sim and capture (time, state_array) snapshots.
    """
    rng = np.random.default_rng(rng_seed)
    N = G.number_of_nodes()
    nodes = np.array(G.nodes())
    node_map = {node_val: idx for idx, node_val in enumerate(nodes)}

    state = np.zeros(N, dtype=np.int8)  # 0=S, 1=I, 2=R
    initial_infected = min(initial_infected, N)

    if initial_infected > 0:
        if not clustered_initial_infection:
            state[rng.choice(N, size=initial_infected, replace=False)] = 1
        else:
            position = rng.choice(N, size=1, replace=False)
            # either side of position is infected
            half = initial_infected // 2
            left_indices = (position[0] - half + np.arange(half)) % N
            right_indices = (
                position[0] + np.arange(initial_infected - half)
            ) % N
            state[left_indices] = 1
            state[right_indices] = 1

    # adjacency as arrays for speed
    nbrs = {
        i: np.array([node_map[n] for n in G.neighbors(nodes[i])], dtype=int)
        for i in range(N)
    }
    snapshots = [(0, state.copy())]

    for t in range(1, steps + 1):

        # print(f"Step {t}...")

        infected_nodes = np.where(state == 1)[0]
        if infected_nodes.size == 0:
            break  # Epidemic over

        # 1. Infections
        cand = []
        for u in infected_nodes:
            # print(f"Processing infected node {u}...")
            sus = nbrs[u][state[nbrs[u]] == 0]
            # print("Neighbors of node:", nbrs[u], sus, sus.size)
            # print(beta, rng.random(sus.size), rng.random(sus.size)<beta, sus[rng.random(sus.size) < beta])
            if sus.size:
                cand.append(sus[rng.random(sus.size) < beta])
        if cand:
            state[np.unique(np.concatenate(cand))] = 1

        # 2. Recoveries
        rec = infected_nodes[rng.random(infected_nodes.size) < gamma]
        state[rec] = 2

        if t % snapshot_every == 0 or t == steps:
            snapshots.append((t, state.copy()))

    return snapshots  # list of (time, state_array)


def plot_snapshots_circular(
    G,
    snapshots,
    k: int,
    sample_size: int = 1200,
    layout_seed: int = 0,
    model="SIR",
):
    """
    Animate SIR snapshots on a circular layout.
    """
    N_orig = G.number_of_nodes()
    k2 = max(1, k // 2)

    # --- Setup Graph and Layout ---
    _, s0 = snapshots[0]
    infected0 = np.where(s0 == 1)[0]
    anchor = int(infected0[0]) if infected0.size else 0

    # Create a subgraph view
    size = min(sample_size, N_orig)
    half = size // 2
    # Node indices from original graph
    original_nodes = list(G.nodes())
    sample_node_indices = [
        (anchor + i) % N_orig for i in range(-half, size - half)
    ]
    sample_nodes = [
        original_nodes[i] for i in sample_node_indices
    ]  # change this to original_nodes if you want all nodes

    H = G.subgraph(sample_nodes).copy()

    # Map original node values to their index in the snapshot array
    node_to_state_idx = {
        node_val: idx for idx, node_val in enumerate(sample_nodes)
    }

    # Classify edges
    local_edges, shortcut_edges = [], []
    for u, v in H.edges():
        idx_u, idx_v = node_to_state_idx[u], node_to_state_idx[v]
        d = abs(idx_u - idx_v) % N_orig
        d = min(d, N_orig - d)
        (local_edges if d <= k2 else shortcut_edges).append((u, v))

    # --- Animation ---
    fig, ax = plt.subplots(figsize=(6, 6))
    ax.set_aspect("equal", adjustable="box")

    def update(frame_idx):
        t, state = snapshots[frame_idx]

        # `state` covers the whole graph in node order; WS nodes are labelled
        # 0..N-1, so a node's label is its index into `state`.
        subgraph_state = np.array(
            [state[n] for n in sample_nodes], dtype=np.int8
        )

        if model == "SIR":
            title = f"SIR snapshot at t={t} | locals=gray, shortcuts=orange (not all nodes shown)"
        else:
            title = f"Prisoner's Dilemma snapshot at t={t}"
        draw_network_states(
            H, subgraph_state, ax, title, local_edges, shortcut_edges
        )

    anim = FuncAnimation(
        fig, update, frames=len(snapshots), interval=600, repeat=True
    )
    plt.close(fig)
    return HTML(anim.to_jshtml())


# %%
# --- Run and display SIR animation ---
N_anim = 1000
k_anim = 10
p_anim = 0.1  # shortcut fraction
seed_anim = 42
beta_anim = 0.04  # infection prob per contact per step
gamma_anim = 0.10  # recovery prob per infected per step
steps_anim = 100

print("Building graph for animation...")
G_anim = create_ws_graph(N=N_anim, k=k_anim, p=p_anim, seed=seed_anim)

# draw_graph_with_all_neighbors(G_anim)

print("Running SIR simulation for snapshots...")
snapshots = sir_with_snapshots(
    G_anim,
    beta=beta_anim,
    gamma=gamma_anim,
    steps=steps_anim,
    initial_infected=10,
    rng_seed=1,
    snapshot_every=1,  # Capture every step
    clustered_initial_infection=True,
)

print("Generating animation...")
# Display the animation
# Note: the k passed to plot_snapshots_circular is only used to *classify*
# edges as local vs shortcut, so pass the same k_anim used to build the graph.
plot_snapshots_circular(
    G_anim, snapshots, k=k_anim, sample_size=200, layout_seed=0, model="SIR"
)
Building graph for animation...
Running SIR simulation for snapshots...
Generating animation...
Out[489]:

3. Prisoner's Dilemma on Networks

Now let's look at how these network structures play out in game-theory simulations with the prisoner's dilemma.

3.1. 1-v-1 Iterated Prisoner's Dilemma (Baseline)

First, let's define the basic game and strategies.
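One sanity check worth stating before the code: the payoff vector (R, S, T, P) = (3, 0, 5, 1) used below satisfies the two inequalities that make a game a prisoner's dilemma.

```python
R, S, T, P = 3, 0, 5, 1  # reward, sucker, temptation, punishment

# Defection must dominate in a one-shot game...
assert T > R > P > S
# ...and mutual cooperation must beat taking turns exploiting each other:
assert 2 * R > T + S
print("valid prisoner's dilemma payoffs")
```

Any payoffs violating either inequality would change the game: without 2R > T + S, for instance, alternating defection becomes the best joint outcome.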

In [503]:
# Actions: 1=Cooperate, 0=Defect
Action = int
# Strategy: f(my_history, opp_history, t) -> Action
Strategy = Callable[[List[Action], List[Action], int], Action]

# Payoffs: (R, S, T, P) = (CC, CD, DC, DD)
DEFAULT_PAYOFFS = (3, 0, 5, 1)  # R=3, S=0, T=5, P=1


def iterated_pd(
    strat_a: Strategy,
    strat_b: Strategy,
    rounds: int = 200,
    payoffs: Tuple[int, int, int, int] = DEFAULT_PAYOFFS,
    seed: int | None = None,
) -> Tuple[List[Action], List[Action], float, float]:
    """
    Play Strategy A vs Strategy B for 'rounds'.
    Returns (hist_a, hist_b, score_a, score_b).
    """
    if seed is not None:
        random.seed(seed)

    R, S, T, P = payoffs
    hist_a: List[Action] = []
    hist_b: List[Action] = []
    score_a = score_b = 0.0

    for t in range(rounds):
        action_a = strat_a(hist_a, hist_b, t)
        action_b = strat_b(hist_b, hist_a, t)  # Symmetric signature

        hist_a.append(action_a)
        hist_b.append(action_b)

        if action_a == 1 and action_b == 1:
            score_a += R
            score_b += R
        elif action_a == 1 and action_b == 0:
            score_a += S
            score_b += T
        elif action_a == 0 and action_b == 1:
            score_a += T
            score_b += S
        else:  # 0, 0
            score_a += P
            score_b += P

    return hist_a, hist_b, score_a, score_b


# --- Example strategies ---
def always_c(*_):
    return 1  # Cooperate


def always_d(*_):
    return 0  # Defect


def tit_for_tat(my, opp, t):
    # Nonstandard variant: opens with a random move (canonical tit-for-tat
    # opens by cooperating), then mirrors the opponent's last move.
    return random.choice([0, 1]) if t == 0 else opp[-1]


def random_choice(my, opp, t):
    return random.choice([0, 1])


def grim_trigger(my, opp, t):
    # Cooperate until opponent defects once, then defect forever
    return 1 if (t == 0 or all(x == 1 for x in opp)) else 0


def win_stay_lose_shift(my, opp, t):
    if t == 0:
        return random.choice([0, 1])  # Random opening (canonical WSLS cooperates)
    # Get my last payoff
    a, b = my[-1], opp[-1]
    R, S, T, P = DEFAULT_PAYOFFS
    last_payoff = (
        R
        if (a, b) == (1, 1)
        else S if (a, b) == (1, 0) else T if (a, b) == (0, 1) else P
    )
    # Stay if payoff was good (R or T), shift otherwise
    return a if last_payoff in (R, T) else 1 - a


# --- Run 1v1 matchups ---
strategies = [
    always_c,
    always_d,
    tit_for_tat,
    random_choice,
    grim_trigger,
    win_stay_lose_shift,
]
combinations = list(itertools.combinations(strategies, 2))

print("--- 1-v-1 Prisoner's Dilemma Matchups ---")
for strat_a, strat_b in combinations:
    _, _, sA, sB = iterated_pd(strat_a, strat_b, 100)
    print(
        f"{strat_a.__name__:<18} vs {strat_b.__name__:<18} | Scores: {sA:4.0f}, {sB:4.0f}"
    )
print("-" * 50)
--- 1-v-1 Prisoner's Dilemma Matchups ---
always_c           vs always_d           | Scores:    0,  500
always_c           vs tit_for_tat        | Scores:  297,  302
always_c           vs random_choice      | Scores:  156,  396
always_c           vs grim_trigger       | Scores:  300,  300
always_c           vs win_stay_lose_shift | Scores:    0,  500
always_d           vs tit_for_tat        | Scores:  104,   99
always_d           vs random_choice      | Scores:  312,   47
always_d           vs grim_trigger       | Scores:  104,   99
always_d           vs win_stay_lose_shift | Scores:  300,   50
tit_for_tat        vs random_choice      | Scores:  217,  217
tit_for_tat        vs grim_trigger       | Scores:  103,  103
tit_for_tat        vs win_stay_lose_shift | Scores:  199,  199
random_choice      vs grim_trigger       | Scores:   60,  295
random_choice      vs win_stay_lose_shift | Scores:  245,  200
grim_trigger       vs win_stay_lose_shift | Scores:  300,  300
--------------------------------------------------

3.2. Spatial Prisoner's Dilemma on WS-Graphs

Now we'll place agents on a Watts-Strogatz network, where each agent plays a strategy against its neighbourhood every round.

Note: The play_single_round function below uses the provided strategy (e.g., tit_for_tat) for every node to decide its next move, which is a bit unusual. A more common model is that nodes have a fixed strategy (like 'AlwaysD' or 'TFT') and they switch strategies by imitating successful neighbours.

The model as written:

  1. All nodes use the same complex strategy (e.g., tit_for_tat).
  2. They play this strategy against a "virtual opponent" representing the majority of their neighbours' last actions.
  3. This isn't a 1-v-1 game with each neighbour.
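For contrast, here is a minimal sketch of the more common fixed-strategy model mentioned in the note above, with states 1 = cooperate, 0 = defect: every pair of neighbours plays one PD round, then each node copies the state of the best scorer among itself and its neighbours. All names here are illustrative, not part of this notebook's code.

```python
import networkx as nx
import numpy as np

R, S, T, P = 3, 0, 5, 1  # standard PD payoffs

def imitate_best_step(G, state):
    """One synchronous update: accumulate payoffs over all edges, then each
    node adopts the state of the best scorer in its closed neighbourhood."""
    payoff = np.zeros(G.number_of_nodes())
    table = {(1, 1): (R, R), (1, 0): (S, T), (0, 1): (T, S), (0, 0): (P, P)}
    for u, v in G.edges():
        pu, pv = table[(int(state[u]), int(state[v]))]
        payoff[u] += pu
        payoff[v] += pv
    new_state = state.copy()
    for u in G.nodes():
        best = max([u, *G.neighbors(u)], key=lambda w: payoff[w])
        new_state[u] = state[best]
    return new_state

rng = np.random.default_rng(0)
G = nx.watts_strogatz_graph(200, 6, 0.05, seed=0)
state = rng.integers(0, 2, size=200).astype(np.int8)
for _ in range(20):
    state = imitate_best_step(G, state)
print(f"cooperator fraction after 20 steps: {state.mean():.2f}")
```

With this update rule, who imitates whom depends on the edge structure, so shortcuts change which strategies can invade a cluster; that is the comparison the following cells set up.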
In [ ]:
# --- Network State Initialization Helpers ---


def init_clustered_groups(G, n, cluster_size=5):
    """Initialize clustered groups of cooperators and defectors."""
    nodes = list(G.nodes())
    state = np.zeros(n, dtype=np.int8)  # All defectors (0)

    num_cooperator_nodes = n // 2
    num_clusters = max(1, num_cooperator_nodes // cluster_size)

    rng = np.random.default_rng()
    cluster_starts = rng.choice(n, size=num_clusters, replace=False)

    for start in cluster_starts:
        for offset in range(cluster_size):
            node_idx = (start + offset) % n
            if node_idx < len(nodes):
                state[node_idx] = 1  # Cooperator (1)

    set_state_vector(G, state)
    return G


def init_percentage_cooperators(G, n, coop_percent):
    """Initialize network with a random percentage of cooperators."""
    state = np.zeros(n, dtype=np.int8)  # All defectors (0)
    coop_count = int(n * coop_percent)

    coop_nodes_idx = np.random.choice(n, coop_count, replace=False)
    state[coop_nodes_idx] = 1  # Cooperator (1)

    set_state_vector(G, state)
    return G


def init_strategy_random(G, strategy1, strategy2):
    """Initialize network with random strategies assigned to nodes."""
    nodes = list(G.nodes())
    n = len(nodes)
    strat_vector = np.empty(n, dtype=object)

    for i in range(n):
        strat_vector[i] = random.choice([strategy1, strategy2])

    set_strategy_vector(G, strat_vector)
    return G


# --- Network State Get/Set Helpers ---


def _node_order(G):
    # Provides a stable order for state vectors
    return list(G.nodes())


def get_state_vector(G):
    nodes = _node_order(G)
    return np.array(
        [int(G.nodes[u].get("state", 0)) for u in nodes], dtype=np.int8
    )


def set_state_vector(G, vec):
    nodes = _node_order(G)
    for i, u in enumerate(nodes):
        G.nodes[u]["state"] = int(vec[i])


def get_strategy_vector(G):
    nodes = _node_order(G)
    return np.array(
        [G.nodes[u].get("strategy", None) for u in nodes], dtype=object
    )


def set_strategy_vector(G, strat):
    nodes = _node_order(G)
    for i, u in enumerate(nodes):
        G.nodes[u]["strategy"] = strat[i]


def ensure_pd_attrs(G):
    """Initialize node attributes for the PD simulation."""
    for u in G.nodes():
        G.nodes[u].setdefault("state", 0)  # 0=Defect, 1=Cooperate
        G.nodes[u].setdefault("my_hist", [])
        G.nodes[u].setdefault(
            "opp_hist", []
        )  # Opponent is 'neighborhood majority'


# --- Simulation Step Function ---


def play_single_round(
    G,
    strategy: Strategy,
    t: int,  # Current time step
    payoffs=DEFAULT_PAYOFFS,
    prob_coop_to_defect=0.0,
    prob_defect_to_coop=0.0,
    rng=None,
):
    """
    Performs one synchronous step of the network PD simulation.
    Each node plays the *same* strategy against its neighborhood.
    """
    if rng is None:
        rng = np.random.default_rng()

    ensure_pd_attrs(G)
    nodes = _node_order(G)
    N = len(nodes)

    # 1. Determine "opponent's last move" for each node
    for i, u in enumerate(nodes):
        nbrs = list(G.neighbors(u))
        if nbrs:
            # Get neighbours' current (last round's) states
            cooperating_neighbours = sum(
                int(G.nodes[v]["state"]) for v in nbrs
            )
            defecting_neighbours = len(nbrs) - cooperating_neighbours
            G.nodes[u]["opp_last"] = (
                1 if cooperating_neighbours > defecting_neighbours else 0
            )  # Majority vote (tie -> defect)
        else:
            G.nodes[u]["opp_last"] = 0  # Isolated node: default to defect

    # 2. Ask strategy for intended next move
    intended = np.empty(N, dtype=np.int8)
    for i, u in enumerate(nodes):
        my_hist = G.nodes[u]["my_hist"]
        opp_hist = G.nodes[u]["opp_hist"]
        node_strategy = G.nodes[u].get("strategy", strategy)
        intended[i] = int(node_strategy(my_hist, opp_hist, t))

    # 3. Add execution noise (mutation)
    if prob_coop_to_defect > 0 or prob_defect_to_coop > 0:
        flips = np.zeros(N, dtype=bool)
        c_idx = np.where(intended == 1)[0]
        if c_idx.size:
            flips[c_idx] = rng.random(c_idx.size) < prob_coop_to_defect

        d_idx = np.where(intended == 0)[0]
        if d_idx.size:
            # Use |= to not overwrite flips from co-op->defect (if noise > 0.5)
            flips[d_idx] |= rng.random(d_idx.size) < prob_defect_to_coop

        next_s = intended ^ flips.astype(np.int8)
    else:
        next_s = intended

    # 4. Commit and update histories
    for i, u in enumerate(nodes):
        G.nodes[u]["state"] = int(next_s[i])
        G.nodes[u]["my_hist"].append(int(next_s[i]))
        G.nodes[u]["opp_hist"].append(int(G.nodes[u]["opp_last"]))

    return G


# --- Simulation Runner ---


def run_sim(
    G,
    cap=250,
    step_fn=play_single_round,
    **step_kwargs,  # e.g., strategy=..., rng=...
):
    """
    Run simulation until state is stable (repeats) or cap is reached.
    """
    seen = set()
    key = tuple(get_state_vector(G))
    seen.add(key)


    for t in range(cap):
        step_kwargs["t"] = t
        G = step_fn(G, **step_kwargs)
        key = tuple(get_state_vector(G))
        if key in seen:
            break  # State has repeated, converged to a cycle
        seen.add(key)

    return G  # Return the final graph state


# --- Experiment Driver ---


def realized_shortcuts(G, k):
    """Compute fraction of edges that are 'long-range' shortcuts."""
    n = G.number_of_nodes()
    if n == 0:
        return 0.0

    half_k = k // 2
    long = 0
    total_edges = G.number_of_edges()
    if total_edges == 0:
        return 0.0

    # This assumes nodes are 0-indexed and G was built on a ring
    for u, v in G.edges():
        d = abs(u - v)
        ring_dist = min(d, n - d)
        if ring_dist > half_k:
            long += 1
    return long / total_edges


def sweep_pd_simulation(
    n=1000,
    k=4,
    cluster_size=10,
    betas=np.linspace(0, 0.4, 41),
    trials=40,
    seed=7,
    strategy=tit_for_tat,
    init_fn=init_clustered_groups,
    strategy1=tit_for_tat,
    strategy2=always_d,
):
    """Sweep across different shortcut percentages and track cooperation."""
    rng = np.random.default_rng(seed)

    avg_shortcuts, avg_cooperation = [], []
    all_shortcuts, all_cooperation = [], []
    fig, ax = plt.subplots(figsize=(12, 6))
    for p in betas:
        print(f"Running PD sweep for p={p:.3f}...")
        trial_shortcuts, trial_finals = [], []
        my_hists_trials, opp_hists_trials = [], []
        my_strategy_hists_trials, opp_strategy_hists_trials = [], []

        for r in range(trials):
            # 1. Generate the small-world network
            trial_seed = seed + r
            G = create_ws_graph(N=n, k=k, p=float(p), seed=trial_seed)
            pct_shortcuts = realized_shortcuts(G, k)

            # 2. Initial state
            # Note: init_clustered_groups assumes ring indices 0..n-1
            if init_fn == init_clustered_groups:
                G = init_clustered_groups(G, n, cluster_size=cluster_size)
            else:
                G = init_percentage_cooperators(G, n, coop_percent=0.5)

            init_strategy_random(G, strategy1, strategy2)

            # 3. Run the simulation
            sf_graph = run_sim(
                G,
                strategy=strategy,
                cap=50,
                rng=np.random.default_rng(trial_seed),
            )
            # Collect histories

            my_hists = [sf_graph.nodes[u]["my_hist"] for u in sf_graph.nodes()]
            opp_hists = [
                sf_graph.nodes[u]["opp_hist"] for u in sf_graph.nodes()
            ]
            my_hists_trials.append(np.array(my_hists).sum(axis=0))
            opp_hists_trials.append(np.array(opp_hists).sum(axis=0))

            # Collect cooperation histories per strategy group
            my_strategies = [
                sf_graph.nodes[u]["my_hist"]
                for u in sf_graph.nodes()
                if sf_graph.nodes[u]["strategy"] == strategy1
            ]
            opp_strategies = [
                sf_graph.nodes[u]["my_hist"]
                for u in sf_graph.nodes()
                if sf_graph.nodes[u]["strategy"] == strategy2
            ]
            my_strategy_hists_trials.append(
                np.array(my_strategies).sum(axis=0)
            )
            opp_strategy_hists_trials.append(
                np.array(opp_strategies).sum(axis=0)
            )


            # 4. Record final state
            # **FIX:** Get mean from state vector, not graph object
            final_state_vector = get_state_vector(sf_graph)
            final_cooperation = (
                np.mean(final_state_vector)
                if final_state_vector.size > 0
                else 0
            )

            trial_shortcuts.append(pct_shortcuts)
            trial_finals.append(final_cooperation)

        # Store average for this 'p'
        avg_shortcuts.append(np.mean(trial_shortcuts))
        avg_cooperation.append(np.mean(trial_finals))
        all_shortcuts.extend(trial_shortcuts)
        all_cooperation.extend(trial_finals)

        # 5. Plot average histories over trials for this 'p'

        # TODO: THIS BADLY NEEDS REFACTORING!

        max_len_my_hist = max(len(hist) for hist in my_hists_trials)
        padded_my_hists = np.array(
            [
                np.pad(
                    hist, (0, max_len_my_hist - len(hist)), constant_values=0
                )
                for hist in my_hists_trials
            ]
        )
        merged_my_hists = np.sum(padded_my_hists, axis=0)
        max_len_opp_hist = max(len(hist) for hist in opp_hists_trials)
        padded_opp_hists = np.array(
            [
                np.pad(
                    hist, (0, max_len_opp_hist - len(hist)), constant_values=0
                )
                for hist in opp_hists_trials
            ]
        )
        merged_opp_hists = np.sum(padded_opp_hists, axis=0)

        colors = plt.cm.viridis(p / max(betas), alpha=0.7)

        plt.plot(
            merged_my_hists,
            label="Merged Self Average (p = {:.3f})".format(p),
            color=colors,
        )
        plt.plot(
            merged_opp_hists,
            label="Merged Opponent Average (p = {:.3f})".format(p),
            color=colors,
        )
    plt.title(
        f"Node History Averages over Time (Strategies: {strategy1.__name__} vs {strategy2.__name__})"
    )
    plt.xlabel("Time Step")
    plt.ylabel("Cooperation Level")
    plt.legend()
    plt.grid(True)
    plt.show()

    return (
        np.array(avg_shortcuts),
        np.array(avg_cooperation),
        np.array(all_shortcuts),
        np.array(all_cooperation),
        sf_graph,
        my_strategy_hists_trials,
        opp_strategy_hists_trials,
    )


# %%
# --- Run and Plot Section 3.2 ---

n_pd = 1000
k_pd = 10
cluster_pd = 5
betas_pd = np.linspace(0, 0.5, 10)
trials_pd = 30  # Reduced for speed
seed_pd = 7
strategy1 = tit_for_tat  # Change strategy here
strategy2 = tit_for_tat  # Change strategy here

print("--- Starting Prisoner's Dilemma Network Sweep ---")
(
    x_avg,
    y_avg,
    x_all,
    y_all,
    sf_graph,
    my_strategy_hists_trials,
    opp_strategy_hists_trials,
) = sweep_pd_simulation(
    n=n_pd,
    k=k_pd,
    cluster_size=cluster_pd,
    betas=betas_pd,
    trials=trials_pd,
    seed=seed_pd,
    strategy1=strategy1,
    strategy2=strategy2,
    init_fn=init_percentage_cooperators,
)
print("--- PD Sweep Complete ---")

# 1. Plot average cooperation
plt.figure(figsize=(7, 5))
plt.plot(100 * x_avg, 100 * y_avg, lw=2, marker="o", label="Average")
plt.xlabel("% of shortcuts (Realized)")
plt.ylabel("% of cooperators (Final)")
plt.title(
    f"Cooperation vs Shortcuts (Strategies: {strategy1.__name__} vs {strategy2.__name__})"
)
plt.grid(True, alpha=0.3)
plt.ylim(-5, 105)
plt.show()

# 2. Plot all trials (scatter)
plt.figure(figsize=(7, 5))
plt.scatter(100 * x_all, 100 * y_all, lw=0, alpha=0.1, label="All Trials")
# Re-plot average on top
plt.plot(
    100 * x_avg, 100 * y_avg, lw=2, marker="o", color="red", label="Average"
)
plt.xlabel("% of shortcuts (Realized)")
plt.ylabel("% of cooperators (Final)")
plt.title(f"Cooperation vs Shortcuts (All Trials, N={n_pd}, k={k_pd})")
plt.grid(True, alpha=0.3)
plt.legend()
plt.ylim(-5, 105)
plt.show()
--- Starting Prisoner's Dilemma Network Sweep ---
Running PD sweep for p=0.000...
Running PD sweep for p=0.056...
Running PD sweep for p=0.111...
Running PD sweep for p=0.167...
Running PD sweep for p=0.222...
Running PD sweep for p=0.278...
Running PD sweep for p=0.333...
Running PD sweep for p=0.389...
Running PD sweep for p=0.444...
Running PD sweep for p=0.500...
[figure: Node History Averages over Time]
--- PD Sweep Complete ---
[figure: Cooperation vs Shortcuts (Average)]
[figure: Cooperation vs Shortcuts (All Trials)]
In [511]:
n_pd = 100
k_pd = 3
cluster_pd = 5
betas_pd = [0.1]  # np.linspace(0, 0.5, 10)
trials_pd = 1  # Reduced for speed
seed_pd = 7
strategy = tit_for_tat  # Change strategy here

print("--- Starting Prisoner's Dilemma Network Sweep ---")
G = create_ws_graph(N=n_pd, k=k_pd, p=float(betas_pd[0]), seed=seed_pd)
G = init_clustered_groups(G, n_pd, cluster_size=cluster_pd)
sf_graph = run_sim(
    G,
    strategy=strategy,
    cap=50,
    rng=np.random.default_rng(42),
)

node_hist = [
    np.array(sf_graph.nodes[u]["my_hist"], dtype=np.int8)
    for u in sf_graph.nodes()
]
node_hist_array = np.array(node_hist)
snapshots_of_states = [
    (i, node_hist_array[:, i]) for i in range(node_hist_array.shape[1])
]
sample_size = min(200, sf_graph.number_of_nodes())

print("--- Plotting Snapshots ---")
plot_snapshots_circular(
    sf_graph,
    snapshots_of_states,
    k=k_pd,
    sample_size=sample_size,
    layout_seed=0,
    model="Prisoner's Dilemma",
)
--- Starting Prisoner's Dilemma Network Sweep ---
--- Plotting Snapshots ---
Out[511]:
[figure: Prisoner's Dilemma snapshots on the ring]

We can see that defection very quickly comes to dominate.

In [512]:
max_len_my_hist = max(len(hist) for hist in my_strategy_hists_trials)
padded_my_hists = np.array(
    [
        np.pad(hist, (0, max_len_my_hist - len(hist)), constant_values=0)
        for hist in my_strategy_hists_trials
    ]
)
merged_my_hists = np.sum(padded_my_hists, axis=0)

max_len_opp_hist = max(len(hist) for hist in opp_strategy_hists_trials)
padded_opp_hists = np.array(
    [
        np.pad(hist, (0, max_len_opp_hist - len(hist)), constant_values=0)
        for hist in opp_strategy_hists_trials
    ]
)
merged_opp_hists = np.sum(padded_opp_hists, axis=0)


plt.plot(merged_my_hists, label="Strategy 1 ({})".format(strategy1.__name__))
plt.plot(merged_opp_hists, label="Strategy 2 ({})".format(strategy2.__name__))
plt.title(
    f"Node History Averages over Time for {strategy1.__name__} vs {strategy2.__name__}"
)
plt.xlabel("Time Step")
plt.ylabel("Cooperation Level")
plt.legend()
plt.grid(True)
plt.show()
[figure: Node History Averages over Time (per strategy)]

3.3. Alternative Model: Spatial PD (Nowak-May, Imitate Best Payoff)¶

The script also contained a different, more standard model for spatial games. In this model, agents have a fixed strategy (Cooperate or Defect), and in each round, they imitate the strategy of their best-performing neighbor (or themselves).

This shows how shortcuts can break down clusters of cooperation.

In [513]:
# --- Nowak–May PD Model Helpers ---
C, D = 1, 0  # Cooperate, Defect


def pd_round_payoffs(G, state_dict, b=1.4):
    """Calculates payoffs for all nodes in a 1-v-1 PD with all neighbors."""
    # Payoffs: R=1, P=0, S=0, T=b
    payoff = {v: 0.0 for v in G.nodes()}
    for u, v in G.edges():
        su, sv = state_dict.get(u, D), state_dict.get(v, D)
        if su == C and sv == C:
            payoff[u] += 1
            payoff[v] += 1
        elif su == C and sv == D:
            payoff[u] += 0  # Sucker
            payoff[v] += b  # Temptation
        elif su == D and sv == C:
            payoff[u] += b  # Temptation
            payoff[v] += 0  # Sucker
        else:  # D, D
            payoff[u] += 0  # Punishment
            payoff[v] += 0  # Punishment
    return payoff


def imitate_best_neighbor(G, state, b):
    """
    Synchronous update: all nodes adopt the strategy of their
    best-performing neighbor (or themselves) from the last round.
    """
    payoff = pd_round_payoffs(G, state, b)
    new_state = {}
    for v in G.nodes():
        neighbors_and_self = [v] + list(G.neighbors(v))

        # Find neighbor (or self) with the highest payoff
        best_agent = max(
            neighbors_and_self,
            key=lambda u: (
                payoff[u],
                state[u] == state[v],
            ),  # Tie-break: prefer own strategy
        )
        new_state[v] = state[best_agent]
    return new_state


def run_spatial_pd_imitate(G, b=1.4, rounds=150, seed_frac=0.5, seed=1):
    """Runs the "imitate-best-neighbor" simulation."""
    rng = np.random.default_rng(seed)
    nodes = list(G.nodes())
    N = len(nodes)
    if N == 0:
        return np.zeros(rounds + 1), []

    # Start with a random mix of cooperators
    state = {v: D for v in nodes}
    coop_nodes = rng.choice(nodes, size=int(seed_frac * N), replace=False)
    for v in coop_nodes:
        state[v] = C

    frac_coop = [sum(state.values()) / N]
    states_list = []
    for _ in range(rounds):
        state = imitate_best_neighbor(G, state, b)

        frac_coop.append(sum(state.values()) / N)
        states_list.append(state)

    return np.array(frac_coop), states_list


# %%
# --- Run and Plot Section 3.3 ---

N_pd_alt = 500  # Number of nodes in the WS graph
k_pd_alt = 10  # k for WS graph
b_pd_alt = 1.5  # Temptation payoff
T_pd_alt = 100  # Rounds

print("--- Running 'Imitate Best' PD Model ---")
# 1. Clustered Lattice (p=0)
G_lattice = create_ws_graph(N=N_pd_alt, k=k_pd_alt, p=0.0, seed=7)
f_lat, states_list_lat = run_spatial_pd_imitate(
    G_lattice, b=b_pd_alt, rounds=T_pd_alt, seed_frac=0.5, seed=2
)

# 2. Small-world (p=0.02)
G_small_world = create_ws_graph(N=N_pd_alt, k=k_pd_alt, p=0.02, seed=7)
f_sw, states_list_sw = run_spatial_pd_imitate(
    G_small_world, b=b_pd_alt, rounds=T_pd_alt, seed_frac=0.5, seed=2
)

# 3. Random Graph (p=1.0)
G_random = create_ws_graph(N=N_pd_alt, k=k_pd_alt, p=1.0, seed=7)
f_rand, states_list_rand = run_spatial_pd_imitate(
    G_random, b=b_pd_alt, rounds=T_pd_alt, seed_frac=0.5, seed=2
)


plt.figure(figsize=(10, 5))
plt.plot(f_lat, label="Lattice (p=0, clustered)")
plt.plot(f_sw, label="Small-world (p=0.02)")
plt.plot(f_rand, label="Random (p=1.0)")
plt.xlabel("Round")
plt.ylabel("Fraction Cooperating")
plt.title(f"Spatial PD (Imitate Best), b={b_pd_alt}")
plt.legend()
plt.ylim(-0.05, 1.05)
plt.show()
--- Running 'Imitate Best' PD Model ---
[figure: Spatial PD (Imitate Best), b=1.5]
In [515]:
print("--- Running 'Imitate Best' PD Model ---")
# 1. Clustered Lattice (p=0)
G_lattice = create_ws_graph(N=N_pd_alt, k=k_pd_alt, p=0.0, seed=7)
f_lat, states_list_lat = run_spatial_pd_imitate(
    G_lattice, b=b_pd_alt, rounds=T_pd_alt, seed_frac=0.5, seed=2
)


b_pd_alt = 1.2  # Temptation payoff
# 2. Small-world (p=0.02)
G_small_world = create_ws_graph(N=N_pd_alt, k=k_pd_alt, p=0.02, seed=7)
f_sw, states_list_sw = run_spatial_pd_imitate(
    G_small_world, b=b_pd_alt, rounds=T_pd_alt, seed_frac=0.5, seed=2
)

# 3. Random Graph (p=1.0)
G_random = create_ws_graph(N=N_pd_alt, k=k_pd_alt, p=1.0, seed=7)
f_rand, states_list_rand = run_spatial_pd_imitate(
    G_random, b=b_pd_alt, rounds=T_pd_alt, seed_frac=0.5, seed=2
)


plt.figure(figsize=(10, 5))
plt.plot(f_lat, label="Lattice (p=0, clustered)")
plt.plot(f_sw, label="Small-world (p=0.02)")
plt.plot(f_rand, label="Random (p=1.0)")
plt.xlabel("Round")
plt.ylabel("Fraction Cooperating")
plt.title(f"Spatial PD (Imitate Best), b={b_pd_alt}")
plt.legend()
plt.ylim(-0.05, 1.05)
plt.show()


# Convert states_list_sw to snapshot format
snapshots_sw = []
for t, state_dict in enumerate(states_list_sw):
    # Convert dictionary to array in node order
    nodes = list(G_small_world.nodes())
    state_array = np.array([state_dict[node] for node in nodes], dtype=np.int8)
    snapshots_sw.append((t, state_array))
print("--- Plotting Snapshots for Small-world PD for b=1.2 ---")
plot_snapshots_circular(
    G_small_world,
    snapshots_sw,
    k=k_pd_alt,
    sample_size=200,
    layout_seed=0,
    model="Prisoner's Dilemma",
)
--- Running 'Imitate Best' PD Model ---
[figure: Spatial PD (Imitate Best), b=1.2]
--- Plotting Snapshots for Small-world PD for b=1.2 ---
Out[515]:
[figure: Small-world PD snapshots, b=1.2]

This experiment shows that network structure alone can radically change the outcome of the same prisoner’s dilemma game. We use payoffs:

  • R = 1 (Reward for mutual cooperation),
  • T = b (Temptation to defect),
  • P = 0 (Punishment for mutual defection),
  • S = 0 (Sucker's payoff).

The temptation to defect, b, represents how much better you do by defecting against a cooperator than by mutual cooperation. For b = 1.5, a lone defector surrounded by cooperators earns 1.5 per neighbor, while cooperators earn at most 1.
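This arithmetic can be checked directly on a small ring lattice (illustrative snippet; the payoff loop mirrors `pd_round_payoffs` above, with k = 4 neighbours per node):

```python
import networkx as nx

b, k = 1.5, 4
G = nx.watts_strogatz_graph(n=20, k=k, p=0.0, seed=0)  # pure ring lattice
state = {v: 1 for v in G.nodes()}  # everyone cooperates...
state[0] = 0                       # ...except a lone defector

# Accumulate 1-v-1 payoffs per edge (R=1, T=b, P=S=0)
payoff = {v: 0.0 for v in G.nodes()}
for u, v in G.edges():
    if state[u] and state[v]:
        payoff[u] += 1.0
        payoff[v] += 1.0
    elif state[u] and not state[v]:
        payoff[v] += b  # defector v exploits cooperator u
    elif not state[u] and state[v]:
        payoff[u] += b  # defector u exploits cooperator v

print(payoff[0])                               # lone defector: k * b = 6.0
print(max(payoff[v] for v in G.neighbors(0)))  # best neighbour: (k - 1) * 1 = 3.0
```

The lone defector out-earns every cooperating neighbour, so under "imitate the best" its strategy spreads.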

This strong advantage means that on both the lattice (p = 0) and small-world (p ≈ 0.02) graphs, defectors sitting at the edges of cooperator clusters almost always have the highest local payoff. Under the “imitate the best neighbor” rule, they get copied and rapidly wipe out cooperation, with shortcuts in the small-world case just speeding up the invasion.

On the fully random graph (p = 1), by contrast, the system settles into a stable coexistence with about 40% cooperators. This is because some agents find themselves in cooperator-rich neighborhoods where, even with b = 1.5, cooperating still yields more total payoff than defecting. These pockets can resist invasion.

If we reduce b, we should expect to see that cooperation can persist even on clustered networks, as the advantage of defectors at cluster edges diminishes.
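One way to probe this expectation is a quick self-contained sweep over b on a pure ring lattice (this re-implements the imitate-best step from above in miniature; the network size, b values, and round count are illustrative choices):

```python
import numpy as np
import networkx as nx


def imitate_best_step(G, state, b):
    """One synchronous imitate-best-neighbour update (payoffs R=1, T=b, P=S=0)."""
    payoff = {v: 0.0 for v in G.nodes()}
    for u, v in G.edges():
        if state[u] and state[v]:
            payoff[u] += 1.0
            payoff[v] += 1.0
        elif state[u] and not state[v]:
            payoff[v] += b
        elif not state[u] and state[v]:
            payoff[u] += b
    # Each node copies the state of its highest-payoff neighbour (or itself)
    return {
        v: state[max([v] + list(G.neighbors(v)), key=payoff.__getitem__)]
        for v in G.nodes()
    }


rng = np.random.default_rng(0)
G = nx.watts_strogatz_graph(n=200, k=6, p=0.0, seed=0)  # clustered ring, no shortcuts
for b in (1.1, 1.3, 1.5, 1.7):
    state = {v: int(rng.random() < 0.5) for v in G.nodes()}
    for _ in range(50):
        state = imitate_best_step(G, state, b)
    frac = sum(state.values()) / G.number_of_nodes()
    print(f"b={b}: final cooperation = {frac:.2f}")
```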

Thus, for this payoff and update rule:

  • Clustering and small-world structure amplify the temptation to defect and undermine cooperation.
  • Random mixing allows cooperation to persist.

So the moral of the story is: if you want to promote cooperation, either reduce the temptation to defect (lower b, i.e. shrink the payoff advantage for defectors) or introduce more randomness in who interacts with whom (higher p, i.e. more shortcuts), at least in this kind of model. Unless, of course, you want to stop the spread of transmissible diseases; then you want exactly the opposite network: few shortcuts, highly clustered contacts, only local interactions, and no house parties!

Further reading:¶

https://www.cambridge.org/core/journals/experimental-economics/article/social-preferences-under-the-shadow-of-the-future/695F55995DAEC07D448CD26D0B6470D2

I'm really interested in this topic and hope to explore it further in future projects!