DSA · Algorithms · Interview · Arrays

Space Complexity and In-Place Algorithms: When Memory Matters and How to Optimise It

Time complexity gets all the attention, but space complexity decides whether your code can run at all in a memory-constrained environment. Interviewers also test it more than most students expect. This guide covers the key space optimisation techniques with real examples.

SCS Team · 20 February 2026 · 10 min read

Most students learn to optimise for time. Space optimisation is the forgotten skill, until an interviewer asks "can you do this in O(1) space?" and you have no answer.

Space constraints appear more often than you'd think:

  • Embedded systems with kilobytes of RAM
  • Processing 10TB datasets that can't fit in memory
  • Mobile apps where memory spikes cause crashes
  • Competitive programming with 256MB memory limits

This guide teaches you to think about space, recognise when it can be reduced, and apply the concrete techniques that turn O(n) space solutions into O(1).


What Space Complexity Actually Measures

Space complexity measures auxiliary space: the extra memory your algorithm uses beyond the input itself.

# O(n) auxiliary space: creates a new list of size n
def reverse_copy(arr):
    return arr[::-1]  # new list, n elements

# O(1) auxiliary space: modifies in place, uses two index variables
def reverse_inplace(arr):
    left, right = 0, len(arr) - 1
    while left < right:
        arr[left], arr[right] = arr[right], arr[left]
        left += 1
        right -= 1

Note: the input itself isn't counted in auxiliary space. Only what your algorithm adds.
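The same contrast shows up with Python's built-ins. A quick standalone check, using slicing versus `list.reverse()`:

```python
data = [1, 2, 3]
rev = data[::-1]     # slicing allocates a new n-element list: O(n) auxiliary
data.reverse()       # list.reverse() rearranges in place: O(1) auxiliary
print(rev)           # [3, 2, 1]
print(data)          # [3, 2, 1]
print(rev is data)   # False -- two distinct objects in memory
```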


Technique 1: Two-Pointer Swap (Reverse In-Place)

Any time you're creating a reversed or reordered copy, ask: can I rearrange the original?

# Rotate array right by k positions: naive, O(n) space
def rotate_naive(nums, k):
    n = len(nums)
    k = k % n
    return nums[-k:] + nums[:-k]  # new list

# Rotate array right by k positions: O(1) space with the reversal trick
def rotate_inplace(nums, k):
    n = len(nums)
    k = k % n
    
    def reverse(start, end):
        while start < end:
            nums[start], nums[end] = nums[end], nums[start]
            start += 1; end -= 1
    
    reverse(0, n - 1)      # reverse entire array
    reverse(0, k - 1)      # reverse first k elements
    reverse(k, n - 1)      # reverse remaining elements

The reversal trick: rotating by k is equivalent to three reversals. Memorise this; it appears in interview problems and shows you understand in-place rearrangement.
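Here is a standalone trace of the three reversals on a small array. Slices are used only to show each intermediate state (they allocate temporaries, so this trace itself is not O(1)); the in-place version above does the swaps directly.

```python
nums = [1, 2, 3, 4, 5]
k = 2
nums.reverse()               # [5, 4, 3, 2, 1]  reverse entire array
nums[:k] = nums[:k][::-1]    # [4, 5, 3, 2, 1]  reverse first k
nums[k:] = nums[k:][::-1]    # [4, 5, 1, 2, 3]  reverse the rest
print(nums)  # [4, 5, 1, 2, 3] -- the last k elements moved to the front
```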


Technique 2: Use the Input Array as Extra Storage

Some problems allow you to repurpose the input array to store intermediate results.

# Find all numbers in [1, n] missing from array of size n
# Naive: O(n) extra space for a set
def find_missing_naive(nums):
    seen = set(nums)
    return [i for i in range(1, len(nums) + 1) if i not in seen]

# O(1) space: use the sign of nums[abs(num) - 1] as a visited marker
def find_missing_inplace(nums):
    n = len(nums)
    
    # Step 1: Mark visited indices by negating the value at that index
    for num in nums:
        idx = abs(num) - 1   # value 3 -> index 2
        if nums[idx] > 0:
            nums[idx] = -nums[idx]
    
    # Step 2: Indices with positive values are missing numbers
    result = []
    for i in range(n):
        if nums[i] > 0:
            result.append(i + 1)
    
    # Optional: restore original (if required)
    for i in range(n):
        nums[i] = abs(nums[i])
    
    return result

When this works: Input values are in a predictable range (here 1 to n), and the values can encode extra information (using sign bit).
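A walk-through on a concrete input, the classic instance with duplicates, where 5 and 6 are the missing values:

```python
nums = [4, 3, 2, 7, 8, 2, 3, 1]      # n = 8; values 5 and 6 never appear
for num in nums:
    idx = abs(num) - 1               # abs() because the slot may already be negated
    if nums[idx] > 0:
        nums[idx] = -nums[idx]       # flip sign to mark "value idx+1 seen"
# Slots that stayed positive were never visited:
missing = [i + 1 for i, v in enumerate(nums) if v > 0]
print(missing)  # [5, 6]
```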


Technique 3: Two-Pass Instead of Extra Data Structure

Sometimes a second pass over the input eliminates the need for auxiliary storage.

# Product of array except self
# Naive: O(n) extra space for two helper arrays (the output array isn't counted)
def product_except_self_naive(nums):
    n = len(nums)
    left = [1] * n
    right = [1] * n
    
    for i in range(1, n):
        left[i] = left[i-1] * nums[i-1]
    for i in range(n-2, -1, -1):
        right[i] = right[i+1] * nums[i+1]
    
    return [left[i] * right[i] for i in range(n)]

# O(1) extra space (output array doesn't count as extra space)
def product_except_self(nums):
    n = len(nums)
    result = [1] * n
    
    # First pass: fill result with left products
    prefix = 1
    for i in range(n):
        result[i] = prefix
        prefix *= nums[i]
    
    # Second pass: multiply by right products
    suffix = 1
    for i in range(n - 1, -1, -1):
        result[i] *= suffix
        suffix *= nums[i]
    
    return result

The key insight: instead of building two arrays (left products, right products), reuse the output array for left products and compute right products on-the-fly in a second pass.


Technique 4: DP Space Optimisation from O(n²) to O(n) to O(1)

Many DP problems can be space-optimised by recognising that only a few previous states are needed.

Fibonacci: O(n) → O(1)

# O(n) space: stores all n+1 values
def fib_table(n):
    if n == 0:
        return 0
    dp = [0] * (n + 1)
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]

# O(1) space: only the last two values are needed
def fib_optimised(n):
    if n <= 1: return n
    prev2, prev1 = 0, 1
    for _ in range(2, n + 1):
        curr = prev1 + prev2
        prev2, prev1 = prev1, curr
    return prev1

Rule: If dp[i] only depends on dp[i-1] and dp[i-2], you only need 2 variables, not an array of n.
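The rule generalises to any recurrence of this shape. For example, the well-known climbing-stairs count, ways(i) = ways(i-1) + ways(i-2), compresses the same way (a sketch, not one of the examples above):

```python
# Climbing stairs with two rolling variables instead of a dp array.
def climb_stairs(n):
    a, b = 1, 1          # ways to reach step 0 and step 1
    for _ in range(2, n + 1):
        a, b = b, a + b  # shift the window: only the last two states survive
    return b

print(climb_stairs(5))   # 8
```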

Grid DP: O(m×n) → O(n)

# Unique paths: O(m×n) space
def unique_paths_2d(m, n):
    dp = [[1] * n for _ in range(m)]
    for i in range(1, m):
        for j in range(1, n):
            dp[i][j] = dp[i-1][j] + dp[i][j-1]
    return dp[m-1][n-1]

# O(n) space: a single row, updated in place
def unique_paths_1d(m, n):
    dp = [1] * n   # previous row (all 1s initially)
    for i in range(1, m):
        for j in range(1, n):
            dp[j] += dp[j-1]  # dp[j] = above + left
    return dp[n-1]

Rule: If row i only depends on row i-1, keep only one row (previous), and update it in-place.
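As a further illustration (an assumed example, not one of this article's problems), the same one-row compression works for minimum path sum on a grid:

```python
# Minimum path sum (move only right or down) with a single rolling row.
def min_path_sum(grid):
    n = len(grid[0])
    dp = [float('inf')] * n   # dp[j] = cheapest path to cell j in the current row
    dp[0] = 0
    for row in grid:
        dp[0] += row[0]                       # first column: only "down" moves
        for j in range(1, n):
            dp[j] = min(dp[j], dp[j-1]) + row[j]  # min(above, left) + cell cost
    return dp[-1]

print(min_path_sum([[1, 3, 1], [1, 5, 1], [4, 2, 1]]))  # 7
```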

Knapsack DP: O(n×W) → O(W)

# 0/1 Knapsack: O(n×W) space
def knapsack_2d(weights, values, W):
    n = len(weights)
    dp = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            dp[i][w] = dp[i-1][w]
            if weights[i-1] <= w:
                dp[i][w] = max(dp[i][w], dp[i-1][w - weights[i-1]] + values[i-1])
    return dp[n][W]

# O(W) space: iterate capacity backwards to avoid using current row's values
def knapsack_1d(weights, values, W):
    dp = [0] * (W + 1)
    for i in range(len(weights)):
        for w in range(W, weights[i] - 1, -1):  # must go backwards
            dp[w] = max(dp[w], dp[w - weights[i]] + values[i])
    return dp[W]

Why backwards? Going forwards would let item i be used multiple times (unbounded knapsack). Going backwards ensures we only see values from before item i was considered.
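To see this concretely, flip the inner loop forwards: the same recurrence then lets dp[w - weights[i]] already include copies of item i, which is exactly the unbounded knapsack (a sketch for contrast):

```python
# Unbounded knapsack: identical recurrence, but the forward loop lets
# each item be taken any number of times.
def knapsack_unbounded(weights, values, W):
    dp = [0] * (W + 1)
    for i in range(len(weights)):
        for w in range(weights[i], W + 1):   # forwards: item i reusable
            dp[w] = max(dp[w], dp[w - weights[i]] + values[i])
    return dp[W]

# One item (weight 2, value 3), capacity 6: the item fits three times.
print(knapsack_unbounded([2], [3], 6))  # 9
```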


Technique 5: Bit Manipulation for Constant Space

Find the single number (XOR trick):

# Every number appears twice except one. Find it in O(1) space.
def single_number(nums):
    result = 0
    for num in nums:
        result ^= num   # XOR: a ^ a = 0, a ^ 0 = a
    return result
# [2, 2, 1] -> 0 ^ 2 ^ 2 ^ 1 = 0 ^ 0 ^ 1 = 1 ✓

Find two unique numbers:

# Every number appears twice except two. Find both in O(1) space.
def single_number_iii(nums):
    xor = 0
    for num in nums:
        xor ^= num   # xor = a XOR b (the two unique numbers)
    
    # Find a bit that's different between a and b (rightmost set bit)
    diff_bit = xor & (-xor)
    
    a = 0
    for num in nums:
        if num & diff_bit:
            a ^= num   # a is one of the unique numbers
    
    return [a, xor ^ a]

Technique 6: Floyd's Cycle Detection (O(1) Space for Cycle Problems)

When you'd normally use a hash set to track visited nodes:

# Find the duplicate: n+1 elements with values in [1, n], O(1) space
def find_duplicate(nums):
    # Treat array as linked list: index is node, nums[i] is next pointer
    # Duplicate value means two nodes point to the same "next"
    slow = fast = nums[0]
    
    while True:
        slow = nums[slow]
        fast = nums[nums[fast]]
        if slow == fast:
            break
    
    # Find entry point of cycle
    slow = nums[0]
    while slow != fast:
        slow = nums[slow]
        fast = nums[fast]
    
    return slow

A hash set would be O(n) space. Floyd's algorithm uses O(1) by treating the array as an implicit graph.
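To make the implicit linked list visible, here is the walk node -> nums[node] for a sample input (a standalone trace; the duplicate is 2):

```python
nums = [1, 3, 4, 2, 2]
walk, node = [], 0           # start at index 0, repeatedly follow node -> nums[node]
for _ in range(7):
    walk.append(node)
    node = nums[node]
print(walk)  # [0, 1, 3, 2, 4, 2, 4] -- the walk falls into the 2 <-> 4 cycle
# The cycle's entry node (2) is exactly the duplicate value.
```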


When Space Optimisation is Worth It (and When It Isn't)

Optimise space when:

  • Interviewer asks explicitly for O(1) space
  • Working with very large datasets (millions of rows)
  • Mobile/embedded context with strict memory limits
  • The O(1) solution is reasonably readable

Don't sacrifice clarity for space when:

  • The difference is only a constant factor (e.g. n vs 2n extra elements)
  • The code becomes significantly harder to understand and maintain
  • The interviewer didn't ask for it

In an interview: First write the clean O(n) space solution, explain it, then say: "If space is a constraint, I can optimise this to O(1) by [technique]. Would you like me to do that?" This shows both competence and professional judgment.

Practice: Go back to any DP problem you've solved. Check if you're using a full 2D array when a 1D array would suffice. Optimising space after solving is a great habit. Use SCS Practice to find DP problems to practice on.

Ready to practice what you just learned?

Apply these concepts with AI-powered tools built for CS students.