Python Full-Stack Interview Questions 76–80 (System Design & Classic Algorithms)

Welcome! This lesson is a fantastic mix of the high-level and the hands-on, which is very common in interviews. We'll start with a high-level system design question about building fast, reliable services. Then, we'll dive into four classic, hands-on coding problems that test your algorithm and data structure knowledge. Take a calm breath; these are all building blocks, and we'll go through them one by one.

76. How do you architect and test code for high availability and load (horizontal scaling, stateless services, caching layers)?

This is a big-picture question about building systems that don't crash under heavy traffic and can handle failures gracefully. The goal is to avoid having a single point of failure.

Analogy: Think of a single, very popular food truck.

  • The Problem: If the truck gets too popular, the line gets huge (high load). If the truck's engine breaks (failure), everyone goes hungry (no availability).
  • Vertical Scaling: You could buy one massive, super-expensive truck. This has a hard upper limit, gets very expensive, and is still a single point of failure.
  • Horizontal Scaling (The Solution): Instead, you add five more identical, cheap trucks. This is horizontal scaling: adding more machines (servers) rather than making one machine bigger.

Here’s how the key concepts work together to support this:

  • Load Balancer: This is the "person at the front" who directs customers to the shortest line (Truck 1, Truck 2, etc.). In tech, a load balancer (like Nginx or AWS ELB) distributes incoming network traffic across your multiple servers. If one server (truck) crashes, the load balancer just stops sending traffic to it.
  • Stateless Services: This is the most important rule for horizontal scaling. Your servers (trucks) must be "stateless." This means the server does not remember anything about the user from one request to the next.
    Analogy: The worker in Truck 1 shouldn't "remember" your order. Your "state" (your order, your shopping cart) is stored on your "ticket" (a token or session ID) or in a separate, shared database. This way, you can talk to any of the five trucks, and they can all serve you.
  • Caching Layers: This is about reducing the load in the first place. Analogy: you put a small ice chest (a cache like Redis or Memcached) at the front of the line with the 50 most popular drinks. Customers can grab a drink without ever talking to a truck.
    This is an in-memory data store that holds the results of expensive queries (e.g., "most popular products") so your application and database don't have to work as hard. A minimal caching sketch follows this list.
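Here is a minimal cache-aside sketch of that idea in Python. It assumes a Redis server running locally and a hypothetical `query_most_popular_products()` helper standing in for your real (slow) database query:

# Cache-aside pattern: check the cache first, fall back to the database,
# then populate the cache with a short TTL.
import json
import redis

r = redis.Redis(host="localhost", port=6379)

def get_popular_products():
    cached = r.get("popular_products")
    if cached is not None:
        # Cache hit: the "ice chest" answered; the database is never touched.
        return json.loads(cached)

    # Cache miss: do the expensive work once...
    products = query_most_popular_products()  # hypothetical slow DB query

    # ...then store the result with a 60-second TTL so it refreshes regularly.
    r.set("popular_products", json.dumps(products), ex=60)
    return products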

How to Test It:

  • Load Testing: Use a tool like Locust (in Python) or JMeter to simulate thousands of users hitting your service. This helps you find bottlenecks and see how your system performs under load (a minimal Locust sketch follows this list).
  • Chaos Engineering: This is "testing for availability." You deliberately cause failures. For example: while the load test is running, you manually shut down one of your servers. Does the load balancer handle it? Does your site stay up? This proves your system is truly highly available.
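To make the Locust idea concrete, here is a minimal locustfile sketch. The `/products` endpoint and the host URL are placeholders; swap in whatever your service actually exposes:

# locustfile.py -- simulate users browsing the site
from locust import HttpUser, task, between

class ShopUser(HttpUser):
    # Each simulated user waits 1-3 seconds between requests.
    wait_time = between(1, 3)

    @task(3)  # weight: browse products 3x as often as the home page
    def browse_products(self):
        self.client.get("/products")

    @task
    def home(self):
        self.client.get("/")

# Run with: locust -f locustfile.py --host https://your-service.example.com
# Then ramp up the user count in the web UI and watch latency and error rates.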

Tip: In an interview, you don't need to know every tool. Just explain these three concepts: a load balancer distributing traffic to multiple stateless servers, with a caching layer in front to absorb repeated reads.

77. Write a Python function to check if a string is a palindrome.

A palindrome is a string that reads the same forwards and backwards (e.g., "racecar", "madam").

The simplest, most "Pythonic" way uses slicing, but it's often considered a "trick." Interviewers usually prefer to see the two-pointer algorithm, as it proves you understand the logic.

A good interview will also include a follow-up: "How would you handle case, punctuation, and spaces?" We will build a function that handles that.

def is_palindrome_simple(s: str) -> bool:
    """Checks for a palindrome using slicing."""
    return s == s[::-1]

def is_palindrome_robust(s: str) -> bool:
    """
    Checks for a palindrome using the two-pointer algorithm,
    while ignoring case and non-alphanumeric characters.
    """
    
    # 1. Clean the string
    # 'isalnum()' checks for letters or numbers
    cleaned = [char for char in s.lower() if char.isalnum()]
    
    # 2. Use the two-pointer algorithm
    left = 0
    right = len(cleaned) - 1
    
    while left < right:
        if cleaned[left] != cleaned[right]:
            return False
        left += 1
        right -= 1
        
    return True

# --- Usage ---
s1 = "racecar"
s2 = "A man, a plan, a canal: Panama"
s3 = "hello"

print(f"'{s1}' (simple): {is_palindrome_simple(s1)}")
print(f"'{s1}' (robust): {is_palindrome_robust(s1)}")
print("---")
print(f"'{s2}' (robust): {is_palindrome_robust(s2)}")
print("---")
print(f"'{s3}' (simple): {is_palindrome_simple(s3)}")
print(f"'{s3}' (robust): {is_palindrome_robust(s3)}")

Expected Output:

'racecar' (simple): True
'racecar' (robust): True
---
'A man, a plan, a canal: Panama' (robust): True
---
'hello' (simple): False
'hello' (robust): False

78. Write Python code to reverse a linked list.

This is a fundamental data structure problem. You are given the `head` of a linked list, and you need to reverse the `next` pointers so the list points in the opposite direction.

The key is the iterative, three-pointer approach. You need to keep track of three nodes as you walk the list:

  • `prev`: The node that will be behind the current node (starts as `None`).
  • `curr`: The node you are currently looking at (starts as `head`).
  • `next_temp`: A temporary variable to hold the next node before you break the link.

In each loop step, you point `curr.next` to `prev`, and then move all three pointers one step forward.

# First, define the ListNode
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

# Helper function to print a linked list
def print_list(head: ListNode | None):
    if not head:
        print("None")
        return
    
    curr = head
    result = []
    while curr:
        result.append(str(curr.val))
        curr = curr.next
    print(" -> ".join(result))


def reverse_list(head: ListNode | None) -> ListNode | None:
    """Reverses a linked list iteratively."""
    
    prev = None
    curr = head
    
    while curr:
        # 1. Store the next node before we break the link
        next_temp = curr.next
        
        # 2. Reverse the actual pointer
        curr.next = prev
        
        # 3. Move all pointers one step forward
        prev = curr
        curr = next_temp
        
    # At the end, 'curr' is None, and 'prev' is the new head
    return prev

# --- Usage ---

# Create a list: 1 -> 2 -> 3 -> 4
head = ListNode(1)
head.next = ListNode(2)
head.next.next = ListNode(3)
head.next.next.next = ListNode(4)

print("Original list:")
print_list(head)

# Reverse the list
new_head = reverse_list(head)

print("\nReversed list:")
print_list(new_head)

Expected Output:

Original list:
1 -> 2 -> 3 -> 4

Reversed list:
4 -> 3 -> 2 -> 1

79. Write a Python program to implement an LRU cache.

This is a classic "hard" interview question. "LRU" stands for Least Recently Used. The goal is to build a cache with a fixed `capacity` that evicts the least recently used item when it's full.

The challenge is that both `get` (retrieving an item) and `put` (adding/updating an item) must be O(1), constant time.

This requires two data structures working together:

  • A Hash Map (Python `dict`): Provides O(1) lookup. The key is the item's key, and the value is a pointer to the node in our linked list.
  • A Doubly Linked List: Provides O(1) removal and addition. We use it to track "recency." The front of the list is the most recently used item, and the end of the list is the least recently used.

When we `get` or `put` an item, we move it to the front of the list. When we need to evict, we just pop the item from the tail.

# We need a Doubly Linked List node
class DLinkedNode:
    def __init__(self, key=0, val=0):
        self.key = key
        self.val = val
        self.prev = None
        self.next = None

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.size = 0
        self.cache = {}  # The hash map
        
        # Dummy head and tail nodes
        self.head = DLinkedNode()
        self.tail = DLinkedNode()
        self.head.next = self.tail
        self.tail.prev = self.head

    # --- Internal helper methods ---
    def _add_node(self, node: DLinkedNode):
        """Adds a new node right after the head."""
        node.prev = self.head
        node.next = self.head.next
        self.head.next.prev = node
        self.head.next = node

    def _remove_node(self, node: DLinkedNode):
        """Removes an existing node from the list."""
        prev_node = node.prev
        next_node = node.next
        prev_node.next = next_node
        next_node.prev = prev_node

    def _move_to_front(self, node: DLinkedNode):
        """Moves a node to the front (used on 'get')."""
        self._remove_node(node)
        self._add_node(node)

    def _pop_tail(self) -> DLinkedNode:
        """Pops the least recently used item (before tail)."""
        node = self.tail.prev
        self._remove_node(node)
        return node

    # --- Public API methods ---
    def get(self, key: int) -> int:
        node = self.cache.get(key)
        if not node:
            return -1
        
        # This item is now 'most recently used'
        self._move_to_front(node)
        return node.val

    def put(self, key: int, value: int) -> None:
        node = self.cache.get(key)
        
        if node:
            # Update existing key
            node.val = value
            self._move_to_front(node)
        else:
            # Add new key
            new_node = DLinkedNode(key, value)
            self.cache[key] = new_node
            self._add_node(new_node)
            self.size += 1
            
            if self.size > self.capacity:
                # Evict the LRU item
                tail = self._pop_tail()
                del self.cache[tail.key]
                self.size -= 1

# --- Usage ---
cache = LRUCache(2)
cache.put(1, 1)    # Cache is {1: 1}
print("Put(1, 1)")
cache.put(2, 2)    # Cache is {1: 1, 2: 2}
print("Put(2, 2)")
print(f"Get(1): {cache.get(1)}") # Returns 1, {2: 2, 1: 1}
cache.put(3, 3)    # LRU key 2 was evicted. Cache is {1: 1, 3: 3}
print("Put(3, 3)")
print(f"Get(2): {cache.get(2)}") # Returns -1 (not found)
cache.put(4, 4)    # LRU key 1 was evicted. Cache is {3: 3, 4: 4}
print("Put(4, 4)")
print(f"Get(1): {cache.get(1)}") # Returns -1 (not found)
print(f"Get(3): {cache.get(3)}") # Returns 3, {4: 4, 3: 3}
print(f"Get(4): {cache.get(4)}") # Returns 4, {3: 3, 4: 4}

Expected Output:

Put(1, 1)
Put(2, 2)
Get(1): 1
Put(3, 3)
Get(2): -1
Put(4, 4)
Get(1): -1
Get(3): 3
Get(4): 4
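A follow-up worth mentioning: outside of an interview, Python's standard library already gives you this O(1) behavior. `collections.OrderedDict` provides `move_to_end()` and `popitem(last=False)`, and `functools.lru_cache` memoizes whole functions. A minimal sketch of the OrderedDict version:

from collections import OrderedDict

class LRUCacheOD:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cache = OrderedDict()

    def get(self, key: int) -> int:
        if key not in self.cache:
            return -1
        # Mark as most recently used by moving it to the end.
        self.cache.move_to_end(key)
        return self.cache[key]

    def put(self, key: int, value: int) -> None:
        if key in self.cache:
            self.cache.move_to_end(key)
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            # last=False pops the oldest (least recently used) entry.
            self.cache.popitem(last=False)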

80. Write Python code to merge two sorted lists.

This is a classic problem that is the foundation of "Merge Sort." You are given two lists (or linked lists) that are already sorted, and you need to combine them into a single sorted list.

We'll show the solution for linked lists, as that's the most common way this is asked. The logic is the same for regular Python lists (a plain-list sketch appears after the expected output below).

The key is to create a dummy head node for our new list. Then, we use two pointers, `l1` and `l2`, to walk the input lists. We compare the values, append the smaller node to our new list, and advance the pointer for that list.

# Define the ListNode again for this self-contained example
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

# Helper function to print a linked list
def print_list(head: ListNode | None):
    if not head:
        print("None")
        return
    curr = head
    result = []
    while curr:
        result.append(str(curr.val))
        curr = curr.next
    print(" -> ".join(result))

def merge_two_lists(l1: ListNode | None, l2: ListNode | None) -> ListNode | None:
    # 1. Create a dummy node to build the new list
    dummy_head = ListNode()
    # 'tail' will be our pointer to the end of the new list
    tail = dummy_head
    
    # 2. Loop while both lists have nodes
    while l1 and l2:
        if l1.val < l2.val:
            tail.next = l1
            l1 = l1.next
        else:
            tail.next = l2
            l2 = l2.next
        
        # Advance the tail pointer
        tail = tail.next
        
    # 3. At least one list is now empty.
    #    Append the remainder of the other list.
    if l1:
        tail.next = l1
    elif l2:
        tail.next = l2
        
    # The new list starts *after* our dummy node
    return dummy_head.next

# --- Usage ---

# List 1: 1 -> 2 -> 4
l1 = ListNode(1)
l1.next = ListNode(2)
l1.next.next = ListNode(4)
print("List 1:")
print_list(l1)

# List 2: 1 -> 3 -> 4
l2 = ListNode(1)
l2.next = ListNode(3)
l2.next.next = ListNode(4)
print("\nList 2:")
print_list(l2)

# Merge
merged_head = merge_two_lists(l1, l2)
print("\nMerged List:")
print_list(merged_head)

Expected Output:

List 1:
1 -> 2 -> 4

List 2:
1 -> 3 -> 4

Merged List:
1 -> 1 -> 2 -> 3 -> 4 -> 4
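As noted above, the same logic works on regular Python lists. Here is a quick sketch of that version, plus the standard-library shortcut `heapq.merge` for comparison:

import heapq

def merge_sorted_lists(a: list, b: list) -> list:
    i = j = 0
    merged = []
    # Same two-pointer idea: take the smaller front element each step.
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            merged.append(a[i])
            i += 1
        else:
            merged.append(b[j])
            j += 1
    # Append whatever remains of the list that isn't exhausted yet.
    merged.extend(a[i:])
    merged.extend(b[j:])
    return merged

print(merge_sorted_lists([1, 2, 4], [1, 3, 4]))   # [1, 1, 2, 3, 4, 4]
print(list(heapq.merge([1, 2, 4], [1, 3, 4])))    # [1, 1, 2, 3, 4, 4]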