Arrange the numbers 1 to 20 in a circle so that the sum of any two neighboring numbers is prime. The problem is known as the “Prime Circle Problem” and is due to Antonio **Filz (Problem 1046, J. Recr. Math. vol 14, p 64, 1982; vol 15, p 71, 1983)**. It appears in the classic book by **Richard Guy, Unsolved Problems in Number Theory, 2nd edition**.

The prime circle is a Hamiltonian cycle in the graph whose edges connect pairs of numbers with a prime sum. (The graph is bipartite: the sum of two numbers greater than 1 can only be prime if it is odd, so every edge joins an even and an odd number.) In plain English, solving the puzzle is equivalent to finding a **Hamiltonian cycle** (a cycle that visits each vertex exactly once) in a graph whose vertices are the numbers 1 to 20, with an edge between two vertices whenever their sum is prime. Here is my solution in Python using the **networkx** library.

```
import itertools

import matplotlib.pyplot as plt
import networkx as nx

# Function to check if a number is prime
def is_prime(n):
    if n <= 1:
        return False
    for i in range(2, int(n**0.5) + 1):
        if n % i == 0:
            return False
    return True

# Create an undirected graph
G = nx.Graph()

# Add nodes 1..20
G.add_nodes_from(range(1, 21))

# Add edges between nodes if their sum is prime
for u, v in itertools.combinations(G.nodes, 2):
    if is_prime(u + v):
        G.add_edge(u, v)

# simple_cycles historically requires a directed graph,
# so enumerate cycles on the directed version of G
DG = nx.DiGraph(G)

# Initialize a variable to store a Hamiltonian cycle
hamiltonian_cycle = None

# Iteratively check each cycle
for cycle in nx.simple_cycles(DG):
    if len(cycle) == len(G.nodes):
        hamiltonian_cycle = cycle
        break  # Exit the loop as soon as a Hamiltonian cycle is found

# Plot the graph
pos = nx.circular_layout(G)  # Position nodes in a circle
nx.draw(G, pos, with_labels=True, node_color='lightblue', edge_color='red', font_weight='bold')

# Highlight the Hamiltonian cycle if found
if hamiltonian_cycle:
    hamiltonian_cycle.append(hamiltonian_cycle[0])  # Close the cycle for plotting
    nx.draw_networkx_edges(G, pos, edgelist=list(zip(hamiltonian_cycle, hamiltonian_cycle[1:])), width=2, edge_color='green')
    print("Hamiltonian cycle found.")
else:
    print("No Hamiltonian cycle found.")
plt.show()
```

Two other similar problems where the Hamiltonian cycle and path approach works are given below.

Write out the numbers 1 to 16 as a sequence so that every pair of neighboring numbers sums to a perfect square. (For example, 6, 10 could be part of the sequence because 6 + 10 = 16 and 16 = 4².)

Here is a solution using the code below. The idea is to create a graph in which the sum of a node and any of its neighbours is a perfect square, and then check whether the graph contains a Hamiltonian path.

```
import math

import networkx as nx

def is_perfect_square(n):
    root = int(math.sqrt(n))
    return root * root == n

def find_hamiltonian_path(graph, node, visited, path):
    visited[node] = True
    path.append(node)
    if len(path) == len(graph.nodes()):
        return True
    for neighbor in graph.neighbors(node):
        if not visited[neighbor]:
            if find_hamiltonian_path(graph, neighbor, visited, path):
                return True
    # Backtrack
    path.pop()
    visited[node] = False
    return False

def hamiltonian_path(graph):
    visited = {node: False for node in graph.nodes()}
    path = []
    for node in graph.nodes():
        if find_hamiltonian_path(graph, node, visited, path):
            return path
    return None

G = nx.Graph()
G.add_nodes_from(range(1, 17))
for i in G.nodes():
    for j in G.nodes():
        if i < j and is_perfect_square(i + j):
            G.add_edge(i, j)

path = hamiltonian_path(G)
if path:
    print("Hamiltonian Path:", path)
else:
    print("No Hamiltonian Path found!")
```

Write out the numbers 1 to 32 as a sequence so that every pair of neighboring numbers sums to a perfect square and the first and last entries must also sum to a square.

Here is a solution. The idea is to create a graph in which the sum of a node and any of its neighbours is a perfect square, and check whether the graph contains a Hamiltonian cycle.

```
import math

import networkx as nx

def is_perfect_square(n):
    root = int(math.sqrt(n))
    return root * root == n

G = nx.Graph()
G.add_nodes_from(range(1, 33))
for i in G.nodes():
    for j in G.nodes():
        if i < j and is_perfect_square(i + j):
            G.add_edge(i, j)

def hamiltonian_cycle(graph):
    # simple_cycles accepts undirected graphs from networkx 3.1 onwards
    for cycle in nx.simple_cycles(graph):
        if len(cycle) == len(graph.nodes):
            return cycle
    return None

cycle = hamiltonian_cycle(G)
if cycle:
    print("Hamiltonian Cycle:", cycle)
else:
    print("No Hamiltonian Cycle found!")
```

An ordinary die is rolled until the running total of the rolls first exceeds 12. What is the most likely final total that will be obtained?

The first solution approach is quite straightforward: we use *Monte Carlo* simulation to estimate the required probability. From the graph below we can see that the final total is most likely to be 13, with a probability of roughly 0.28.

```
import numpy as np
import matplotlib.pyplot as plt

def monte_carlo_simulation(n=100000):
    results = []
    for _ in range(n):
        sum_ = 0
        while sum_ <= 12:
            sum_ += np.random.randint(1, 7)  # Roll a die
        results.append(sum_)
    return results

def calculate_probabilities(results):
    probabilities = {i: results.count(i) / len(results) for i in range(13, 19)}
    return probabilities

def plot_probabilities(probabilities):
    labels = probabilities.keys()
    values = probabilities.values()
    fig, ax = plt.subplots()
    bars = ax.bar(labels, values, color='skyblue')
    ax.set_xlabel('Sum')
    ax.set_ylabel('Probability')
    ax.set_title('Probabilities of Getting Sums from 13 to 18')
    for bar in bars:
        height = bar.get_height()
        ax.annotate(f'{height:.2f}',
                    xy=(bar.get_x() + bar.get_width() / 2, height),
                    xytext=(0, 3),  # 3 points vertical offset
                    textcoords="offset points",
                    ha='center', va='bottom')
    plt.show()

n_simulations = 100000
results = monte_carlo_simulation(n=n_simulations)
probabilities = calculate_probabilities(results)
plot_probabilities(probabilities)
```

The second approach is much more interesting. We can count the number of ways of arriving at a total between 13 and 18 such that every number involved in the sum is less than or equal to six and the running total up to the second-to-last number is less than or equal to 12. E.g. 6 + 6 + 2 is a valid way to reach 14, but 6 + 6 + 1 + 1 is not, because once you have 6 + 6 + 1 = 13 you have exceeded 12 already. We make use of the concept of *partitions* (a partition of a non-negative integer n, also called an integer partition, is a way of writing n as a sum of positive integers). If we denote by p(n, k) the number of partitions of n in which no part exceeds k, then the number of ways of reaching the totals 13 to 18 is given in the table below:

Total | Count |
---|---|
13 | p(13,6) |
14 | p(8,6) + p(9,6) + p(10,6) + p(11,6) + p(12,6) |
15 | p(9,6) + p(10,6) + p(11,6) + p(12,6) |
16 | p(10,6) + p(11,6) + p(12,6) |
17 | p(11,6) + p(12,6) |
18 | p(12,6) |
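
The bounded-partition counts p(n, 6) appearing in the table can be computed with the standard recurrence p(n, k) = p(n − k, k) + p(n, k − 1): either some part equals k, or no part does. A minimal sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p(n, k):
    # Number of partitions of n into parts of size at most k
    if n == 0:
        return 1
    if n < 0 or k == 0:
        return 0
    # Either use at least one part equal to k, or forbid parts of size k
    return p(n - k, k) + p(n, k - 1)

print([p(n, 6) for n in range(8, 14)])  # [20, 26, 35, 44, 58, 71]
```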

From the table it is easy to see that the probability of reaching 13 is higher than the probability of reaching any of the numbers 15 to 18. We need to compare 13 with 14 to confirm which of the two is more likely. Using the code below we see that the probability of reaching 13 is about 0.28 and the probability of reaching 14 is about 0.24.

```
from collections import Counter
from math import factorial

def generate_partitions(number, max_part=6):
    def recurse(target, max_part, current_partition):
        # Base case: when the target number is 0, yield the current partition
        if target == 0:
            yield list(current_partition)
        else:
            # Parts are generated in non-increasing order
            start = min(max_part, target)
            for next_part in range(start, 0, -1):
                # Append next part and recurse
                yield from recurse(target - next_part, next_part, current_partition + [next_part])
    return list(recurse(number, max_part, []))

def calculate_probability(number):
    def partition_probability(partition):
        # Probability of rolling this multiset of values in any order:
        # multinomial coefficient divided by 6^length
        counter = Counter(partition)
        l = len(partition)
        num = factorial(l)
        for _, cnt in counter.items():
            num = num * 1 / factorial(cnt)
        return num / 6**l

    partitions = generate_partitions(number)
    total_probability = sum(partition_probability(p) for p in partitions)
    return round(total_probability, 2)

print("Total probability for 13:", calculate_probability(13))
prob = 0
for n in range(8, 13):
    prob += calculate_probability(n) / 6
print("Total probability for 14:", prob)
```

A circle is randomly generated by sampling two points uniformly and independently from the interior of a square and using these points to determine its diameter. What is the probability that part of the circle lies outside the square? Give your answer in exact terms.

Let P₁ = (x₁, y₁) and P₂ = (x₂, y₂) be the picked points and M be the midpoint of P₁P₂, which is the center of the circle. Our random circle pokes outside the square iff the distance of M from the boundary of the square is less than the radius |P₁P₂|/2. Thus, assuming that the square is given by −1 ≤ x ≤ 1 and −1 ≤ y ≤ 1, the distance of M from the boundary is min(1 − |x₁ + x₂|/2, 1 − |y₁ + y₂|/2), and (after multiplying both sides by 2) we want the probability of the event

min(2 − |x₁ + x₂|, 2 − |y₁ + y₂|) ≤ √((x₁ − x₂)² + (y₁ − y₂)²)

with x₁, x₂, y₁, y₂ being independent and uniformly distributed random variables over the interval [−1, 1].

We use Monte Carlo simulation to estimate the probability. Using the Python code below, we can estimate the required probability.

```
from random import uniform
from math import sqrt

runs = 10000000
cnt = 0
for _ in range(runs):
    x_1, x_2, y_1, y_2 = uniform(-1, 1), uniform(-1, 1), uniform(-1, 1), uniform(-1, 1)
    # The circle pokes outside iff the midpoint's distance to the boundary
    # is less than the radius (both sides of the inequality scaled by 2)
    if min(2 - abs(x_1 + x_2), 2 - abs(y_1 + y_2)) <= sqrt((x_1 - x_2)**2 + (y_1 - y_2)**2):
        cnt += 1
print(cnt / runs)
```

It’s time for a random number duel! You and I will both use random number generators, which should give us random real numbers between 0 and 1. Whoever’s number is greater wins the duel!

There’s just one problem. I’ve hacked your random number generator. Instead of giving you a random number between 0 and 1, it gives you a random number between 0.1 and 0.8.

What are your chances of winning the duel?

The hacked generator produces numbers according to X ~ U(a, b), where 0 ≤ a < b ≤ 1 (in the general case), while the intact generator produces numbers according to Y ~ U(0, 1). We need the probability P(X > Y). Conditioning on X gives P(X > Y | X = x) = x, so P(X > Y) = E[X] = (a + b)/2; geometrically, this is the area of the trapezium under the diagonal y = x over the strip a ≤ x ≤ b, divided by the area of the rectangle [a, b] × [0, 1].

In our particular case a = 0.1 and b = 0.8, therefore the probability of the hacked generator winning is (0.1 + 0.8)/2 = 0.45.

The probability of the hacked generator winning the duel as per the simulation below is approximately 0.45, which validates the result we got earlier.

```
from numpy.random import uniform

def prob_win(p1l, p1h, p2l, p2h, runs=1000000):
    total_wins = 0
    for _ in range(runs):
        x, y = uniform(p1l, p1h), uniform(p2l, p2h)
        if x > y:
            total_wins += 1
    return total_wins / runs

print(prob_win(0.1, 0.8, 0, 1))
```
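The simulation can be cross-checked exactly: conditioning on the hacked draw X gives P(X > Y) = E[X] whenever Y is uniform on [0, 1]. A minimal sketch:

```python
from fractions import Fraction

def exact_prob_win(a, b):
    # X ~ U(a, b) with 0 <= a < b <= 1 and Y ~ U(0, 1):
    # P(X > Y | X = x) = x, so P(X > Y) = E[X] = (a + b) / 2
    return (Fraction(a) + Fraction(b)) / 2

print(exact_prob_win(Fraction(1, 10), Fraction(4, 5)))  # 9/20 = 0.45
```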

I have in my possession 1 million fair coins. Before you ask, these are not legal tender. Among these, I want to find the “luckiest” coin.

I first flip all 1 million coins simultaneously (I’m great at multitasking like that), discarding any coins that come up tails. I flip all the coins that come up heads a second time, and I again discard any of these coins that come up tails. I repeat this process, over and over again. If at any point I am left with one coin, I declare that to be the “luckiest” coin.

But getting to one coin is no sure thing. For example, I might find myself with two coins, flip both of them and have both come up tails. Then I would have zero coins, never having had exactly one coin.

What is the probability that I will — at some point — have exactly one “luckiest” coin?

Let p(n) be the probability that we will have exactly one “luckiest” coin starting with n coins. We have the following recurrence relation:

p(n) = (C(n, 1) p(1) + C(n, 2) p(2) + … + C(n, n − 1) p(n − 1)) / (2ⁿ − 1)

because the probability of ending up with exactly i heads when you flip n coins is C(n, i)/2ⁿ, and from then on it is equivalent to starting the game with i coins; the all-heads outcome simply restarts the round with n coins, which after solving for p(n) produces the 2ⁿ − 1 in the denominator. When n = 1, p(n) = 1.

From the code below, we get p(100) ≈ 0.72. Assuming convergence, the answer for 1 million coins is also approximately 0.72.

```
from functools import lru_cache
from math import comb

def prob_luckiest_coin(n):
    @lru_cache()
    def prob(n):
        if n == 1:
            return 1
        total_prob = 0
        for i in range(1, n):
            total_prob += prob(i) * comb(n, i)
        # Dividing by 2^n - 1 accounts for the all-heads restart
        return total_prob / (2**n - 1)
    return prob(n)

print(prob_luckiest_coin(100))
```

My condo complex has a single elevator that serves four stories: the garage (G), the first floor (1), the second floor (2) and the third floor (3). Unfortunately, the elevator is malfunctioning and stopping at every single floor, no matter what. The elevator always goes G, 1, 2, 3, 2, 1, G, 1, 2, 3, … etc.

I want to board the elevator on a random floor (with all four floors being equally likely). As I round the corner to approach the elevator, I hear that its doors have closed, but I have no further information about which floor it’s on or whether the elevator is going up or down. The doors might have just closed on my floor, for all I know.

On average, how many stops will the elevator make until it opens on my floor (including the stop on my floor)? For example, if I am waiting on the second floor, and I heard the doors closing on the garage level, then the elevator would open on my floor in two stops.

Extra credit: Instead of four floors, suppose my condo had n floors. On average, how many stops will the elevator make until it opens on my floor?

From the simulation below, we see that the average number of stops when there are four floors is approximately 2.83 (exactly 17/6).

```
from itertools import cycle
from random import choice, randint

def avg_stops(n=4, runs=100000):
    rotate = lambda l, k: l[-k:] + l[:-k]
    floors = list(range(n))
    # Floors visited in one full round trip: 0, 1, ..., n-1, n-2, ..., 1
    elevator_cycle = list(range(n)) + list(range(n - 2, 0, -1))
    total_stops = 0
    for _ in range(runs):
        my_floor, start = choice(floors), randint(0, len(elevator_cycle) - 1)
        for f in cycle(rotate(elevator_cycle, start)):
            total_stops += 1
            if f == my_floor:
                break
    return total_stops / runs

print(avg_stops())
```
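The Monte Carlo estimate can be confirmed exactly by averaging over all (closing position, waiting floor) pairs, assuming the doors-closed position is uniform over the 2n − 2 positions of the elevator's cycle:

```python
from fractions import Fraction

def exact_avg_stops(n=4):
    # Elevator cycle for n floors: 0, 1, ..., n-1, n-2, ..., 1
    cyc = list(range(n)) + list(range(n - 2, 0, -1))
    L = len(cyc)
    total = Fraction(0)
    for start in range(L):       # position where the doors just closed
        for floor in range(n):   # my floor, uniformly random
            steps = 1
            while cyc[(start + steps) % L] != floor:
                steps += 1
            total += Fraction(steps, L * n)
    return total

print(exact_avg_stops(4))  # 17/6, i.e. about 2.83 stops
```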

You are the coach at Riddler Fencing Academy, where your three students are squaring off against a neighboring squad. Each of your students has a different probability of winning any given point in a match. The strongest fencer has a 75 percent chance of winning each point. The weakest has only a 25 percent chance of winning each point. The remaining fencer has a 50 percent probability of winning each point.

The match will be a relay. First, one of your students will face off against an opponent. As soon as one of them reaches a score of 15, they are both swapped out. Then, a different student of yours faces a different opponent, continuing from wherever the score left off. When one team reaches 30 (not necessarily the same team that first reached 15), both fencers are swapped out. The remaining two fencers continue the relay until one team reaches 45 points.

As the coach, you can choose the order in which your three students occupy the three positions in the relay: going first, second or third. How will you order them? And then what will be your team’s chances of winning the relay?

The simulation below estimates the winning probability of each of the six possible orderings; the coach should pick the ordering with the highest estimate.

```
from itertools import permutations
from random import random

def winning_probabilities(runs=100000):
    win_prob = {'w': 0.25, 's': 0.75, 'm': 0.5}
    probs = []
    for p1, p2, p3 in permutations(win_prob.keys()):
        total_wins = 0
        for _ in range(runs):
            s1, s2 = 0, 0
            # First leg: until one team reaches 15
            while s1 < 15 and s2 < 15:
                if random() < win_prob[p1]:
                    s1 += 1
                else:
                    s2 += 1
            # Second leg: until one team reaches 30
            while s1 < 30 and s2 < 30:
                if random() < win_prob[p2]:
                    s1 += 1
                else:
                    s2 += 1
            # Final leg: until one team reaches 45
            while s1 < 45 and s2 < 45:
                if random() < win_prob[p3]:
                    s1 += 1
                else:
                    s2 += 1
            if s1 == 45:
                total_wins += 1
        probs.append((p1, p2, p3, total_wins / runs))
    return probs

for p1, p2, p3, prob in winning_probabilities():
    print(f"Probability of the permutation {(p1, p2, p3)} winning is {prob}")
```

I have a most peculiar menorah. Like most menorahs, it has nine total candles — a central candle, called the shamash, four to the left of the shamash and another four to the right. But unlike most menorahs, the eight candles on either side of the shamash are numbered. The two candles adjacent to the shamash are both 1, the next two candles out from the shamash are both 2, the next pair are both 3 and the outermost pair are both 4.

The shamash is always lit. How many ways are there to light the remaining eight candles so that the sums on either side of the menorah are “balanced”? (For example, one such way is to light candles 1 and 4 on one side and candles 2 and 3 on the other side. In this case, the sums on both sides are 5, so the menorah is balanced.)

The number of ways of lighting the candles satisfying the conditions is 25. The different ways of lighting the candles are given below, where e.g. ('l', 2) denotes the candle numbered 2 to the left of the shamash:

```
(('l', 1), ('r', 1))
(('l', 2), ('r', 2))
(('l', 3), ('r', 3))
(('l', 4), ('r', 4))
(('l', 1), ('l', 2), ('r', 3))
(('l', 1), ('l', 3), ('r', 4))
(('l', 3), ('r', 1), ('r', 2))
(('l', 4), ('r', 1), ('r', 3))
(('l', 1), ('l', 2), ('r', 1), ('r', 2))
(('l', 1), ('l', 3), ('r', 1), ('r', 3))
(('l', 1), ('l', 4), ('r', 1), ('r', 4))
(('l', 1), ('l', 4), ('r', 2), ('r', 3))
(('l', 2), ('l', 3), ('r', 1), ('r', 4))
(('l', 2), ('l', 3), ('r', 2), ('r', 3))
(('l', 2), ('l', 4), ('r', 2), ('r', 4))
(('l', 3), ('l', 4), ('r', 3), ('r', 4))
(('l', 1), ('l', 2), ('l', 3), ('r', 2), ('r', 4))
(('l', 1), ('l', 2), ('l', 4), ('r', 3), ('r', 4))
(('l', 2), ('l', 4), ('r', 1), ('r', 2), ('r', 3))
(('l', 3), ('l', 4), ('r', 1), ('r', 2), ('r', 4))
(('l', 1), ('l', 2), ('l', 3), ('r', 1), ('r', 2), ('r', 3))
(('l', 1), ('l', 2), ('l', 4), ('r', 1), ('r', 2), ('r', 4))
(('l', 1), ('l', 3), ('l', 4), ('r', 1), ('r', 3), ('r', 4))
(('l', 2), ('l', 3), ('l', 4), ('r', 2), ('r', 3), ('r', 4))
(('l', 1), ('l', 2), ('l', 3), ('l', 4), ('r', 1), ('r', 2), ('r', 3), ('r', 4))
```

```
from itertools import product, combinations

def menorah_lighting(n=4):
    side_sum = lambda comb, side: sum(i for s, i in comb if s == side)
    candles = list(product(["l", "r"], range(1, n + 1)))
    cnt, lightings = 0, []
    for k in range(2, 2 * n + 1):
        for comb in combinations(candles, k):
            if side_sum(comb, "l") == side_sum(comb, "r"):
                lightings.append(comb)
                cnt += 1
    return cnt, lightings

cnt, lightings = menorah_lighting()
print(cnt)
for l in lightings:
    print(l)
```

I have three dice on my desk that I fiddle with while working, much to the chagrin of my co-workers. For the uninitiated, the d4 is a tetrahedron that is equally likely to land on any of its four faces (numbered 1 through 4), the d6 is a cube that is equally likely to land on any of its six faces (numbered 1 through 6), and the d8 is an octahedron that is equally likely to land on any of its eight faces (numbered 1 through 8).

I like to play a game in which I roll all three dice in “numerical” order: the d4, then the d6 and then the d8. I win this game when the three rolls form a strictly increasing sequence (such as 2, 4, 7, but not 2, 4, 4). What is my probability of winning?

Extra credit: Instead of three dice, I now have six dice: a d4, d6, d8, d10, d12 and d20. If I roll all six dice in “numerical” order, what is the probability I’ll get a strictly increasing sequence?

From the simulation below, we see that the probability of winning with the d4, the d6 and the d8 is 0.25 (exactly 1/4); the second call prints the probability of winning with all six dice.

```
from random import choice

def prob(dice_num_faces, runs=10000000):
    dice = {n: list(range(1, n + 1)) for n in dice_num_faces}
    cnt_succ = 0
    for _ in range(runs):
        roll = [choice(dice[d]) for d in sorted(dice.keys())]
        cnt_succ += all(i < j for i, j in zip(roll, roll[1:]))
    return cnt_succ / runs

print(prob([4, 6, 8]))
print(prob([4, 6, 8, 10, 12, 20]))
```
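For the three-dice case the answer can also be computed exactly by enumerating all 4 · 6 · 8 = 192 outcomes:

```python
from fractions import Fraction
from itertools import product

def exact_prob(faces):
    # Enumerate every joint roll and count the strictly increasing ones
    rolls = product(*(range(1, f + 1) for f in faces))
    wins = sum(1 for roll in rolls
               if all(a < b for a, b in zip(roll, roll[1:])))
    total = 1
    for f in faces:
        total *= f
    return Fraction(wins, total)

print(exact_prob([4, 6, 8]))  # 1/4
```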

Branch and bound is an algorithm design paradigm for discrete and combinatorial optimization problems, as well as mathematical optimization. A branch-and-bound algorithm consists of a systematic enumeration of candidate solutions by means of state space search: the set of candidate solutions is thought of as forming a rooted tree with the full set at the root. The algorithm explores branches of this tree, which represent subsets of the solution set. Before enumerating the candidate solutions of a branch, the branch is checked against upper and lower estimated bounds on the optimal solution, and is discarded if it cannot produce a better solution than the best one found so far by the algorithm.

The algorithm depends on efficient estimation of the lower and upper bounds of regions/branches of the search space. If no bounds are available, the algorithm degenerates to an exhaustive search.

The following is the skeleton of a generic branch and bound algorithm for minimizing an arbitrary objective function f. To obtain an actual algorithm from this, one requires a bounding function g that computes lower bounds of f on nodes of the search tree, as well as a problem-specific branching rule.

Using a heuristic, find a solution x_h to the optimization problem. Store its value, B = f(x_h). (If no heuristic is available, set B to infinity.) B will denote the best solution found so far, and will be used as an upper bound on candidate solutions.

Initialize a queue to hold a partial solution with none of the variables of the problem assigned.

Loop until the queue is empty:

Take a node N off the queue.

If N represents a single candidate solution x and f(x) < B, then x is the best solution so far. Record it and set B ← f(x).

Else, branch on N to produce new nodes N₁, …, Nₖ. For each of these:

If g(Nᵢ) > B, do nothing; since the lower bound on this node is greater than the upper bound of the problem, it will never lead to the optimal solution, and can be discarded.

Else, store Nᵢ on the queue.

Several different queue data structures can be used. A FIFO queue-based implementation yields a breadth-first search, while a stack (LIFO queue) yields a depth-first algorithm. A best-first branch and bound algorithm can be obtained by using a priority queue that sorts nodes on their lower bound. Examples of best-first search algorithms with this premise are Dijkstra’s algorithm and its descendant A* search. The depth-first variant is recommended when no good heuristic is available for producing an initial solution, because it quickly produces full solutions, and therefore upper bounds.
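As an illustrative sketch (not from the source; `branch`, `bound`, `is_solution` and `value` are hypothetical problem-specific callables), the best-first variant of the steps above might look like:

```python
import heapq
from math import inf

def branch_and_bound(root, branch, bound, is_solution, value):
    """Best-first branch and bound for minimization.

    branch(node)      -> iterable of child nodes
    bound(node)       -> lower bound on any solution extending node
    is_solution(node) -> True if node is a complete candidate solution
    value(node)       -> objective value of a complete solution
    """
    best_val, best_node = inf, None
    tie = 0  # tie-breaker so heapq never has to compare nodes
    heap = [(bound(root), tie, root)]
    while heap:
        lb, _, node = heapq.heappop(heap)
        if lb >= best_val:
            continue  # pruned: cannot beat the best solution found so far
        if is_solution(node):
            v = value(node)
            if v < best_val:
                best_val, best_node = v, node
        else:
            for child in branch(node):
                if bound(child) < best_val:  # only keep promising children
                    tie += 1
                    heapq.heappush(heap, (bound(child), tie, child))
    return best_node, best_val

# Toy usage: pick one number from each of three pairs to minimize the sum
pairs = [(3, 5), (2, 7), (4, 6)]
best, val = branch_and_bound(
    (),  # a node is the tuple of choices made so far
    branch=lambda nd: [nd + (x,) for x in pairs[len(nd)]],
    bound=lambda nd: sum(nd),  # valid lower bound: all numbers are positive
    is_solution=lambda nd: len(nd) == len(pairs),
    value=sum,
)
print(best, val)  # (3, 2, 4) 9
```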

The traveling salesman problem (also called the travelling salesperson problem or TSP) asks the following question: “Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city?” It is an NP-hard problem in combinatorial optimization, important in theoretical computer science and operations research.

For the TSP example below we will use a branch and bound best-first search algorithm to find the shortest tour. The cities involved are numbered 0 to 5 and the distance matrix (repeated in the code below) is:

- | 0 | 1 | 2 | 3 | 4 | 5 |
---|---|---|---|---|---|---|
0 | ∞ | 1 | 4 | 5 | 4 | 5 |
1 | 1 | ∞ | 6 | 3 | 1 | 6 |
2 | 4 | 6 | ∞ | 1 | 1 | 5 |
3 | 5 | 3 | 1 | ∞ | 2 | 1 |
4 | 4 | 1 | 1 | 2 | ∞ | 5 |
5 | 5 | 6 | 5 | 1 | 5 | ∞ |

One of the minimum length tours given the above distance matrix is 0 → 1 → 4 → 2 → 3 → 5 → 0, which has a length of 10.

We use the following lower bounding method to evaluate any partial tour. Suppose the TSP instance has n cities and let P be a partial tour. Then a lower bound for the length of any complete tour containing the given partial tour is given by

length(P) + k · d_min

where k is the number of edges yet needed to complete the tour and d_min is the length of the smallest edge between any two cities in the instance.

Here is the Python code, which is a straightforward adaptation of the generic version of the branch and bound algorithm to the TSP, using a priority queue and the bound function described above:

```
from dataclasses import dataclass, field
from math import inf
from queue import PriorityQueue

def length(path, dist_mat):
    l = 0
    for i in range(len(path) - 1):
        l += dist_mat[path[i]][path[i + 1]]
    return l

def bound(node, dist_mat):
    # Lower bound: length of the partial tour plus the number of edges
    # still needed times the smallest edge in the instance
    d_min = min(d for row in dist_mat for d in row)
    n_left = len(dist_mat) - node.level
    return length(node.path, dist_mat) + n_left * d_min

@dataclass(order=True)
class Node:
    bound: float
    level: int = field(compare=False)
    path: list = field(compare=False)

def tsp_bb(dist_mat):
    opt_tour = None
    n = len(dist_mat)
    pq = PriorityQueue()
    u = Node(0, 0, [0])
    u.bound = bound(u, dist_mat)
    pq.put(u)
    minlength = inf
    while not pq.empty():
        u = pq.get()
        if u.level == n - 1:
            # All cities visited: close the tour and compare
            u.path.append(0)
            len_u = length(u.path, dist_mat)
            if len_u < minlength:
                minlength = len_u
                opt_tour = u.path
        else:
            for i in set(range(n)) - set(u.path):
                u_new = Node(0, u.level + 1, u.path + [i])
                u_new.bound = bound(u_new, dist_mat)
                if u_new.bound < minlength:
                    pq.put(u_new)
    return opt_tour, length(opt_tour, dist_mat)

dist_mat = [
    [inf, 1, 4, 5, 4, 5],
    [1, inf, 6, 3, 1, 6],
    [4, 6, inf, 1, 1, 5],
    [5, 3, 1, inf, 2, 1],
    [4, 1, 1, 2, inf, 5],
    [5, 6, 5, 1, 5, inf],
]
print(tsp_bb(dist_mat))
```
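Since the instance is tiny, the branch and bound result can be cross-checked by brute force over all 5! = 120 tours starting and ending at city 0:

```python
from itertools import permutations
from math import inf

dist_mat = [
    [inf, 1, 4, 5, 4, 5],
    [1, inf, 6, 3, 1, 6],
    [4, 6, inf, 1, 1, 5],
    [5, 3, 1, inf, 2, 1],
    [4, 1, 1, 2, inf, 5],
    [5, 6, 5, 1, 5, inf],
]

def tsp_brute(dist_mat):
    # Try every ordering of cities 1..n-1 between the fixed endpoints 0...0
    n = len(dist_mat)
    best_len, best_tour = inf, None
    for perm in permutations(range(1, n)):
        tour = [0, *perm, 0]
        tour_len = sum(dist_mat[a][b] for a, b in zip(tour, tour[1:]))
        if tour_len < best_len:
            best_len, best_tour = tour_len, tour
    return best_tour, best_len

print(tsp_brute(dist_mat))  # a shortest tour of length 10
```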

A group of 101 people join a social network, and each person has a random, 50 percent chance of being friends with each of the other 100 people. Friendship is a symmetric relationship, so if you’re friends with me, then I am also friends with you. I pick a random person among the 101 — let’s suppose her name is Marcia. On average, how many friends would you expect each of Marcia’s friends to have?

From the simulation below, we see that the expected number of friends of each of Marcia’s friends is 50.5. It is interesting to note that Marcia herself would on average have only 50 friends.

```
import networkx as nx
from random import random, randint, choice

def exp_num_friends(n, runs=10000):
    total_deg = 0
    for _ in range(runs):
        G = nx.Graph()
        G.add_nodes_from(range(n))
        for i in range(n - 1):
            for j in range(i + 1, n):
                if random() < 0.5:
                    G.add_edge(i, j)
        marcia = randint(0, n - 1)
        marcia_friends = list(G.adj[marcia].keys())
        if marcia_friends:
            total_deg += G.degree[choice(marcia_friends)]
    return total_deg / runs

print(exp_num_friends(101))
```
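The simulated value agrees with the exact friendship-paradox calculation: a friend of Marcia certainly has the edge back to Marcia, plus each of the n − 2 remaining people as a friend with probability 1/2. A minimal sketch:

```python
def exact_friend_degree(n, p=0.5):
    # Expected number of friends of a friend of Marcia:
    # the guaranteed edge to Marcia plus (n - 2) potential others
    return 1 + (n - 2) * p

def exact_own_degree(n, p=0.5):
    # Expected number of Marcia's own friends
    return (n - 1) * p

print(exact_friend_degree(101), exact_own_degree(101))  # 50.5 50.0
```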

The sum of the factors of 36 — including 36 itself — is 91. Coincidentally, 36 inches rounded to the nearest centimeter is … 91 centimeters!

Can you find another whole number like 36, where you can “compute” the sum of its factors by converting from inches to centimeters?

Extra credit: Can you find a third whole number with this property? How many more whole numbers can you find?

From the code below we see that 378 and 49600 are two numbers below 1,000,000 that satisfy the given property.

```
from functools import reduce

def sum_of_factors(n):
    # Sum of all divisors of n, including n itself
    return sum(set(reduce(list.__add__, ([i, n // i] for i in range(1, int(n**0.5) + 1) if n % i == 0))))

def inches_to_cm_same_as_sum_of_divisors():
    nums = []
    for i in range(37, 1000000):
        if round(i * 2.54) == sum_of_factors(i):
            nums.append(i)
    return nums

print(inches_to_cm_same_as_sum_of_divisors())
```
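The defining property can be spot-checked directly; for instance 378 = 2 · 3³ · 7 has divisor sum 960, and 378 inches is 960.12 cm:

```python
def sigma(n):
    # Sum of all divisors of n, including n itself (naive but clear)
    return sum(d for d in range(1, n + 1) if n % d == 0)

for n in (36, 378):
    assert sigma(n) == round(n * 2.54)

print(sigma(36), round(36 * 2.54))    # 91 91
print(sigma(378), round(378 * 2.54))  # 960 960
```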

In combinatorial mathematics, a Langford pairing, also called a Langford sequence, is a permutation of the sequence of 2n numbers 1, 1, 2, 2, …, n, n in which the two 1s are one unit apart, the two 2s are two units apart, and more generally the two copies of each number k are k units apart. Langford pairings are named after C. Dudley Langford, who posed the problem of constructing them in 1958. Langford’s problem is the task of finding Langford pairings for a given value of n.

Let (xᵢ₁, xᵢ₂) be the pair of decision variables giving the positions of the two copies of the number i in the Langford sequence, for i = 1, …, n. The decision variables need to satisfy the following constraints:

- xᵢⱼ ∈ {1, …, 2n} for all i and j;
- all of the xᵢⱼ are different, so that each of the 2n positions holds exactly one number;
- xᵢ₂ − xᵢ₁ = i + 1, so that the two copies of i are i units apart.

Here is the Python code implementing the above model using the fantastic Google OR-Tools library:

```
from ortools.sat.python import cp_model
from collections import defaultdict

def langford_seq_checker(seq):
    if not seq:
        return False
    pos_map = defaultdict(list)
    for i, n in enumerate(seq):
        pos_map[n].append(i)
    for n, p in pos_map.items():
        if len(p) != 2:
            return False
        if (p[1] - p[0]) != n + 1:
            return False
    return True

def langford_cp(n):
    model = cp_model.CpModel()
    # x[i][0], x[i][1] are the positions (1..2n) of the two copies of i+1
    x = [[model.NewIntVar(1, 2 * n, 'x[%i][%i]' % (i, j)) for j in range(2)] for i in range(n)]
    model.AddAllDifferent([x[i][j] for i in range(n) for j in range(2)])
    for i in range(n):
        # The two copies of the number i+1 are i+2 positions apart
        model.Add(x[i][1] - x[i][0] == i + 2)
    solver = cp_model.CpSolver()
    status = solver.Solve(model)
    if status == cp_model.OPTIMAL or status == cp_model.FEASIBLE:
        out = [0] * (2 * n + 1)
        for i in range(n):
            for j in range(2):
                out[solver.Value(x[i][j])] = i + 1
        return out[1:]
    else:
        return None
```

Here is the sequence for n = 100 calculated using the above code:

`[51, 79, 80, 82, 1, 30, 1, 64, 70, 87, 95, 50, 21, 66, 33, 4, 29, 97, 20, 15, 4, 69, 22, 52, 59, 28, 100, 81, 46, 26, 36, 57, 49, 8, 21, 15, 30, 91, 31, 20, 83, 86, 8, 34, 23, 22, 29, 68, 33, 40, 53, 14, 51, 72, 28, 84, 26, 74, 89, 63, 32, 55, 50, 75, 98, 76, 14, 36, 23, 7, 31, 48, 64, 93, 96, 46, 52, 7, 34, 70, 66, 79, 49, 80, 59, 65, 82, 92, 71, 57, 40, 69, 94, 32, 99, 78, 61, 87, 67, 90, 6, 43, 62, 88, 53, 19, 95, 6, 60, 81, 35, 77, 24, 85, 73, 97, 68, 55, 56, 11, 48, 54, 58, 63, 83, 19, 72, 100, 86, 91, 25, 11, 74, 13, 27, 47, 17, 24, 45, 75, 84, 10, 76, 38, 41, 43, 35, 13, 89, 9, 44, 65, 10, 42, 17, 37, 25, 39, 61, 9, 71, 16, 27, 98, 12, 62, 67, 93, 3, 60, 2, 96, 3, 2, 78, 56, 54, 12, 16, 18, 92, 58, 38, 47, 45, 5, 41, 94, 73, 77, 90, 5, 88, 37, 99, 44, 42, 39, 18, 85]`

Another way to solve the Langford problem is to treat it as a set covering problem. To visualize this we make use of the following array for n = 3:

- | 1 | 2 | 3 | 4 | 5 | 6 |
---|---|---|---|---|---|---|
1 | 1 | | 1 | | | |
2 | | 1 | | 1 | | |
3 | | | 1 | | 1 | |
4 | | | | 1 | | 1 |
5 | 2 | | | 2 | | |
6 | | 2 | | | 2 | |
7 | | | 2 | | | 2 |
8 | 3 | | | | 3 | |
9 | | 3 | | | | 3 |

To solve the problem, we need to select one row for the 1s in the sequence, one row for the 2s and one row for the 3s, such that if we stack these rows on top of each other, no column contains more than one number.

In the case of a general n it is easy to see that the number of columns in the matrix will be 2n and the number of rows will be (2n − 2) + (2n − 3) + … + (n − 1), since the number k can be placed in 2n − k − 1 ways. Let x₁, …, x_m be the set of decision variables, one for each row in the matrix, such that xⱼ ∈ {0, 1}. We have the following constraints:

- We choose only one row among all rows containing the number i in the matrix, where i = 1, …, n.
- For each column, we choose only one row among all rows with a non-zero value in that column.

The Python code implementing the above model using the Google OR-Tools library is given below:

```
from ortools.linear_solver import pywraplp

def langford_ip(n):
    solver = pywraplp.Solver.CreateSolver('SCIP')
    n_rows, n_cols = sum(range(n - 1, 2 * n - 1)), 2 * n
    matrix = [[0 for j in range(n_cols)] for i in range(n_rows)]
    out = [0 for i in range(n_cols)]
    # Setting up the covering matrix: one row per possible placement
    j = 0
    for i in range(n):
        for k in range(2 * n - i - 2):
            matrix[j][k] = i + 1
            matrix[j][k + i + 2] = i + 1
            j += 1
    x = [solver.IntVar(0, 1, 'x[%i]' % j) for j in range(n_rows)]
    # Row constraints: exactly one placement per number
    j = 0
    for i in range(n):
        solver.Add(sum([x[k] for k in range(j, j + 2 * n - i - 2)]) == 1)
        j += 2 * n - i - 2
    # Column constraints: each position is used exactly once
    for i in range(n_cols):
        inds = []
        for j in range(n_rows):
            if matrix[j][i]:
                inds.append(j)
        solver.Add(sum([x[k] for k in inds]) == 1)
    solver.Minimize(sum([x[i] for i in range(n_rows)]))
    status = solver.Solve()
    if status == pywraplp.Solver.OPTIMAL:
        for i in range(n_rows):
            if x[i].solution_value():
                for j in range(n_cols):
                    out[j] += matrix[i][j]
        return out
    else:
        return None
```

For the positive cases (n ≡ 0 or 3 (mod 4)) there is a direct algorithm for constructing the sequence. This beautiful algorithm was discovered by Roy O. Davies in 1959.

Here are the details, where R(s) denotes the reversal of a sequence s. Let x = ⌈n/4⌉ and put a = 2x − 1, b = 4x − 2, c = 4x − 1, d = 4x; let p be the odd numbers in [1, a), q the even numbers in [2, a), r the odd numbers in (a, b) and s the even numbers in (a, b).

If 4 divides n, the sequence is

R(s), R(p), b, p, c, s, d, R(r), R(q), b, a, q, c, r, a, d.

If n ≡ 3 (mod 4), it is

R(s), R(p), b, p, c, s, a, R(r), R(q), b, a, q, c, r.

The Python code implementing the above algorithm is given below

```
from math import ceil

def R(l):
    return list(reversed(l))

def langford_davies(n):
    x = ceil(n / 4)
    a, b, c, d = 2 * x - 1, 4 * x - 2, 4 * x - 1, 4 * x
    p = [i for i in range(1, a) if i % 2 == 1]      # odd numbers below a
    q = [i for i in range(2, a) if i % 2 == 0]      # even numbers below a
    r = [i for i in range(a + 2, b) if i % 2 == 1]  # odd numbers between a and b
    s = [i for i in range(a + 1, b) if i % 2 == 0]  # even numbers between a and b
    if n % 4 == 0:
        return R(s) + R(p) + [b] + p + [c] + s + [d] + R(r) + R(q) + [b, a] + q + [c] + r + [a, d]
    if n % 4 == 3:
        return R(s) + R(p) + [b] + p + [c] + s + [a] + R(r) + R(q) + [b, a] + q + [c] + r
    return None
```
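A quick standalone validity check (for n = 4 the construction above yields [2, 3, 4, 2, 1, 3, 1, 4]):

```python
def is_langford(seq):
    # Each number k must appear exactly twice, with the two copies
    # k + 1 positions apart (i.e. exactly k numbers between them)
    pos = {}
    for i, v in enumerate(seq):
        pos.setdefault(v, []).append(i)
    return all(len(p) == 2 and p[1] - p[0] == k + 1 for k, p in pos.items())

print(is_langford([2, 3, 4, 2, 1, 3, 1, 4]))  # True
print(is_langford([1, 2, 3, 1, 2, 3]))        # False
```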

I have a spherical pumpkin. I carefully calculate its volume in cubic inches, as well as its surface area in square inches.

But when I came back to my calculations, I saw that my units — the square inches and the cubic inches — had mysteriously disappeared from my calculations. But it didn’t matter, because both numerical values were the same!

What is the radius of my spherical pumpkin?

Extra credit: Let’s dispense with 3D thinking. Instead, suppose I have an n-hyperspherical pumpkin. Once again, I calculate its volume (with units of inchesⁿ) and surface area (with units of inchesⁿ⁻¹). Miraculously, the numerical values are once again the same! What is the radius of my n-hyperspherical pumpkin?

Let r be the radius of the spherical pumpkin. Equating the numerical values of the volume and the surface area, we have

(4/3)πr³ = 4πr² ⟹ r = 3.

More generally, the surface area of an n-ball is the derivative of its volume with respect to the radius. Since V_n(r) = C_n rⁿ for a constant C_n depending only on n, this gives

A_n(r) = dV_n(r)/dr = (n/r) V_n(r).

If V_n(r) = A_n(r), we have r = n.
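A quick numerical check of r = n, using the closed-form n-ball volume V_n(r) = π^(n/2) rⁿ / Γ(n/2 + 1) and A_n(r) = (n/r) V_n(r):

```python
from math import pi, gamma

def volume(n, r):
    # Volume of an n-ball of radius r
    return pi ** (n / 2) * r ** n / gamma(n / 2 + 1)

def surface(n, r):
    # Surface area is the derivative of the volume with respect to r
    return n * volume(n, r) / r

for n in range(2, 8):
    # At r = n the numerical values of volume and surface area coincide
    assert abs(volume(n, n) - surface(n, n)) < 1e-9 * volume(n, n)

print(volume(3, 3), surface(3, 3))  # both equal 36*pi
```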

Congratulations, you’ve made it to the fifth round of The Squiddler — a competition that takes place on a remote island. In this round, you are one of the 16 remaining competitors who must cross a bridge made up of 18 pairs of separated glass squares.

To cross the bridge, you must jump from one pair of squares to the next. However, you must choose one of the two squares in a pair to land on. Within each pair, one square is made of tempered glass, while the other is made of normal glass. If you jump onto tempered glass, all is well, and you can continue on to the next pair of squares. But if you jump onto normal glass, it will break, and you will be eliminated from the competition.

You and your competitors have no knowledge of which square within each pair is made of tempered glass. The only way to figure it out is to take a leap of faith and jump onto a square. Once a pair is revealed — either when someone lands on a tempered square or a normal square — all remaining competitors take notice and will choose the tempered glass when they arrive at that pair.

On average, how many of the competitors will survive and make it to the next round of the competition?

Let $S(n, m)$ be the expected number of survivors when there are $n$ competitors and $m$ pairs of glass squares remaining. With probability $1/2$ the lead jumper picks the tempered square (no one is eliminated), and with probability $1/2$ one competitor is eliminated; either way the pair is revealed. This gives the recurrence relation

$S(n, m) = \tfrac{1}{2}\,S(n, m-1) + \tfrac{1}{2}\,S(n-1, m-1), \qquad S(n, 0) = n, \quad S(0, m) = 0.$

The Python code to compute $S(n, m)$ is given below:

```
def S(n, m):
    if n == 0:
        return 0
    if m == 0:
        return n
    return 0.5*S(n, m-1) + 0.5*S(n-1, m-1)
```

On average, if there are 16 competitors and 18 pairs of glass squares, we will have $S(16, 18) \approx 7$ survivors.
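The recurrence also has a closed form: the number of eliminations is the number of wrong guesses, a Binomial$(m, 1/2)$ variable capped at $n$, so $S(n, m) = n - E[\min(\mathrm{Bin}(m, \tfrac{1}{2}), n)]$. A quick cross-check (the helper name `S_closed` is mine):

```python
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def S(n, m):
    # Expected survivors: the recurrence from the text, memoized
    if n == 0:
        return 0.0
    if m == 0:
        return float(n)
    return 0.5 * S(n, m - 1) + 0.5 * S(n - 1, m - 1)

def S_closed(n, m):
    # Survivors = n - E[min(Binomial(m, 1/2), n)]: each revealed pair costs
    # one competitor with probability 1/2, capped at n eliminations
    return n - sum(min(k, n) * comb(m, k) for k in range(m + 1)) / 2 ** m

print(S(16, 18), S_closed(16, 18))  # both ~7.00008
```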

Duke Leto Atreides knows for a fact that there are not one, but two traitors within his royal household. The suspects are Lady Jessica, Dr. Wellington Yueh, Gurney Halleck and Duncan Idaho. Leto’s advisor, Thufir Hawat, will assist him in questioning the four suspects. Anyone who is a traitor will tell a lie, while anyone who is not a traitor will tell the truth.

Upon interrogation, Jessica says that she is not the traitor, while Wellington similarly says that he is not the traitor. Gurney says that Jessica or Wellington is a traitor. Finally, Duncan says that Jessica or Gurney is a traitor. (Thufir, being the logician that he is, notes that when someone says thing A is true or thing B is true, both A and B can technically be true.)

After playing back the interrogations in his mind, Thufir is ready to report the name of one of the traitors to the duke. Whose name does he report?

Let $j, w, g, d$ be boolean variables indicating whether Jessica, Wellington, Gurney and Duncan, respectively, are traitors. Jessica’s and Wellington’s denials are self-consistent either way (a traitor lying about being innocent is indistinguishable from an innocent telling the truth), so they add no constraints. The remaining statements, together with the fact that there are two traitors, reduce to the following set of logical propositions:

- If $g$, then $\lnot(j \lor w)$ (a traitorous Gurney lies about his claim).

- If $\lnot g$, then $j \lor w$ (a truthful Gurney’s claim holds).

- If $d$, then $\lnot(j \lor g)$.

- If $\lnot d$, then $j \lor g$.

- At least two of $j, w, g, d$ are true.

Here is the Python code in Z3 to check if the propositions above can be satisfied:

```
from z3 import *

j = Bool('j')
w = Bool('w')
g = Bool('g')
d = Bool('d')
s = Solver()
s.add(Implies(g, Not(Or(j, w))))
s.add(Implies(Not(g), Or(j, w)))
s.add(Implies(d, Not(Or(j, g))))
s.add(Implies(Not(d), Or(j, g)))
# At least two traitors
s.add(Or([And(j, w), And(j, g), And(j, d), And(w, g), And(w, d), And(g, d)]))
# Enumerate all satisfying models
while s.check() == sat:
    m = s.model()
    print(m)
    s.add(Or(j != m[j], g != m[g], w != m[w], d != m[d]))
```

We see that there are two cases where the propositions above are satisfied:

```
[g = False, j = True, w = True, d = False]
[w = True, g = False, j = False, d = True]
```

In both cases, Wellington is one of the traitors, which means that Thufir reported Wellington’s name to the Duke.
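The same conclusion can be reached without Z3 by enumerating all 16 truth assignments directly:

```python
from itertools import product

solutions = []
for j, w, g, d in product([False, True], repeat=4):
    ok = ((not g or not (j or w)) and (g or (j or w))      # Gurney's statement
          and (not d or not (j or g)) and (d or (j or g))  # Duncan's statement
          and sum([j, w, g, d]) >= 2)                      # at least two traitors
    if ok:
        solutions.append((j, w, g, d))

print(solutions)                            # exactly two models
print(all(w for _, w, _, _ in solutions))   # True: Wellington in both
```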

Suppose you have an equilateral triangle. You pick three random points, one along each of its three edges, uniformly along the length of each edge — that is, each point along each edge has the same probability of being selected.

With those three randomly selected points, you can form a new triangle inside the original one. What is the probability that the center of the larger triangle also lies inside the smaller one?

The logic for determining if a point is inside or outside a given triangle is described here.

The Python code for implementing the above logic is given below:

```
import numpy as np
from math import sqrt
from random import random

# Equilateral triangle centered at the origin
triangle = [np.array([-.5, -sqrt(3)/6]),
            np.array([.5, -sqrt(3)/6]),
            np.array([0, sqrt(3)/3])]
center = np.array([0, 0])

def exp_percent_center_inside(triangle, center, runs=100000):
    det = np.cross
    c = 0
    for _ in range(runs):
        [A, B, C], v = triangle, center
        # One uniform random point on each side
        D, E, F = A + random()*(B - A), B + random()*(C - B), C + random()*(A - C)
        # Barycentric test: is v inside triangle DEF?
        v0, v1, v2 = D, E - D, F - D
        a = (det(v, v2) - det(v0, v2))/det(v1, v2)
        b = -((det(v, v1) - det(v0, v1))/det(v1, v2))
        if a > 0 and b > 0 and a + b < 1:
            c += 1
    return c/runs

print(exp_percent_center_inside(triangle, center))
```

The simulation yields an estimate of the probability that the center of the equilateral triangle lies inside the smaller random triangle.

Flowfree is a puzzle game that is available as an Android/iOS app and online. The game is played on a square grid with pairs of same-colored squares. The objective is to join each of the same-colored pairs by means of an unbroken chain of squares of the same color. A typical starting position together with the solution is shown in the figure below:

Define the sets $M = \{0, \dots, m-1\}$ and $N = \{0, \dots, n-1\}$, where $m$ is the size of the grid and $n$ is the number of pairs of same-colored cells. Also define variables $x_{ijk} = 1$ if cell $(i, j)$ requires color $k$, otherwise $x_{ijk} = 0$, and parameters $s_{ijk} = 1$ if cell $(i, j)$ initially has color $k$, otherwise $s_{ijk} = 0$. The conditions required by the puzzle are enforced as follows.

- Ensure the solution is consistent with the starting configuration: $x_{ijk} \ge s_{ijk}$ for all $i, j \in M$, $k \in N$.

- Each cell contains a single color: $\sum_{k \in N} x_{ijk} = 1$ for all $i, j \in M$.

- Ensure a continuous chain for each color. This is done by ensuring that the initially colored squares have exactly one adjacent square of the same color and all other squares have exactly two adjacent squares of the same color. Define $f_{ijk} = s_{ijk} + \sum_{(p,q)\,\text{adjacent to}\,(i,j)} x_{pqk}$; because the endpoints carry $s_{ijk} = 1$, the requirement in both cases becomes $f_{ijk} = 2$ whenever $x_{ijk} = 1$, which is linearized as $2x_{ijk} - 5(1 - x_{ijk}) \le f_{ijk} \le 2x_{ijk} + 5(1 - x_{ijk})$.

Here is the code for the above formulation and puzzle:

```
from ortools.sat.python import cp_model
from enum import IntEnum

class Color(IntEnum):
    YELLOW = 0
    BROWN = 1
    GREEN = 2
    BLUE = 3

def puzzle():
    m, n = 5, 4
    s = {(i, j, k): 0 for i in range(m) for j in range(m) for k in range(n)}
    s[(0, 1, Color.YELLOW)], s[(4, 0, Color.YELLOW)] = 1, 1
    s[(0, 4, Color.BROWN)], s[(1, 1, Color.BROWN)] = 1, 1
    s[(3, 2, Color.BLUE)], s[(2, 4, Color.BLUE)] = 1, 1
    s[(1, 4, Color.GREEN)], s[(4, 4, Color.GREEN)] = 1, 1
    return m, n, s

def flowfree(puzzle):
    m, n, s = puzzle
    model = cp_model.CpModel()
    x = {(i, j, k): model.NewIntVar(0, 1, 'x(%i,%i,%i)' % (i, j, k))
         for i in range(m) for j in range(m) for k in range(n)}
    # Consistency with the starting configuration
    for i in range(m):
        for j in range(m):
            for k in range(n):
                model.Add(x[(i, j, k)] >= s[(i, j, k)])
    # One color per cell
    for i in range(m):
        for j in range(m):
            model.Add(sum(x[(i, j, k)] for k in range(n)) == 1)
    # Chain condition: f must equal 2 whenever the cell has color k
    M = list(range(m))
    for i in range(m):
        for j in range(m):
            for k in range(n):
                f = s[(i, j, k)] + \
                    sum(x[(p, j, k)] for p in range(i-1, i+2) if p != i and p in M) + \
                    sum(x[(i, q, k)] for q in range(j-1, j+2) if q != j and q in M)
                model.Add(f >= 2*x[(i, j, k)] - 5*(1 - x[(i, j, k)]))
                model.Add(f <= 2*x[(i, j, k)] + 5*(1 - x[(i, j, k)]))
    solver = cp_model.CpSolver()
    status = solver.Solve(model)
    if status == cp_model.OPTIMAL or status == cp_model.FEASIBLE:
        for i in range(m):
            print(" ".join(str(Color(k)).ljust(15) for j in range(m) for k in range(n)
                           if solver.Value(x[(i, j, k)]) != 0))
    else:
        print("Couldn't solve.")

flowfree(puzzle())
```

The output of the above program is as follows:

```
Color.YELLOW Color.YELLOW Color.BROWN Color.BROWN Color.BROWN
Color.YELLOW Color.BROWN Color.BROWN Color.GREEN Color.GREEN
Color.YELLOW Color.GREEN Color.GREEN Color.GREEN Color.BLUE
Color.YELLOW Color.GREEN Color.BLUE Color.BLUE Color.BLUE
Color.YELLOW Color.GREEN Color.GREEN Color.GREEN Color.GREEN
```
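A quick way to validate the printed grid is to check the chain condition directly: every endpoint cell must have exactly one orthogonal neighbour of its own colour, and every other cell exactly two. A small checker for the solution above (grid and endpoints transcribed by hand, with `L` standing for blue):

```python
grid = [
    "YYBBB",
    "YBBGG",
    "YGGGL",   # L = blue
    "YGLLL",
    "YGGGG",
]
endpoints = {(0, 1), (4, 0),   # yellow
             (0, 4), (1, 1),   # brown
             (3, 2), (2, 4),   # blue
             (1, 4), (4, 4)}   # green

def valid(grid, endpoints):
    m = len(grid)
    for i in range(m):
        for j in range(m):
            # Count orthogonal neighbours with the same colour
            same = sum(1 for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                       if 0 <= i + di < m and 0 <= j + dj < m
                       and grid[i + di][j + dj] == grid[i][j])
            if same != (1 if (i, j) in endpoints else 2):
                return False
    return True

print(valid(grid, endpoints))  # True
```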

The Canterbury Puzzles is a delightful collection of posers based on the exploits of the same group of pilgrims introduced by Geoffrey Chaucer in The Canterbury Tales. The anthology was compiled by the English puzzlist Henry Ernest Dudeney and first published in 1907. All the puzzles are mathematical in nature and many of them may be used to illustrate O.R. techniques. The following riddle, taken from the chapter entitled ‘The Merry Monks of Riddlewell’ is a classical I.P. allocation problem.

One day, when the monks were seated at their repast, the Abbot announced that a messenger had that morning brought news that a number of pilgrims were on the road and would require their hospitality. “You will put them,” he said, “in the square dormitory that has two floors with eight rooms on each floor. There must be eleven persons sleeping on each side of the building, and twice as many on the upper floor as the lower floor. Of course every room must be occupied, and you know my rule that not more than three persons may occupy the same room.” I give a plan of the two floors, from which it will be seen that the sixteen rooms are approached by a well staircase in the centre. After the monks had solved this little problem of accommodation, the pilgrims arrived, when it was found that they were three more in number than was at first stated. This necessitated a reconsideration of the question, but the wily monks succeeded in getting over the new difficulty without breaking the Abbot’s rules. The curious point of this puzzle is to discover the total number of pilgrims.

The monks were required to perform two allocations of pilgrims, each fulfilling the Abbot’s requirements, with the second allocation holding three more pilgrims than the first. On their behalf, we therefore define variables

$x_{ijkm} = \text{number of pilgrims in allocation } i,\ \text{floor } j,\ \text{row } k,\ \text{column } m,$

where $i \in \{0, 1\}$ (allocations), $j \in \{0, 1\}$ (floors), $k \in \{0, 1, 2\}$ (rows) and $m \in \{0, 1, 2\}$ (columns).

We both minimize and maximize the number of pilgrims in the final allocation; obtaining the same value both ways demonstrates that the solution to the puzzle is unique.

- Three more pilgrims in the final allocation than in the initial allocation: $\sum_{j,k,m} x_{1jkm} = \sum_{j,k,m} x_{0jkm} + 3$.

- Twice as many pilgrims on the upper floor as the lower floor in both allocations: $\sum_{k,m} x_{i1km} = 2 \sum_{k,m} x_{i0km}$ for $i \in \{0, 1\}$.

- Eleven pilgrims in the first and third rows (i.e. the front and back sides): $\sum_{j,m} x_{ijkm} = 11$ for $k \in \{0, 2\}$.

- Eleven pilgrims in the first and third columns (i.e. the left and right sides): $\sum_{j,k} x_{ijkm} = 11$ for $m \in \{0, 2\}$.

- Each room is occupied by at least one and no more than three pilgrims: $1 \le x_{ijkm} \le 3$ for $(k, m) \ne (1, 1)$.

- No pilgrims allocated to the center cells (i.e. the well staircase): $x_{ij11} = 0$.

The Python code for solving the puzzle using Google OR-Tools library is given below:

```
from ortools.linear_solver import pywraplp

def riddle_of_pilgrims():
    n, f, r, c = 2, 2, 3, 3
    solver = pywraplp.Solver.CreateSolver('SCIP')
    x = {(i, j, k, m): solver.IntVar(0, 3, 'x[%i][%i][%i][%i]' % (i, j, k, m))
         for m in range(c) for k in range(r)
         for j in range(f) for i in range(n)}
    # Three more pilgrims in the final allocation
    solver.Add(sum([x[(0, j, k, m)] for j in range(f)
                    for k in range(r) for m in range(c)]) + 3 ==
               sum([x[(1, j, k, m)] for j in range(f)
                    for k in range(r) for m in range(c)]))
    # Twice as many pilgrims on the upper floor
    for i in range(n):
        solver.Add(2*sum([x[(i, 0, k, m)] for k in range(r) for m in range(c)]) ==
                   sum([x[(i, 1, k, m)] for k in range(r) for m in range(c)]))
    # Eleven pilgrims on each side of the building
    for i in range(n):
        for k in set(range(r)) - {1}:
            solver.Add(sum([x[(i, j, k, m)] for j in range(f) for m in range(c)]) == 11)
    for i in range(n):
        for m in set(range(c)) - {1}:
            solver.Add(sum([x[(i, j, k, m)] for j in range(f) for k in range(r)]) == 11)
    # Every room holds between one and three pilgrims
    for i in range(n):
        for j in range(f):
            for k in range(r):
                for m in range(c):
                    if (k, m) != (1, 1):
                        solver.Add(x[(i, j, k, m)] >= 1)
                        solver.Add(x[(i, j, k, m)] <= 3)
    # The central stairwell is unoccupied
    for i in range(n):
        for j in range(f):
            solver.Add(x[(i, j, 1, 1)] == 0)
    solver.Minimize(sum([x[(1, j, k, m)] for j in range(f)
                         for k in range(r) for m in range(c)]))
    status = solver.Solve()
    if status == pywraplp.Solver.OPTIMAL:
        return solver.Objective().Value()
    else:
        return -1

print(riddle_of_pilgrims())
```

Using the code above (minimizing and then maximizing the final allocation give the same objective), we see that the total number of pilgrims is 30.
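The answer can also be verified without a solver. Enumerating the $3^8$ occupancy patterns per floor and matching lower and upper floors through the side-sum and floor-ratio constraints yields every feasible total, and therefore the unique final total (a sketch, assuming the standard 3×3-per-floor layout with a central stairwell):

```python
from itertools import product
from collections import defaultdict

# Room positions on one floor: 3x3 grid minus the central stairwell
P = [(0, 0), (0, 1), (0, 2), (1, 0), (1, 2), (2, 0), (2, 1), (2, 2)]
SIDES = [[p for p in P if p[0] == 0], [p for p in P if p[0] == 2],
         [p for p in P if p[1] == 0], [p for p in P if p[1] == 2]]

def side_sums(rooms):
    occ = dict(zip(P, rooms))
    return tuple(sum(occ[p] for p in side) for side in SIDES)

# Index every possible upper floor by (side sums, total occupancy)
upper = defaultdict(bool)
for rooms in product((1, 2, 3), repeat=8):
    upper[(side_sums(rooms), sum(rooms))] = True

feasible = set()
for rooms in product((1, 2, 3), repeat=8):    # candidate lower floors
    need = tuple(11 - s for s in side_sums(rooms))
    if upper[(need, 2 * sum(rooms))]:          # upper floor holds twice as many
        feasible.add(3 * sum(rooms))

print(sorted(feasible))                                       # [27, 30]
print([t + 3 for t in sorted(feasible) if t + 3 in feasible])  # final total: [30]
```

Only 27 and 30 pilgrims can be accommodated under the Abbot's rules, so the first arrangement held 27 and the final total was 30.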

In graph theory, an independent set, stable set, coclique or anticlique is a set of vertices in a graph, no two of which are adjacent. That is, it is a set $S$ of vertices such that for every two vertices in $S$, there is no edge connecting the two. Equivalently, each edge in the graph has at most one endpoint in $S$. A set is independent if and only if it is a clique in the graph’s complement. The size of an independent set is the number of vertices it contains. Independent sets have also been called “internally stable sets”, of which “stable set” is a shortening. The independent set decision problem is NP-complete, and hence it is not believed that there is an efficient algorithm for solving it.

A maximal independent set is an independent set that is not a proper subset of any other independent set. A maximum independent set is an independent set of largest possible size for a given graph $G$. This size is called the independence number of $G$ and is usually denoted by $\alpha(G)$. The optimization problem of finding such a set is called the maximum independent set problem. It is a strongly NP-hard problem. As such, it is unlikely that there exists an efficient algorithm for finding a maximum independent set of a graph. Every maximum independent set is also maximal, but the converse implication does not necessarily hold.

For a graph $G = (V, E)$, the integer programming model for finding a maximum independent set is as follows:

$\max \sum_{v \in V} x_v$ subject to $x_u + x_v \le 1$ for all $(u, v) \in E$, and $x_v \in \{0, 1\}$ for all $v \in V$.

The Boolean satisfiability problem (SAT) was the first problem shown to be NP-complete. Given a boolean expression, the satisfiability problem asks whether there exists a truth assignment to its variables for which the expression is true.

When we’re discussing the SAT problem, it’s essential to know about the conjunctive normal form (CNF). A boolean expression is said to be in CNF form if it’s a conjunction of a set of clauses, where each clause is defined as a disjunction (logical OR) of literals. We can define a literal as a variable or a negation of a variable.

Every boolean expression can be converted to CNF by repeatedly applying De Morgan’s laws, the law of double negation and the distributive laws of AND over OR (and vice versa). A CNF with at most $k$ literals in every clause is called a $k$-CNF; each literal is drawn, possibly negated, from a set of variables $\{x_1, \dots, x_n\}$.

In the 3-SAT problem, each clause has at most 3 literals. This problem is NP-complete. 3-SAT is one of Karp’s 21 NP-complete problems and is used as the starting point to prove that other problems are also NP-complete. One example is the independent set problem.

The reduction works as follows. The graph $G$ will have one vertex for each literal occurrence in a clause.

Connect the three literals in a clause to form a triangle; the independent set can then pick at most one vertex from each clause, which will correspond to the literal to be set to true.

Connect vertices if they label complementary literals; this ensures that the literals corresponding to the independent set do not conflict.

Take $k$ to be the number of clauses. The formula is satisfiable if and only if $G$ has an independent set of size $k$.
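Since the original formula is not reproduced here, the reduction can be illustrated on a small hypothetical instance (the 3-CNF below is my own example, not the one from the figure), using brute force in place of an ILP solver:

```python
from itertools import combinations

# Hypothetical 3-CNF: (x1 v x2 v -x3) & (-x1 v -x2 v x3) & (x1 v -x2 v x3)
clauses = [(1, 2, -3), (-1, -2, 3), (1, -2, 3)]

# One vertex per literal occurrence, identified by (clause index, literal)
vertices = [(c, lit) for c, cl in enumerate(clauses) for lit in cl]

def adjacent(u, v):
    # Triangle within a clause, plus edges between complementary literals
    return u[0] == v[0] or u[1] == -v[1]

best = max((s for r in range(len(vertices) + 1)
            for s in combinations(vertices, r)
            if all(not adjacent(u, v) for u, v in combinations(s, 2))),
           key=len)

print(len(best) == len(clauses))  # True: the formula is satisfiable
print(best)                       # one chosen (clause, literal) per clause
```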

The graph corresponding to the given 3-SAT instance is given below.

The corresponding incidence matrix of the graph can be found here.

The *incidence matrix* of a graph $G$ on $n$ vertices and $m$ edges is an $n \times m$ matrix $B$ such that $b_{ve} = 1$ if vertex $v$ is an endpoint of edge $e$, and $b_{ve} = 0$ otherwise.

The model for finding a maximum independent set using the incidence matrix is given by:

$\max \sum_{v=1}^{n} x_v$ subject to $\sum_{v=1}^{n} b_{ve}\, x_v \le 1$ for every edge $e$, and $x_v \in \{0, 1\}$.

The code for solving the 3-SAT problem by converting it into a graph (represented by an incidence matrix) using the reduction process mentioned above is given below:

```
import gurobipy as gp
from gurobipy import GRB, quicksum

def inc_mat(filepath):
    import re
    with open(filepath, 'r') as f:
        return [[int(i) for i in re.findall(r'-?\d+', line)] for line in f]

def max_ind_set(inc_mat):
    n_vertices, n_edges = len(inc_mat), len(inc_mat[0])
    isp = gp.Model("ISP")
    x = isp.addVars(range(n_vertices), vtype=GRB.BINARY, name="x")
    # Each edge may have at most one chosen endpoint
    isp.addConstrs((quicksum([inc_mat[i][j]*x[i] for i in range(n_vertices)]) <= 1
                    for j in range(n_edges)))
    obj = quicksum([x[i] for i in range(n_vertices)])
    isp.setObjective(obj, GRB.MAXIMIZE)
    isp.optimize()
    if isp.status == GRB.OPTIMAL:
        return [int(x[i].x) for i in range(n_vertices)]
    else:
        return None

FILEPATH = "incidence_matrix.txt"
print(max_ind_set(inc_mat(FILEPATH)))
```

The output is the 0–1 indicator vector of a maximum independent set, which corresponds to choosing one satisfied literal in the first clause, one in the second clause, one in the third clause and one in the fourth clause.

In computer science, the clique problem is the computational problem of finding cliques (subsets of vertices, all adjacent to each other, also called complete subgraphs) in a graph. Common formulations of the clique problem include finding a maximum clique (a clique with the largest possible number of vertices), finding a maximum weight clique in a weighted graph, listing all maximal cliques (cliques that cannot be enlarged), and solving the decision problem of testing whether a graph contains a clique larger than a given size.

A maximal clique is a set of vertices that induces a complete subgraph, and that is not a subset of the vertices of any larger complete subgraph. That is, it is a set $S$ such that every pair of vertices in $S$ is connected by an edge and every vertex not in $S$ is missing an edge to at least one vertex in $S$. A graph may have many maximal cliques, of varying sizes; finding the largest of these is the maximum clique problem.

The independent set problem and the clique problem are complementary: a clique in $G$ is an independent set in the complement graph $\bar{G}$ and vice versa. If $S$ is a maximal independent set in some graph, it is a maximal clique or maximal complete subgraph in the complementary graph.

Therefore, we can reuse the code for determining the maximum independent set of a graph. All we need is an additional function that returns the incidence matrix of the complementary graph given the original incidence matrix. The code for solving the maximum clique problem given an incidence matrix is given below.

Find the maximal clique in the graph below:

The incidence matrix for the graph is

```
import gurobipy as gp
from gurobipy import GRB, quicksum
from itertools import combinations

def comp_inc_mat(inc_mat):
    def mat_t(mat):
        n_r, n_c = len(mat), len(mat[0])
        return [[mat[j][i] for j in range(n_r)] for i in range(n_c)]
    n_v, inc_mat_t, c_inc_mat = len(inc_mat), mat_t(inc_mat), []
    # An edge belongs to the complement iff it is absent from the original graph
    for j, k in combinations(range(n_v), 2):
        r = [0 for _ in range(n_v)]
        r[j], r[k] = 1, 1
        if r not in inc_mat_t:
            c_inc_mat.append(r)
    return mat_t(c_inc_mat)

def max_ind_set(inc_mat):
    n_vertices, n_edges = len(inc_mat), len(inc_mat[0])
    isp = gp.Model("ISP")
    x = isp.addVars(range(n_vertices), vtype=GRB.BINARY, name="x")
    isp.addConstrs((quicksum([inc_mat[i][j]*x[i] for i in range(n_vertices)]) <= 1
                    for j in range(n_edges)))
    obj = quicksum([x[i] for i in range(n_vertices)])
    isp.setObjective(obj, GRB.MAXIMIZE)
    isp.optimize()
    if isp.status == GRB.OPTIMAL:
        return [int(x[i].x) for i in range(n_vertices)]
    else:
        return None

inc_mat_1 = [[0, 0, 0, 0, 0, 0, 1, 0],
             [0, 0, 1, 0, 0, 0, 0, 1],
             [0, 1, 0, 1, 0, 0, 0, 0],
             [1, 1, 1, 0, 0, 1, 0, 0],
             [0, 0, 0, 0, 1, 1, 1, 0],
             [1, 0, 0, 1, 1, 0, 0, 1]]
print(max_ind_set(comp_inc_mat(inc_mat_1)))
```

The output is the indicator vector of the vertices in a maximum independent set of the complementary graph, which is the same as the set of vertices in a maximum clique of the original graph.

While watching batter spread out on my waffle iron, and thinking back to a recent conversation I had with Friend-of-The-Riddler™ Benjamin Dickman, I noticed the following sequence of numbers:

Before you ask — yes, you can find this sequence on the On-Line Encyclopedia of Integer Sequences. However, for the full Riddler experience, I urge you to not look it up. See if you can find the next few terms in the sequence, as well as the pattern.

Now, for the actual riddle: Once you’ve determined the pattern, can you figure out the average value of the entire sequence?

Let the sequence be denoted by $r(n)$. You can find more about the sequence here 😊. Then $r(n)$ represents the number of ways an integer $n$ can be expressed as a sum of two squares (positive, negative, or zero). That is, $r(n)$ denotes the number of solutions in integers to the equation $x^2 + y^2 = n$. For example, $r(5) = 8$ since the solutions to $x^2 + y^2 = 5$ are $(1, 2)$, $(2, 1)$, $(-1, 2)$, $(-2, 1)$, $(1, -2)$, $(2, -1)$, $(-1, -2)$ and $(-2, -1)$. Because $r(n) = 0$ whenever $n$ has the form $4k + 3$, $r$ is a very erratic function. Thankfully, the problem is about the average value of $r(n)$ as $n \to \infty$. If we define $T(n) = \sum_{k=0}^{n} r(k)$ to be the number of solutions in integers to $x^2 + y^2 \le n$, then the average of $r(k)$ for $0 \le k \le n$ is $T(n)/(n+1)$.

The code (using brute force) for calculating $T(n)$ and $T(n)/(n+1)$ is given below:

```
from math import sqrt, ceil

def t(n):
    t, l = 0, ceil(sqrt(n)) + 1
    for i in range(0, n+1):
        for j in range(-l, l):
            for k in range(-l, l):
                if j**2 + k**2 == i:
                    t += 1
    return t, t/(n+1)

print(list(map(t, [1, 2, 3, 4, 5, 10, 20, 50, 100])))
```

Here is a table of $T(n)$ and $T(n)/(n+1)$ for a few values of $n$:

$n$ | 1 | 2 | 3 | 4 | 5 | 10 | 20 | 50 | 100
---|---|---|---|---|---|---|---|---|---
$T(n)$ | 5 | 9 | 9 | 13 | 21 | 37 | 69 | 161 | 317
$T(n)/(n+1)$ | 2.5 | 3 | 2.25 | 2.6 | 3.5 | 3.36 | 3.29 | 3.16 | 3.15

$\lim_{n \to \infty} T(n)/(n+1) = \pi$.

The proof from the reference below is based on a geometric interpretation of $T(n)$, and is due to Carl Friedrich Gauss: $T(n)$ is the number of points with integer coordinates in or on a circle of radius $\sqrt{n}$. For example, $T(5) = 21$ since the circle centered at the origin of radius $\sqrt{5}$ contains $21$ lattice points, as illustrated in the figure below:

If we draw a unit square (area $1$) centered at each of the $T(n)$ points, then the total area (in grey) of the squares is also $T(n)$. Thus we would expect the area of the squares to be approximately the area of the circle, or in general, $T(n)$ to be approximately $\pi n$. If we expand the circle of radius $\sqrt{n}$ by half the length of the diagonal of a unit square, i.e. by $\sqrt{2}/2$, then the expanded circle contains all the squares. If we contract the circle by the same amount, then the contracted circle is contained in the union of all the squares, as seen in the figure below:

Thus,

$\pi\left(\sqrt{n} - \tfrac{\sqrt{2}}{2}\right)^2 \le T(n) \le \pi\left(\sqrt{n} + \tfrac{\sqrt{2}}{2}\right)^2.$

Dividing each term by $n$ and applying the squeeze theorem for limits yields the desired result, $\lim_{n \to \infty} T(n)/n = \pi$.
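The brute-force counter above is cubic in $n$; a much faster count sums, for each $x$, the number of admissible $y$ values. Even modest $n$ then shows the average approaching $\pi$ (the function name `T` is mine, matching the notation above):

```python
from math import isqrt, pi

def T(n):
    # Number of integer points (x, y) with x^2 + y^2 <= n
    m = isqrt(n)
    return sum(2 * isqrt(n - x * x) + 1 for x in range(-m, m + 1))

print(T(5))                    # 21, matching the table
print(T(10**6) / (10**6 + 1))  # close to pi = 3.14159...
```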

The finals of the sport climbing competition has eight climbers, each of whom compete in three different events: speed climbing, bouldering and lead climbing. Based on their time and performance, each of the eight climbers is given a ranking (first through eighth, with no ties allowed) in each event, as well as a corresponding score ($1$ through $8$, respectively).

The three scores each climber earns are then multiplied together to give a final score. For example, a climber who placed second in speed climbing, fifth in bouldering and sixth in lead climbing would receive a score of $2 \times 5 \times 6$, or $60$ points. The gold medalist is whoever achieves the lowest final score among the eight finalists.

What is the highest (i.e., worst) score one could achieve in this event and still have a chance of winning (or at least tying for first place overall)?

Running the simulation below gives the worst score one could achieve and still have a chance of winning (or at least tying for first place overall).

```
from random import shuffle
from operator import mul
from functools import reduce

def max_min_score(np, ne, runs=1000000):
    # np players, ne events: sample random rankings and
    # track the largest winning (minimum) score seen
    best = 0
    for _ in range(runs):
        ranks = [list(range(1, np+1)) for i in range(ne)]
        for i in range(ne):
            shuffle(ranks[i])
        scores = [reduce(mul, [ranks[j][i] for j in range(ne)]) for i in range(np)]
        min_score = min(scores)
        if min_score > best:
            best = min_score
    return best

print(max_min_score(8, 3))
```

There are $n$ people available to carry out $n$ jobs. Each person is assigned to carry out exactly one job. Some individuals are better suited to particular jobs than others, so there is an estimated cost $c_{ij}$ if person $i$ is assigned to job $j$. The problem is to find a minimum cost assignment.

Definition of the variables.

$x_{ij} = 1$ if person $i$ does job $j$, and $x_{ij} = 0$ otherwise.

Definition of the constraints.

Each person does one job:

$\sum_{j=1}^{n} x_{ij} = 1$ for $i = 1, \dots, n$.

Each job is done by one person:

$\sum_{i=1}^{n} x_{ij} = 1$ for $j = 1, \dots, n$.

The variables are binary:

$x_{ij} \in \{0, 1\}$.

Definition of the objective function.

The cost of the assignment is minimized:

$\min \sum_{i=1}^{n} \sum_{j=1}^{n} c_{ij}\, x_{ij}$.

In the example there are five workers (numbered 0–4) and four tasks (numbered 0–3). Since there is one more worker than there are tasks, the “each person does one job” constraint is relaxed to at most one job per worker.

The costs of assigning workers to tasks are shown in the following table.

Worker \ Task | 0 | 1 | 2 | 3
---|---|---|---|---
0 | 90 | 80 | 75 | 70
1 | 35 | 85 | 55 | 65
2 | 125 | 95 | 90 | 95
3 | 45 | 110 | 95 | 115
4 | 50 | 100 | 90 | 100

The problem is to assign each worker to at most one task, with no two workers performing the same task, while minimizing the total cost. Since there are more workers than tasks, one worker will not be assigned a task.

Using the code below we get the following:

Total cost = 265.

Worker 0 assigned to task 3. Cost = 70.

Worker 1 assigned to task 2. Cost = 55.

Worker 2 assigned to task 1. Cost = 95.

Worker 3 assigned to task 0. Cost = 45.

```
import gurobipy as gp
from gurobipy import GRB

costs = [
    [90, 80, 75, 70],
    [35, 85, 55, 65],
    [125, 95, 90, 95],
    [45, 110, 95, 115],
    [50, 100, 90, 100],
]
num_workers = len(costs)
num_tasks = len(costs[0])
m = gp.Model('GC')
x = m.addVars([(i, j) for i in range(num_workers) for j in range(num_tasks)],
              vtype=GRB.BINARY, name="assignment")
# Each worker does at most one task
m.addConstrs((sum([x[(i, j)] for j in range(num_tasks)]) <= 1 for i in range(num_workers)))
# Each task is done by exactly one worker
m.addConstrs((sum([x[(i, j)] for i in range(num_workers)]) == 1 for j in range(num_tasks)))
m.setObjective(sum(costs[i][j] * x[(i, j)] for i in range(num_workers)
                   for j in range(num_tasks)), GRB.MINIMIZE)
m.optimize()
for v in m.getVars():
    if v.x:
        print(v.varName, v.x)
print(f'Total Cost: {m.objVal}')
```

Here is the output from Gurobi

```
Gurobi Optimizer version 9.1.2 build v9.1.2rc0 (win64)
Thread count: 6 physical cores, 12 logical processors, using up to 12 threads
Optimize a model with 9 rows, 20 columns and 40 nonzeros
Model fingerprint: 0xd44ba4c8
Variable types: 0 continuous, 20 integer (20 binary)
Coefficient statistics:
Matrix range [1e+00, 1e+00]
Objective range [4e+01, 1e+02]
Bounds range [1e+00, 1e+00]
RHS range [1e+00, 1e+00]
Found heuristic solution: objective 385.0000000
Presolve time: 0.00s
Presolved: 9 rows, 20 columns, 40 nonzeros
Variable types: 0 continuous, 20 integer (20 binary)
Root relaxation: objective 2.650000e+02, 7 iterations, 0.00 seconds
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
* 0 0 0 265.0000000 265.00000 0.00% - 0s
Explored 0 nodes (7 simplex iterations) in 0.03 seconds
Thread count was 12 (of 12 available processors)
Solution count 2: 265 385
Optimal solution found (tolerance 1.00e-04)
Best objective 2.650000000000e+02, best bound 2.650000000000e+02, gap 0.0000%
assignment[0,3] 1.0
assignment[1,2] 1.0
assignment[2,1] 1.0
assignment[3,0] 1.0
Total Cost: 265.0
```
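Since the instance is tiny, the optimum can also be confirmed by enumerating every assignment directly (choose an ordered selection of four of the five workers, one per task):

```python
from itertools import permutations

costs = [
    [90, 80, 75, 70],
    [35, 85, 55, 65],
    [125, 95, 90, 95],
    [45, 110, 95, 115],
    [50, 100, 90, 100],
]

# p[t] is the worker assigned to task t; one worker is left out
cost, workers = min(
    (sum(costs[w][t] for t, w in enumerate(p)), p)
    for p in permutations(range(len(costs)), len(costs[0]))
)
print(cost)                              # 265, matching the Gurobi log
print({t: w for t, w in enumerate(workers)})  # task -> worker
```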

Graph coloring belongs to the classical optimization problems and has been studied for a long time. We consider the vertex coloring problem, in which colors are assigned to the vertices of a graph such that no two adjacent vertices get the same color and the number of colors is minimized. The minimum number of colors needed for a given graph $G$ is called its chromatic number and denoted by $\chi(G)$. Computing the chromatic number of a graph is NP-hard. The graph coloring problem has many applications, e.g., register allocation, scheduling, frequency assignment and timetabling.

The classical ILP model for a graph $G = (V, E)$ is based on assigning a color $i$ to each vertex $v$. For this, it introduces assignment variables $x_{vi}$ for each vertex $v$ and color $i$, with $x_{vi} = 1$ if vertex $v$ is assigned color $i$ and $x_{vi} = 0$ otherwise. Here $H$ is an upper bound on the number of colors (e.g., the result of a heuristic) and is at most $|V|$. For modelling the objective function, additional binary variables $w_i$ are needed which take value $1$ if and only if color $i$ is used in the coloring. The model is given by:

$\min \sum_{i=1}^{H} w_i$ subject to $\sum_{i=1}^{H} x_{vi} = 1$ for all $v \in V$; $x_{ui} + x_{vi} \le w_i$ for all $(u, v) \in E$ and $i = 1, \dots, H$; $x_{vi}, w_i \in \{0, 1\}$.

The objective function minimizes the number of used colors. The first constraint ensures that each vertex receives exactly one color. For each edge $(u, v)$ there is a constraint making sure that adjacent vertices receive different colors, and that a color counts as used whenever some vertex receives it. This model has the advantage that it is simple and easy to use, and it can be easily extended to generalizations and/or restricted variants of the graph coloring problem. Since the number of variables is bounded by $|V| \cdot H + H$ and the number of constraints by $|V| + |E| \cdot H$, it can directly be used as input for a standard ILP solver.

Given a set of people, the task is to group them; people who have conflicts with each other cannot be assigned to the same group. Let the graph $G = (V, E)$ represent this scenario: the edges of the graph represent conflicts. The figure below illustrates a sample scenario. Note that any two people who are connected by an edge are not assigned the same group. Formulate this as an integer programming problem.

Using the code below we see that only 3 colours are required to color the graph. The coloring is shown below.

```
import gurobipy as gp
from gurobipy import GRB
from itertools import product

# list of vertices
V = list(range(1, 11))
# list of edges
E = [(1,2),(2,10),(9,8),(10,6),(8,6),(3,6),(3,4),(4,6),(5,6),(3,5),(6,7)]
# create the model
m = gp.Model('GC')
# declare variables for colors used
w = m.addVars(V, vtype=GRB.BINARY, name="colors")
# declare variables for vertex colors
x = m.addVars(list(product(V, V)), vtype=GRB.BINARY, name="vertex_colors")
# constraint - one color per vertex
m.addConstrs((sum([x[(v, i)] for i in V]) == 1 for v in V), name='one_vertex_color')
# constraint - different colors for vertices connected by an edge
m.addConstrs((x[(u, i)] + x[(v, i)] <= w[i] for (u, v) in E for i in V), name='edge_diff_color')
# objective - minimizing the number of colors used
m.setObjective(sum(w[i] for i in V), GRB.MINIMIZE)
m.optimize()
# printing the values of decision variables
for v in m.getVars():
    if v.x:
        print(v.varName, v.x)
```

Here is the output from Gurobi

```
Gurobi Optimizer version 9.1.2 build v9.1.2rc0 (win64)
Thread count: 6 physical cores, 12 logical processors, using up to 12 threads
Optimize a model with 120 rows, 110 columns and 430 nonzeros
Model fingerprint: 0x65c24c79
Variable types: 0 continuous, 110 integer (110 binary)
Coefficient statistics:
Matrix range [1e+00, 1e+00]
Objective range [1e+00, 1e+00]
Bounds range [1e+00, 1e+00]
RHS range [1e+00, 1e+00]
Found heuristic solution: objective 6.0000000
Presolve removed 30 rows and 0 columns
Presolve time: 0.00s
Presolved: 90 rows, 110 columns, 360 nonzeros
Variable types: 0 continuous, 110 integer (110 binary)
Root relaxation: objective 3.000000e+00, 53 iterations, 0.00 seconds
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
* 0 0 0 3.0000000 3.00000 0.00% - 0s
Explored 0 nodes (53 simplex iterations) in 0.03 seconds
Thread count was 12 (of 12 available processors)
Solution count 2: 3 6
Optimal solution found (tolerance 1.00e-04)
Best objective 3.000000000000e+00, best bound 3.000000000000e+00, gap 0.0000%
colors[1] 1.0
colors[4] 1.0
colors[5] 1.0
vertex_colors[1,1] 1.0
vertex_colors[2,4] 1.0
vertex_colors[3,1] 1.0
vertex_colors[4,5] 1.0
vertex_colors[5,5] 1.0
vertex_colors[6,4] 1.0
vertex_colors[7,1] 1.0
vertex_colors[8,1] 1.0
vertex_colors[9,5] 1.0
vertex_colors[10,1] 1.0
```
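The value $\chi(G) = 3$ for this small graph can be confirmed by brute force, trying every assignment of $k$ colors for increasing $k$:

```python
from itertools import product

V = list(range(1, 11))
E = [(1,2),(2,10),(9,8),(10,6),(8,6),(3,6),(3,4),(4,6),(5,6),(3,5),(6,7)]

def colorable(k):
    # Try every assignment of k colors to the 10 vertices
    return any(all(c[u - 1] != c[v - 1] for u, v in E)
               for c in product(range(k), repeat=len(V)))

chromatic = next(k for k in range(1, len(V) + 1) if colorable(k))
print(chromatic)  # 3: vertices 3, 4, 6 form a triangle, so 2 colors cannot suffice
```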

There is a budget $b$ available for investment in projects during the coming year and $n$ projects are under consideration, where $a_j$ is the outlay for project $j$ and $c_j$ is its expected return. The goal is to choose a set of projects so that the budget is not exceeded and the expected return is maximized.

Definition of the variables.

$x_j = 1$ if project $j$ is selected, and $x_j = 0$ otherwise.

Definition of the constraints.

The budget cannot be exceeded:

$\sum_{j=1}^{n} a_j x_j \le b$.

The variables are binary:

$x_j \in \{0, 1\}$.

Definition of the objective function.

The expected return is maximized:

$\max \sum_{j=1}^{n} c_j x_j$.

You are given a knapsack of 16 units capacity and a set of five items, each represented by a tuple of the form (item number, value, weight). The items are: $(1, 5, 4)$, $(2, 6, 2)$, $(3, 2, 6)$, $(4, 5, 8)$, $(5, 3, 5)$. Identify the highest valued sub-collection of items that can be fit inside the knapsack.

Using the code below, we see that items 1, 2 and 4 need to be selected for a maximum value of 16.

```
import gurobipy as gp
from gurobipy import GRB

items = [(1, 5, 4), (2, 6, 2), (3, 2, 6), (4, 5, 8), (5, 3, 5)]
m = gp.Model('KS')
x = m.addVars([i for i, _, _ in items], vtype=GRB.BINARY, name="selected")
m.addConstr(sum(x[i]*w for (i, _, w) in items) <= 16, name='weight')
m.setObjective(sum(x[i]*v for i, v, _ in items), GRB.MAXIMIZE)
m.optimize()
for v in m.getVars():
    if v.x:
        print(v.varName, v.x)
print(f'Total Value: {m.objVal}')
```

Here is the output from Gurobi

```
Gurobi Optimizer version 9.1.2 build v9.1.2rc0 (win64)
Thread count: 6 physical cores, 12 logical processors, using up to 12 threads
Optimize a model with 1 rows, 5 columns and 5 nonzeros
Model fingerprint: 0xad9237f2
Variable types: 0 continuous, 5 integer (5 binary)
Coefficient statistics:
Matrix range [2e+00, 8e+00]
Objective range [2e+00, 6e+00]
Bounds range [1e+00, 1e+00]
RHS range [2e+01, 2e+01]
Found heuristic solution: objective 13.0000000
Presolve removed 1 rows and 5 columns
Presolve time: 0.00s
Presolve: All rows and columns removed
Explored 0 nodes (0 simplex iterations) in 0.02 seconds
Thread count was 1 (of 12 available processors)
Solution count 2: 16 13
Optimal solution found (tolerance 1.00e-04)
Best objective 1.600000000000e+01, best bound 1.600000000000e+01, gap 0.0000%
selected[1] 1.0
selected[2] 1.0
selected[4] 1.0
Total Value: 16.0
```
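As a solver-independent cross-check, the same instance can be solved with the classic 0/1-knapsack dynamic program:

```
# Cross-check of the ILP answer with the standard 0/1-knapsack DP,
# using the same instance as above: (item number, value, weight).
items = [(1, 5, 4), (2, 6, 2), (3, 2, 6), (4, 5, 8), (5, 3, 5)]
capacity = 16

# best[w] = best achievable value with total weight at most w
best = [0] * (capacity + 1)
for _, value, weight in items:
    # iterate weights downward so each item is used at most once
    for w in range(capacity, weight - 1, -1):
        best[w] = max(best[w], best[w - weight] + value)

print(best[capacity])  # 16, matching the Gurobi solution
```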

Given a certain number of regions, the problem is to decide where to install a set of emergency service centers. For each possible center, the cost of installing a service center and which regions it can service are known. For instance, if the centers are fire stations, a station can service those regions for which a fire engine is guaranteed to arrive on the scene of a fire within eight minutes. The goal is to choose a minimum cost set of service centers so that each region is covered. Let $M = \{1, \ldots, m\}$ be the set of regions, and $N = \{1, \ldots, n\}$ the set of potential centers. Let $S_j \subseteq M$ be the regions that can be serviced by a center at $j \in N$, and $c_j$ its installation cost. We obtain the following problem.

To facilitate the description, we first construct a 0–1 incidence matrix $a$ such that $a_{ij} = 1$ if $i \in S_j$, and $a_{ij} = 0$ otherwise.

Definition of the variables.

$x_j = 1$ if center $j$ is selected, and $x_j = 0$ otherwise.

Definition of the constraints.

At least one center must service region $i$:

$\sum_{j=1}^{n} a_{ij} x_j \ge 1, \quad i = 1, \ldots, m$.

The variables are binary:

$x_j \in \{0, 1\}, \quad j = 1, \ldots, n$.

Definition of the objective function.

The total cost is minimized:

$\min \sum_{j=1}^{n} c_j x_j$.

Let’s consider a simple example where we assign cameras at different locations. Each location covers some areas of stadiums, and our goal is to install the least number of cameras such that all areas of stadiums are covered. We have stadium areas numbered from 1 to 15, and possible camera locations numbered from 1 to 8.

The stadium areas that the cameras can cover are given in table below:

Camera Location | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
---|---|---|---|---|---|---|---|---
Stadium Areas Covered | 1,3,4,6,7 | 4,7,8,12 | 2,5,9,11,13 | 1,2,14,15 | 3,6,10,12,14 | 8,14,15 | 1,2,6,11 | 1,2,4,6,8,12

Here $m = 15$ and $n = 8$.

We can then represent the above information using binary values. If stadium area $i$ can be covered from camera location $j$, then we have $a_{ij} = 1$. If not, $a_{ij} = 0$. For instance, stadium area 1 is covered by camera location 1, so $a_{1,1} = 1$, while stadium area 2 is not covered by camera location 1, so $a_{2,1} = 0$. The binary values $a_{ij}$ are given in the table below:

 | Camera1 | Camera2 | Camera3 | Camera4 | Camera5 | Camera6 | Camera7 | Camera8
---|---|---|---|---|---|---|---|---
Stadium1 | 1 | | | 1 | | | 1 | 1
Stadium2 | | | 1 | 1 | | | 1 | 1
Stadium3 | 1 | | | | 1 | | |
Stadium4 | 1 | 1 | | | | | | 1
Stadium5 | | | 1 | | | | |
Stadium6 | 1 | | | | 1 | | 1 | 1
Stadium7 | 1 | 1 | | | | | |
Stadium8 | | 1 | | | | 1 | | 1
Stadium9 | | | 1 | | | | |
Stadium10 | | | | | 1 | | |
Stadium11 | | | 1 | | | | 1 |
Stadium12 | | 1 | | | 1 | | | 1
Stadium13 | | | 1 | | | | |
Stadium14 | | | | 1 | 1 | 1 | |
Stadium15 | | | | 1 | | 1 | |

We introduce the binary variable $x_j$ to indicate if a camera is installed at location $j$: $x_j = 1$ if a camera is installed at location $j$, while $x_j = 0$ if not.

Our objective now is to minimize $\sum_{j=1}^{8} x_j$

subject to the constraints $\sum_{j=1}^{8} a_{ij} x_j \ge 1$ for $i = 1, \ldots, 15$, with $x_j \in \{0, 1\}$.

Using the code below, we see that at a minimum 4 cameras (e.g. at locations 2, 3, 4 and 5) are required.

```
import gurobipy as gp
from gurobipy import GRB
a = [[1,0,0,1,0,0,1,1],
[0,0,1,1,0,0,1,1],
[1,0,0,0,1,0,0,0],
[1,1,0,0,0,0,0,1],
[0,0,1,0,0,0,0,0],
[1,0,0,0,1,0,1,1],
[1,1,0,0,0,0,0,0],
[0,1,0,0,0,1,0,1],
[0,0,1,0,0,0,0,0],
[0,0,0,0,1,0,0,0],
[0,0,1,0,0,0,1,0],
[0,1,0,0,1,0,0,1],
[0,0,1,0,0,0,0,0],
[0,0,0,1,1,1,0,0],
[0,0,0,1,0,1,0,0]]
n_cams, n_stads = 8, 15
m = gp.Model('SC')
x = m.addVars(list(range(1, n_cams+1)), vtype = GRB.BINARY, name="camera")
m.addConstrs((sum(a[i][j]*x[j+1] for j in range(n_cams)) >=1 for i in range(n_stads)),
name='stadium_coverage')
m.setObjective(sum(x[i+1] for i in range(n_cams)), GRB.MINIMIZE)
m.optimize()
for v in m.getVars():
    if v.x:
        print(v.varName, v.x)
print(f'Total Value: {m.objVal}')
```

Here is the output from Gurobi:

```
Gurobi Optimizer version 9.1.2 build v9.1.2rc0 (win64)
Thread count: 6 physical cores, 12 logical processors, using up to 12 threads
Optimize a model with 15 rows, 8 columns and 36 nonzeros
Model fingerprint: 0x098bdca7
Variable types: 0 continuous, 8 integer (8 binary)
Coefficient statistics:
Matrix range [1e+00, 1e+00]
Objective range [1e+00, 1e+00]
Bounds range [1e+00, 1e+00]
RHS range [1e+00, 1e+00]
Found heuristic solution: objective 5.0000000
Presolve removed 11 rows and 3 columns
Presolve time: 0.00s
Presolved: 4 rows, 5 columns, 10 nonzeros
Variable types: 0 continuous, 5 integer (5 binary)
Root relaxation: objective 4.000000e+00, 4 iterations, 0.00 seconds
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
* 0 0 0 4.0000000 4.00000 0.00% - 0s
Explored 0 nodes (4 simplex iterations) in 0.03 seconds
Thread count was 12 (of 12 available processors)
Solution count 2: 4 5
Optimal solution found (tolerance 1.00e-04)
Best objective 4.000000000000e+00, best bound 4.000000000000e+00, gap 0.0000%
camera[2] 1.0
camera[3] 1.0
camera[4] 1.0
camera[5] 1.0
Total Value: 4.0
```
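With only 8 candidate locations there are just $2^8 = 256$ subsets, so the ILP answer can also be cross-checked by brute force:

```
from itertools import combinations

# Same coverage matrix as above: rows are stadium areas, columns are cameras.
a = [[1,0,0,1,0,0,1,1],
     [0,0,1,1,0,0,1,1],
     [1,0,0,0,1,0,0,0],
     [1,1,0,0,0,0,0,1],
     [0,0,1,0,0,0,0,0],
     [1,0,0,0,1,0,1,1],
     [1,1,0,0,0,0,0,0],
     [0,1,0,0,0,1,0,1],
     [0,0,1,0,0,0,0,0],
     [0,0,0,0,1,0,0,0],
     [0,0,1,0,0,0,1,0],
     [0,1,0,0,1,0,0,1],
     [0,0,1,0,0,0,0,0],
     [0,0,0,1,1,1,0,0],
     [0,0,0,1,0,1,0,0]]

def covers(subset):
    # True if every stadium area is covered by some chosen camera location
    return all(any(row[j] for j in subset) for row in a)

# enumerate subsets by increasing size and keep the smallest cover
best = min((s for k in range(1, 9)
            for s in combinations(range(8), k) if covers(s)), key=len)
print(len(best))  # 4
```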

John Dupont is attending a summer school where he must take four courses per day. Each course lasts an hour, but because of the large number of students, each course is repeated several times per day by different teachers. Section $k$ of course $i$, denoted $(i, k)$, meets at the hour $t_{ik}$, where courses start on the hour throughout the day. John’s preferences for when he takes courses are influenced by the reputation of the teacher, and also the time of day. Let $p_{ik}$ be his preference for section $(i, k)$. Unfortunately, due to conflicts, John cannot always choose the sections he prefers.

Formulate an integer program to choose a feasible course schedule that maximizes the sum of John’s preferences.

Modify the formulation so that John never has more than two consecutive hours of classes without a break.

Modify the formulation so that John chooses a schedule in which he starts his day as late as possible.

Let the number of courses be $n$, the maximum number of sections per course be $m$, and let the number of time periods in a day be $T$.

Input.

Let $a_{ikt} = 1$ if $t_{ik} = t$, i.e. there is a class of course $i$ for section $k$ at time $t$, and $a_{ikt} = 0$ otherwise.

Definition of the variables.

$x_{ikt} = 1$ if section $k$ of course $i$ is selected at time period $t$, and $x_{ikt} = 0$ otherwise.

Definition of the constraints.

At most one class can be taken in a given time period:

$\sum_{i=1}^{n} \sum_{k=1}^{m} x_{ikt} \le 1, \quad t = 1, \ldots, T$

(together with $x_{ikt} \le a_{ikt}$, so that only sections that actually meet at time $t$ can be chosen).

At most one section can be taken per course in a day:

$\sum_{k=1}^{m} \sum_{t=1}^{T} x_{ikt} \le 1, \quad i = 1, \ldots, n$.

Four courses must be taken in a day:

$\sum_{i=1}^{n} \sum_{k=1}^{m} \sum_{t=1}^{T} x_{ikt} = 4$.

Definition of the objective function.

The sum of preferences should be maximized:

$\max \sum_{i=1}^{n} \sum_{k=1}^{m} \sum_{t=1}^{T} p_{ik} x_{ikt}$.

If John can never have more than two consecutive hours of classes, he cannot take more than two classes in any three consecutive time periods. We need the following additional constraint:

$\sum_{i=1}^{n} \sum_{k=1}^{m} \left( x_{ik,t} + x_{ik,t+1} + x_{ik,t+2} \right) \le 2, \quad t = 1, \ldots, T-2$.

If John chooses a schedule in which he starts his day as late as possible, this is equivalent to maximizing the sum of the starting times of the classes he takes. The objective function should be modified as follows:

$\max \sum_{i=1}^{n} \sum_{k=1}^{m} \sum_{t=1}^{T} t \, x_{ikt}$.

Consider a factory producing $n$ different goods (say goods $1, \ldots, n$). These goods use $m$ different raw resources (say resources $1, \ldots, m$). Suppose that the decision maker observes that the available amounts of raw resources are $b_1, \ldots, b_m$. Each unit of resource $i$ costs $\rho_i$. Producing each unit of good $j$ requires $a_{1j}$ units of resource 1, $a_{2j}$ units of resource 2, and so on up to $a_{mj}$ units of resource $m$. Finally, each unit of good $j$ can be sold for $\sigma_j$ dollars. Consequently, the profit for each unit of good $j$ produced is $c_j = \sigma_j - \sum_{i=1}^{m} \rho_i a_{ij}$.

As the operator of this factory, the decision maker must decide how many units of each good to produce in order to maximize total profit.

To use linear programming, we first describe our problem with a number of linear functions. We also assume that we can produce fractional units of each product. The inputs are the values $a_{ij}$, $b_i$ and $c_j$; the decisions or outputs are the quantities $x_1, \ldots, x_n$ of each good to produce.

The first linear function describes the objective of the decision maker:

$c_1 x_1 + c_2 x_2 + \cdots + c_n x_n,$

which is the total profit.

Next, we have $m$ linear functions describing the amounts of each of the resources required: for resource $i$, the amount consumed is $a_{i1} x_1 + a_{i2} x_2 + \cdots + a_{in} x_n$, which must not exceed $b_i$.

The standard form of presenting the problem to solve is

$\max \left\{ c^T x : A x \le b, \ x \ge 0 \right\}.$
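Since an LP that has an optimum attains it at a vertex of the feasible polyhedron, a two-variable instance can be solved by brute-force vertex enumeration. The numbers below are made up purely for illustration; this is a sketch of the geometry, not a practical solver:

```
from itertools import combinations

# Illustration of max {c^T x : Ax <= b, x >= 0} in two variables.
# The data are hypothetical; x >= 0 is encoded as extra rows of A.
c = [3.0, 2.0]
A = [[1.0, 1.0], [2.0, 1.0], [-1.0, 0.0], [0.0, -1.0]]
b = [4.0, 6.0, 0.0, 0.0]

def solve_2x2(r1, r2, s1, s2):
    # Cramer's rule for the 2x2 system r1 . x = s1, r2 . x = s2
    det = r1[0]*r2[1] - r1[1]*r2[0]
    if abs(det) < 1e-12:
        return None
    return ((s1*r2[1] - r1[1]*s2) / det, (r1[0]*s2 - s1*r2[0]) / det)

# a vertex is a feasible intersection of two constraint boundaries
best = None
for i, j in combinations(range(len(A)), 2):
    v = solve_2x2(A[i], A[j], b[i], b[j])
    if v is None:
        continue
    if all(A[k][0]*v[0] + A[k][1]*v[1] <= b[k] + 1e-9 for k in range(len(A))):
        val = c[0]*v[0] + c[1]*v[1]
        if best is None or val > best[0]:
            best = (val, v)

print(best)  # optimum value and the vertex where it is attained
```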

A factory makes products $A$ and $B$, which make a profit of 8 and 6 dollars respectively per unit. 2 units of cement and 7 units of sand are utilized for producing a unit of type $A$, whereas 2 units of cement and 5 units of sand are required to produce a unit of type $B$. The total units of cement used must be less than or equal to 11 and the total units of sand used must be less than or equal to 34. Compute the number of units of each product that must be produced in order to maximize the profit.

Let $x_A$ and $x_B$ be the number of units of product $A$ and $B$ that are produced.

We need to maximize

$8 x_A + 6 x_B$

subject to the constraints

$2 x_A + 2 x_B \le 11, \quad 7 x_A + 5 x_B \le 34, \quad x_A, x_B \ge 0.$

Using the code below, we see that 3.25 units of product $A$ and 2.25 units of product $B$ need to be produced to obtain a maximum profit of 39.5.

```
import gurobipy as gp
from gurobipy import GRB
# Resources
R = ['Sand', 'Cement']
# Products
P = ['A', 'B']
# resource required by product
product_resource = {
('A', 'Cement'): 2,
('B', 'Cement'): 2,
('A', 'Sand'): 7,
('B', 'Sand'): 5
}
# total resources available
resources_avbl = {
'Cement': 11,
'Sand': 34
}
# profit by product
profit = {
'A': 8,
'B': 6
}
# creating the model
m = gp.Model('RAP')
# adding the decision variables for each product
x = m.addVars(P, name="produce")
# declaring resource constraints
m.addConstrs((sum(x[p]*product_resource[(p, r)] for p in P) <= resources_avbl[r] for r in R), name='resource')
# setting the objective
m.setObjective(x.prod(profit), GRB.MAXIMIZE)
# optimizing the model
m.optimize()
# printing the values of each decision variable
for v in m.getVars():
    if v.x > 1e-6:
        print(v.varName, v.x)
# printing the value of the objective function
print(f'Total Profit: {m.objVal}')
```

Here is the output from Gurobi:

```
Gurobi Optimizer version 9.1.2 build v9.1.2rc0 (win64)
Thread count: 6 physical cores, 12 logical processors, using up to 12 threads
Optimize a model with 2 rows, 2 columns and 4 nonzeros
Model fingerprint: 0xe9bff615
Coefficient statistics:
Matrix range [2e+00, 7e+00]
Objective range [6e+00, 8e+00]
Bounds range [0e+00, 0e+00]
RHS range [1e+01, 3e+01]
Presolve time: 0.01s
Presolved: 2 rows, 2 columns, 4 nonzeros
Iteration Objective Primal Inf. Dual Inf. Time
0 1.4000000e+31 3.500000e+30 1.400000e+01 0s
2 3.9500000e+01 0.000000e+00 0.000000e+00 0s
Solved in 2 iterations and 0.01 seconds
Optimal objective 3.950000000e+01
produce[A] 3.25
produce[B] 2.25
Total Profit: 39.5
```

I recently came across a rather peculiar recipe for something called Babylonian radish pie. Intrigued, I began to follow the directions, which said I could start with any number of cups of flour.

Any number? I mean, I had to start with some flour, so zero cups wasn’t an option. But according to the recipe, any positive value was fair game. Next, I needed a second amount of flour that was 3 divided by my original number. For example, if I had started with two cups of flour, then the recipe told me I now needed 3 divided by 2, or 1.5, cups at this point.

I was then instructed to combine these amounts of flour and discard half. Apparently, this was my new starting amount of flour. I was to repeat the process, combining this amount with 3 divided by it and then discarding half.

The recipe told me to keep doing this, over and over. Eventually, I’d have the proper number of cups of flour for my radish pie.

How many cups of flour does the recipe ultimately call for?

In the limit after convergence, let $x$ be the number of cups of flour required as per the recipe. We have

$x = \frac{1}{2}\left(x + \frac{3}{x}\right) \implies x^2 = 3 \implies x = \sqrt{3}.$

This involves recognizing that the recipe is in fact a classic algorithm, the Babylonian variant of “Newton’s” method, to compute square roots: here for $\sqrt{3}$, i.e. to solve $x^2 - 3 = 0$. The algorithm involves the following steps:

Start with some guess $x_0 > 0$.

Compute the sequence of improved guesses: $x_{n+1} = \frac{1}{2}\left(x_n + \frac{3}{x_n}\right)$.

In light of the above, it is easy to see that the recipe ultimately calls for $\sqrt{3} \approx 1.732$ cups of flour.

From the code below, we see that the recipe ultimately calls for about 1.732 cups.

```
def num_cups():
    s, c = 2, 0
    while True:
        c = 0.5*(s + 3/s)
        if abs(s-c) < 0.000001:
            break
        s = c
    return c

print(num_cups())
```
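Because each Babylonian step is a Newton step for $x^2 - 3 = 0$, the error roughly squares at every iteration. A quick sketch, starting from the same guess of 2 cups, makes this quadratic convergence visible:

```
from math import sqrt

# Track the error |x_n - sqrt(3)| of the Babylonian iteration
# x_{n+1} = (x_n + 3/x_n) / 2; each error is roughly the square
# of the previous one (quadratic convergence of Newton's method).
x = 2.0
errors = []
for _ in range(5):
    errors.append(abs(x - sqrt(3)))
    x = 0.5 * (x + 3.0 / x)

print(errors)
```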

Lately, Rushabh has been thinking about very large regular polygons — that is, a polygon all of whose sides and angles are congruent. His latest construction is a particular regular 1,000-gon, which has sides of length 2. Rushabh picks one of its longest diagonals, which connects two opposite vertices.

Now, this 1,000-gon has many diagonals, but only some are perpendicular to that first diagonal Rushabh picked. If you were to slice the polygon along all these perpendicular diagonals, you’d break the first diagonal into 500 distinct pieces. Rushabh is curious — what is the product of the lengths of all these pieces?

Extra credit: Now suppose you have a regular 1,001-gon, each of whose sides has length 2. You pick a vertex and draw an altitude to the opposite side of the polygon. Again, you slice the polygon along all the perpendicular diagonals, breaking the altitude into 500 distinct pieces. What’s the product of the lengths of all these pieces this time?

The segments of the problem are the vertical components of the polygon’s sides on its right half; these lengths are

$\ell_j = 2 \sin\left(\frac{(2j+1)\pi}{n}\right), \quad j = 0, 1, \ldots, \tfrac{n}{2} - 1,$

where $n$ is even (here we make use of the fact that the polygon’s side rotates through an angle of $\frac{2\pi}{n}$ as it moves to the next side).

The product of the lengths of all these segments is given by

$P = \prod_{j=0}^{n/2 - 1} 2 \sin\left(\frac{(2j+1)\pi}{n}\right).$

Let $P_n(x) = \frac{x^n - 1}{x - 1} = x^{n-1} + x^{n-2} + \cdots + 1$. The roots of this polynomial are the non-trivial $n$-th roots of unity, so

$P_n(x) = \prod_{k=1}^{n-1} \left( x - e^{2\pi i k / n} \right).$

Plugging in $1$ for $x$ yields $\prod_{k=1}^{n-1} \left( 1 - e^{2\pi i k / n} \right) = n$. Since $\left| 1 - e^{2\pi i k / n} \right| = 2 \sin\left(\frac{k\pi}{n}\right)$, this gives $\prod_{k=1}^{n-1} 2 \sin\left(\frac{k\pi}{n}\right) = n$.

Using the above result (applied once with $n$ and once with $n/2$, since the even terms satisfy $\sin\left(\frac{2j\pi}{n}\right) = \sin\left(\frac{j\pi}{n/2}\right)$), we have

$P = \prod_{\substack{k=1 \\ k \text{ odd}}}^{n-1} 2 \sin\left(\frac{k\pi}{n}\right) = \frac{\prod_{k=1}^{n-1} 2 \sin\left(\frac{k\pi}{n}\right)}{\prod_{j=1}^{n/2-1} 2 \sin\left(\frac{j\pi}{n/2}\right)} = \frac{n}{n/2} = 2.$

Therefore, when $n = 1000$, the required product has the value $2$.

For the case when $n$ is odd, we need to calculate the product

$Q = \prod_{j=0}^{(n-3)/2} 2 \sin\left(\frac{(2j+1)\pi}{n}\right) = \prod_{\substack{k=1 \\ k \text{ odd}}}^{n-2} 2 \sin\left(\frac{k\pi}{n}\right).$

We make use of the following observation: since $n$ is odd, the map $k \mapsto n - k$ is a bijection between the odd and even integers in $\{1, \ldots, n-1\}$, and $\sin\left(\frac{(n-k)\pi}{n}\right) = \sin\left(\frac{k\pi}{n}\right)$, so the products over odd and even $k$ are equal, while their product over all $k$ is $n$.

We have,

$Q = \prod_{\substack{k=1 \\ k \text{ odd}}}^{n-2} 2 \sin\left(\frac{k\pi}{n}\right) = \sqrt{n}.$

When $n = 1001$, the required product has the value $\sqrt{1001} \approx 31.64$.
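Both closed forms are easy to sanity-check numerically. The sketch below assumes the piece lengths $2\sin\left(\frac{(2j+1)\pi}{n}\right)$ and multiplies them out for $n = 1000$ (even case) and $n = 1001$ (odd case):

```
from math import sin, pi, sqrt, prod

# Multiply out the 500 piece lengths 2*sin((2j+1)*pi/n) directly.
def piece_product(n, pieces):
    return prod(2 * sin((2*j + 1) * pi / n) for j in range(pieces))

even_case = piece_product(1000, 500)   # closed form says 2
odd_case = piece_product(1001, 500)    # closed form says sqrt(1001)
print(even_case, odd_case)
```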

Earlier this year, Dakota Jones used a crystal key to gain access to a hidden temple, deep in the Riddlerian Jungle. According to an ancient text, the crystal had exactly six edges, five of which were 1 inch long. Also, the key was the largest such polyhedron (by volume) with these edge lengths.

However, after consulting an expert, Jones realized she had the wrong translation. Instead of definitively having five edges that were 1 inch long, the crystal only needed to have four edges that were 1 inch long. In other words, five edges could have been 1 inch (or all six, for that matter), but the crystal definitely had at least four edges that were 1 inch long.

The translator confirmed that the key was indeed the largest such polyhedron (by volume) with these edge lengths.

Once again, Jones needs your help. Now what is the volume of the crystal key?

Given the distances between the vertices of a tetrahedron, the volume can be computed using the Cayley–Menger determinant:

$288 V^2 = \begin{vmatrix} 0 & 1 & 1 & 1 & 1 \\ 1 & 0 & d_{12}^2 & d_{13}^2 & d_{14}^2 \\ 1 & d_{12}^2 & 0 & d_{23}^2 & d_{24}^2 \\ 1 & d_{13}^2 & d_{23}^2 & 0 & d_{34}^2 \\ 1 & d_{14}^2 & d_{24}^2 & d_{34}^2 & 0 \end{vmatrix}$

where the subscripts represent the vertices and $d_{ij}$ is the pairwise distance between them – i.e., the length of the edge connecting the two vertices.

If $a$, $b$, $c$ are the three edges that meet at a point, and $A$, $B$, $C$ the respectively opposite edges, the volume of the tetrahedron is given by

$V = \frac{\sqrt{4 a^2 b^2 c^2 - a^2 X^2 - b^2 Y^2 - c^2 Z^2 + XYZ}}{12},$

where

$X = b^2 + c^2 - A^2, \quad Y = a^2 + c^2 - B^2, \quad Z = a^2 + b^2 - C^2.$

In our case, the four unit edges form an equilateral triangular face together with one more edge of length 1, so let $a = A = B = C = 1$, $b = p$ and $c = q$. We have

$X = p^2 + q^2 - 1, \quad Y = q^2, \quad Z = p^2.$

We need to find the values of $p$ and $q$ that maximize

$144 V^2 = f(p, q) = 4 p^2 q^2 - X^2 - p^2 Y^2 - q^2 Z^2 + XYZ = 3 p^2 q^2 - \left( p^2 + q^2 - 1 \right)^2.$

Setting the partial derivatives $\frac{\partial f}{\partial p}$ and $\frac{\partial f}{\partial q}$ to zero, we get the equations

$2p \left( 3 q^2 - 2 (p^2 + q^2 - 1) \right) = 0, \quad 2q \left( 3 p^2 - 2 (p^2 + q^2 - 1) \right) = 0.$

Therefore, $V$ attains the maximum value of $\frac{\sqrt{3}}{12} \approx 0.144$ cubic inches when $p = q = \sqrt{2}$.

https://en.wikipedia.org/wiki/Tetrahedron
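As a numerical sanity check, a coarse grid search over the two free edge lengths reproduces this maximum. It uses the simplified objective $144 V^2 = 3p^2q^2 - (p^2+q^2-1)^2$ from above, and the grid bounds are an assumption:

```
from math import sqrt

# Volume of a tetrahedron with an equilateral unit face, one more unit
# edge, and free edges p, q, via 144 V^2 = 3 p^2 q^2 - (p^2 + q^2 - 1)^2.
def volume(p, q):
    f = 3 * p**2 * q**2 - (p**2 + q**2 - 1)**2
    return sqrt(f) / 12 if f > 0 else 0.0

# coarse grid search over p, q in [1.0, 2.0] (assumed search window)
best = max(
    (volume(p / 1000, q / 1000), p / 1000, q / 1000)
    for p in range(1000, 2001, 5) for q in range(1000, 2001, 5)
)
print(best)  # maximum volume and the (p, q) where it occurs
```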

A polyhedron with 6 edges has to be a tetrahedron. In this particular case, we have a tetrahedron where one face is an equilateral triangle of side length 1. The volume of the tetrahedron (which is a triangular pyramid whose base is an equilateral triangle of side 1 and whose height is $h$) is given by

$V = \frac{1}{3} \cdot \frac{\sqrt{3}}{4} \cdot h.$

The volume is maximized when the height is maximized, i.e. when another edge of length 1 is perpendicular to the base, so that $h = 1$. Therefore the volume of the crystal key is

$V = \frac{1}{3} \cdot \frac{\sqrt{3}}{4} \cdot 1 = \frac{\sqrt{3}}{12} \approx 0.144 \text{ cubic inches}.$

One morning, Phil was playing with his daughter, who loves to cut paper with her safety scissors. She especially likes cutting paper into “strips,” which are rectangular pieces of paper whose shorter sides are at most 1 inch long.

Whenever Phil gives her a piece of standard printer paper (8.5 inches by 11 inches), she picks one of the four sides at random and then cuts a 1-inch wide strip parallel to that side. Next, she discards the strip and repeats the process, picking another side at random and cutting the strip. Eventually, she is left with nothing but strips.

On average, how many cuts will she make before she is left only with strips?

Extra credit: Instead of 8.5 by 11-inch paper, what if the paper measures $m$ by $n$ inches? (And for a special case of this, what if the paper is square?)

From the simulation below, we see that the expected number of cuts before we are left only with strips is about 14.29. (A side of 8.5 inches takes eight 1-inch cuts to be reduced to a strip, just like a side of 9 inches, so we simulate an 11-by-9 sheet.)

```
from random import random
def avg_num_cuts_mc(m, n, runs=1000000):
    sum_num_cuts = 0
    for _ in range(runs):
        num_cuts, cm, cn = 0, m, n
        while cm > 1 and cn > 1:
            r = random()
            if r < 0.5:
                cm -= 1
            else:
                cn -= 1
            num_cuts += 1
        sum_num_cuts += num_cuts
    return sum_num_cuts/runs

print(f"Expected number of cuts is {avg_num_cuts_mc(11, 9)}")
```

We have the following recurrence relation for the expected number of cuts:

$C(m, n) = 1 + \frac{1}{2}\left( C(m-1, n) + C(m, n-1) \right)$

with initial conditions $C(1, n) = C(m, 1) = 0$.

Solving the recurrence relation using dynamic programming, we see that the expected number of cuts is indeed about 14.29.

```
def avg_num_cuts_rr(m, n):
    C = [[0 for _ in range(n)] for _ in range(m)]
    for i in range(1, m):
        for j in range(1, n):
            C[i][j] = 1 + 0.5*(C[i][j-1] + C[i-1][j])
    return C[m-1][n-1]

print(f"Expected number of cuts is {avg_num_cuts_rr(11, 9)}")
```