This is the next installment of my series on solving Advent of Code. This year it has a little twist: I’m first solving each puzzle myself and then going back and solving it with GenAI. For more context, read my intro post.
Human Solution
Nice; it was a little tricky to parse the description. Initially I thought they were asking how many beams were being created. But once I understood it, the problem was pretty easy (the trick was to mark the splitters themselves as seen, rather than the post-split locations).
function part1(path="input.txt")
    char_mat = reduce(vcat, permutedims.(collect.(readlines(path))))
    queue = [findfirst(==('S'), char_mat)]
    seen = falses(size(char_mat))
    while !isempty(queue)
        state = pop!(queue)
        !checkbounds(Bool, char_mat, state) && continue
        seen[state] && continue
        if char_mat[state] == '^'
            seen[state] = true
            for diff in [CartesianIndex(0, -1), CartesianIndex(0, 1)]
                push!(queue, state + diff)
            end
        else
            diff = CartesianIndex(1, 0)
            push!(queue, state + diff)
        end
    end
    return sum(seen)
end
GenAI Solution
Even nicer: it simulates the beams row by row. If it weren’t for the per-row Sets this would have a lower memory footprint than mine.
function solve()
    lines = readlines("input.txt")
    grid = permutedims(hcat(collect.(lines)...))
    rows, cols = size(grid)
    s_pos = findfirst(==('S'), grid[1, :])
    beams = Set([s_pos])
    total_splits = 0
    for r in 1:rows-1
        next_beams = Set{Int}()
        for c in beams
            if grid[r+1, c] == '^'
                total_splits += 1
                push!(next_beams, c - 1)
                push!(next_beams, c + 1)
            else
                push!(next_beams, c)
            end
        end
        beams = next_beams
    end
    println(total_splits)
end
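As an aside, my `reduce(vcat, …)` construction and the model's `permutedims(hcat(…))` build exactly the same character matrix; a quick sketch on a made-up two-line grid:

```julia
# Hypothetical two-line grid, just for illustration
lines = ["S..", ".^."]

# My idiom: turn each line into a 1×N row, then stack the rows
a = reduce(vcat, permutedims.(collect.(lines)))

# The model's idiom: glue the lines together as columns, then transpose
b = permutedims(hcat(collect.(lines)...))

a == b  # → true; both are a 2×3 Matrix{Char}
```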
Part 2
Human Solution
Oooh, finally something where brute force won’t work! It’s pretty easy to modify part 1 to solve this the brute-force way (but it will run for a very, very long time).
function part2(path="input.txt")
    char_mat = reduce(vcat, permutedims.(collect.(readlines(path))))
    start = findfirst(==('S'), char_mat)
    queue = [start]
    tot = 0
    while !isempty(queue)
        state = pop!(queue)
        if !checkbounds(Bool, char_mat, state)
            tot += 1
            continue
        end
        if char_mat[state] == '^'
            for diff in [CartesianIndex(0, -1), CartesianIndex(0, 1)]
                push!(queue, state + diff)
            end
        else
            diff = CartesianIndex(1, 0)
            push!(queue, state + diff)
        end
    end
    return tot
end
But again, it’s far more elegant to solve it with dynamic programming:
using Memoize

function part2(path="input.txt")
    char_mat = reduce(vcat, permutedims.(collect.(readlines(path))))
    start = findfirst(==('S'), char_mat)
    @memoize function count_paths(state)
        !checkbounds(Bool, char_mat, state) && return 1
        diffs = char_mat[state] == '^' ? [CartesianIndex(0, -1), CartesianIndex(0, 1)] : [CartesianIndex(1, 0)]
        return sum(count_paths(state + diff) for diff in diffs)
    end
    return count_paths(start)
end
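To convince myself the recursion is right, here is a tiny self-contained version on a hard-coded toy grid (my own example, using a plain `Dict` cache via `get!` instead of Memoize.jl so it runs without extra packages):

```julia
# Toy 3×3 grid (my own example): the beam falls from S onto the splitter
toy = [".S.",
       ".^.",
       "..."]
char_mat = reduce(vcat, permutedims.(collect.(toy)))
start = findfirst(==('S'), char_mat)

cache = Dict{CartesianIndex{2},Int}()
function count_paths(state)
    # Leaving the grid means one completed timeline
    checkbounds(Bool, char_mat, state) || return 1
    get!(cache, state) do
        diffs = char_mat[state] == '^' ?
            [CartesianIndex(0, -1), CartesianIndex(0, 1)] :
            [CartesianIndex(1, 0)]
        sum(count_paths(state + diff) for diff in diffs)
    end
end

count_paths(start)  # → 2: the single split sends one beam left and one right
```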
GenAI Solution
This is again a nice, elegant solution that I didn’t think of. I was all about the memoization approach; here you can do basically the same thing, but in a forward pass.
function solve()
    lines = readlines("input.txt")
    grid = permutedims(hcat(collect.(lines)...))
    rows, cols = size(grid)
    s_pos = findfirst(==('S'), grid[1, :])
    timelines = Dict{Int, BigInt}(s_pos => 1)
    for r in 1:rows-1
        next_timelines = Dict{Int, BigInt}()
        for (c, count) in timelines
            if 1 <= c <= cols && grid[r+1, c] == '^'
                # Split left
                c_left = c - 1
                if 1 <= c_left <= cols
                    next_timelines[c_left] = get(next_timelines, c_left, 0) + count
                end
                # Split right
                c_right = c + 1
                if 1 <= c_right <= cols
                    next_timelines[c_right] = get(next_timelines, c_right, 0) + count
                end
            elseif 1 <= c <= cols
                # Continue straight
                next_timelines[c] = get(next_timelines, c, 0) + count
            end
        end
        timelines = next_timelines
    end
    total_timelines = sum(values(timelines))
    println(total_timelines)
end
Now let’s see if we can tighten this up; the first thing we note is that we can use some fixed allocations.
USER:
Do you think we could reduce the memory allocation?
MODEL:
Of course. The current implementation creates a new dictionary in each iteration of the main loop to store the timeline information for the next row. This repeated allocation can be inefficient.
To reduce memory allocations, I can use a double-buffering technique. Here’s the plan:
- Instead of dictionaries, I will use two arrays, `current_timelines` and `next_timelines`, to store the counts for each column. Arrays are more memory-efficient for dense integer-keyed data.
- These two arrays will be allocated only once, outside the main loop.
- Inside the loop, I will:
- Calculate the state of `next_timelines` based on `current_timelines`.
- Clear `current_timelines` by filling it with zeros.
- Swap the two arrays so `next_timelines` becomes `current_timelines` for the subsequent iteration.
This approach reuses the memory for the arrays in every step, significantly reducing the memory allocation overhead and improving performance.
function solve()
    lines = readlines("input.txt")
    grid = permutedims(hcat(collect.(lines)...))
    rows, cols = size(grid)
    s_pos = findfirst(==('S'), grid[1, :])
    current_timelines = zeros(BigInt, cols)
    next_timelines = zeros(BigInt, cols)
    current_timelines[s_pos] = 1
    for r in 1:rows-1
        for c in 1:cols
            count = current_timelines[c]
            if count == 0
                continue
            end
            if grid[r+1, c] == '^'
                # Split left
                c_left = c - 1
                if c_left >= 1
                    next_timelines[c_left] += count
                end
                # Split right
                c_right = c + 1
                if c_right <= cols
                    next_timelines[c_right] += count
                end
            else
                # Continue straight
                next_timelines[c] += count
            end
        end
        # Zero the old buffer, then swap so next becomes current
        fill!(current_timelines, 0)
        current_timelines, next_timelines = next_timelines, current_timelines
    end
    total_timelines = sum(current_timelines)
    println(total_timelines)
end

solve()
And it works! It would be really nice if I could actually run Julia in the environment; then I could ask it to measure the performance increase.
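In lieu of that, here's roughly what I'd run locally with BenchmarkTools.jl. The two functions below are toy stand-ins I made up for the two allocation strategies (a fresh Dict per row vs. two reused buffers), not the actual puzzle solutions:

```julia
using BenchmarkTools  # ] add BenchmarkTools

# Toy stand-in for the Dict-based approach: a new Dict every step
function with_dicts(n)
    d = Dict{Int,Int}(1 => 1)
    for _ in 1:n
        nd = Dict{Int,Int}()
        for (k, v) in d
            nd[k] = get(nd, k, 0) + v
        end
        d = nd
    end
    sum(values(d))
end

# Toy stand-in for the double-buffered approach: two reused arrays
function with_buffers(n)
    cur = zeros(Int, 1); cur[1] = 1
    nxt = zeros(Int, 1)
    for _ in 1:n
        nxt[1] += cur[1]
        fill!(cur, 0)
        cur, nxt = nxt, cur
    end
    sum(cur)
end

@btime with_dicts(1000)    # allocates a fresh Dict on every step
@btime with_buffers(1000)  # should allocate almost nothing inside the loop
```

`@btime` reports the minimum time and total allocations per call, which is exactly the number the conversation above was trying to drive down.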