Advent of Code 2025
(adventofcode.com)
1213 points by vismit2000 3 days ago
While it's "only" 12 days, there are still 24 challenges. As there's no leaderboard and I do it for fun, I'll do it in 24 days.
That sounds healthy! But I would note that there's been interesting community discussions on reddit in past years, and I've gotten caught up in the "finish faster so I can go join the reddit discussion without spoilers". It turns out you can have amazing in-jokes about software puzzles and ascii art - but it also taught me in a very visceral way that even for "little" problems, building a visualizer (or making sure your data structures are easy-to-visualize) is startlingly helpful... also that it's nice to have people to commiserate with who got stuck in the same garden path/rathole that you did.
One way I've found is to break the problem down, and think about each step in reverse. So for example, what does the final stage want to do in order to achieve the result in a simple way? It might be that to get the final result it needs to sum numbers, but also needs to know their matching index in another array, plus some other identifier you got from an as-yet-unwritten previous step. This means your final stage needs a bunch of records that are (number, idx, sourceId), which means the step before needs to construct them - what information does it need to transform into that?
Write the simple code you want to write, and think about what makes the prior step possible in the easiest way and build your structures from there, filling in the gaps.
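The "work backwards" approach above might look like this in Python. All the names here (`final_stage`, `build_records`, `src-…` ids) are illustrative, not from any real puzzle:

```python
# Hypothetical sketch: decide what the final step needs (records of
# (number, idx, source_id)), then make the previous step produce exactly that.

def final_stage(records):
    """Last step: sum the numbers out of (number, idx, source_id) records."""
    return sum(number for number, idx, source_id in records)

def build_records(numbers, other_array):
    """Previous step: attach each number's matching index and a source id."""
    return [
        (number, other_array.index(number), f"src-{idx}")
        for idx, number in enumerate(numbers)
    ]

numbers = [3, 1, 4]
other = [1, 3, 4]
records = build_records(numbers, other)
print(final_stage(records))  # 8
```

Writing `final_stage` first forces the shape of `records`, which in turn tells you what `build_records` has to do; the gaps fill themselves in.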
Same. I usually try to use it as the "real-world problem" I need for learning a new language. Is there anywhere that people have starter advice/ templates for various languages? I'd love to know
- install like this
- initialize a directory with this command
- here are the VSCode extensions (or whatever IDE) that are the bare minimum for the language
- here's the command for running tests
learnxinyminutes.com is a good resource that tries to cover the key syntax/paradigms for each language, I find it a helpful starting point to skim.
Since the start, each problem has had 2 parts (2 "stars"). Part one sets up the problem, ensures you have parsed the input correctly, etc. After submitting the correct answer to that part, part 2 is revealed, which sometimes expands the problem space, adds new limits, etc. Something that solves part 1 might be inadequate for part 2.
Yes, but nothing (theoretically) stops him from saying: "congratulations, you have solved part 1, wait until tomorrow for part 2".
I think either the author believes people appreciate the two-stage challenge more than having one problem each day, or, more likely, the whole "infrastructure" is already built for two stages per day, and changing that would mean more work, potentially touching literally 10-year-old code. The reason for the reduced number of days is exactly the lack of time. I assume he preferred to have 12 days and modify the old code as little as possible. One stage per day might have been possible at the expense of having fewer challenges, which again defeats the purpose.
The "only" 12 days might be disappointing (but totally understandable), however I won't mourn the global leaderboard, which always felt pointless to me (even without the LLMs, the fact that it depends on what time of day you solved the problems really made it impractical for most people to actually compete). Private leaderboards with people in your timezone are much nicer.
The global leaderboard was a great way to find really crazy good people and solutions however - I picked through a couple of these guys solutions and learned a few things. One guy had even written his own special purpose language mainly to make AoC problems fast - he was of course a compilers guy.
Agreed! It’d be nice to surface that somehow. The subreddit is good but not everyone is there. I found a lot of interesting people and code in the folks who managed to finish challenges in like 4 minutes or whatever..
And this is how I know I am not a developer/programmer. I have no urge or interest in such event.
I wasn't casting judgment. I'm just not a developer, and when it comes to AoC I have no interest in it, nor in becoming one.
Why post, then? No one cares about your lack of interest.
It always seemed odd to me that a persistent minority of HN readers seem to have no interest in recreational programming/technical problem solving and perpetually ask "why should I care?"
It's totally fine not to care, but I can't quite get why you would then want to be an active member in a community of people who care about this stuff for no other reason than they fundamentally find it interesting.
I wonder whether that's really the most straightforward way to know that.
It's all marketing, I can sell this to you and convert you.
Thing is, it may have some interesting challenges. I, too, wouldn't want to solve some insane string-parsing problem with no interesting idea behind it. For today's problem, I did the naive version and it worked. The modular version created some issues with some corner cases.
There should be more events like AoC. Self-contained problems are very educational.
I _love_ the Advent of Code. I actually (selfishly) love that it's only 12 days this year, because by about half way, I'm struggling to find the time to sit down and do the fantastic problems because of all the holiday activities IRL.
Huge thanks to those involved!
Taking out the public leaderboard makes sense imo. Even when you don't consider the LLM problem, the public leaderboard's design was never really suited for anyone outside of the very specific short list of (US) timezones where competing for a quick solution was ever feasible.
One thing I do think would be interesting is to see solution rate per hour block. It'd give an indication of how popular advent of code is across the world.
Yes: I'd argue that the timings actually work/worked better for Western Europe than the USA, I personally preferred doing the puzzle at 5am (UK) than the midnight equivalent, as I could finish before work (on a good day).
I only nearly scratched a decent ranking once, top 300 or so.
Either Russia (8am) or West Coast US (9pm) would be my preferred options.
Sadly it's 5am for me as I'm in the UK.
In 8 years I can say I've never once tried to be awake at 5am in order to do the puzzle. The one time I happened to still be awake at 5am during AoC I was quite spectacularly drunk so looking at AoC would have been utterly pointless.
Anything before 6.45am and I'm hopefully asleep. 7am isn't great as 7am-8am I'm usually trying to get my kid up, fed and out the door to go to school. Weekends are for not waking up at 7am if I don't need to.
9am or later and it messes with the working day too much.
Looking back at my submission times from 2017 onwards (I only found AoC in 2017 so did 2015/2016 retrospectively) I've only got two submissions under 02:xx:xx (e.g. 7am for me). Both were around 6.42am so I guess I was up a bit earlier that day (6.30am) and was waiting for my kid to wake up and managed to get part 1 done quickly.
My usual plan was to get my kid out of the door sometime between 7.30am and 8am and then work on AoC until I started work around 9am. If I hadn't finished it then I'd get a bit more time during my lunch hour and, if still not finished, find some time in the evening after work and family time.
Out of the 400 submissions from 2017-2024 inclusive I've only got 20 that are marked as ">24h" and many of these were days where I was out for the entire day with my wife/kid so I didn't get to even look at the problem until the next day. Only 4 of them are where I submitted part 1 within 24h but part 2 slipped beyond 24h.
Enormous understatement: if I were unencumbered by wife/kids then my life would be quite a bit different.
Historical note: the original coding advent calendar was the Perl Advent Calendar, started in 2000 and still going.
https://perladvent.org/archives.html
Advent of Code is awesome also of course -- and was certainly inspired by it.
Opinion poll:
Python is extremely suitable for these kind of problems. C++ is also often used, especially by competitive programmers.
Which "non-mainstream" or even obscure languages are also well suited for AoC? Please list your weapon of choice and a short statement why it's well suited (not why you like it, why it's good for AoC).
My favourite "non-mainstream" languages are, depending on my mood at the time, either:
- Array languages such as K or Uiua. Why they're good for AoC: Great for showing off, no-one else can read your solution (including yourself a few days later), good for earlier days that might not feel as challenging
- Raw-dogging it by creating a Game Boy ROM in ASM (for the Game Boy's 'Z80-ish' Sharp LR35902). Why it's good for AoC: All of the above, you've got too much free time on your hands
Just kidding, I use Clojure or Python, and you can pry itertools from my cold, dead hands.
Perl is my starting point.
It has many of the required structures (hashes/maps, ad hoc structs, etc) and is great for knocking up a rough and ready prototype of something. It's also quick to write (but often unforgiving).
I can also produce a solution for pretty much every problem in AoC without needing to download a single separate Perl module.
On the negative side there are copious footguns available in Perl.
(Note that if I knew Python as well as I knew Perl I'd almost certainly use Python as a starting point.)
I also try and produce a Go and a C solution for each day too:
* The Go solution is generally a rewrite of the initial Perl solution but doing things "properly" and correcting a lot of the assumptions and hacks that I made in the Perl code. Plus some of those new fangled "test" things.
* The C solution is a useful reminder of how much "fun" things can be in a language that lacks built-in structures like hashes/maps, etc.
I like to use Haskell, because parser combinators usually make the input parsing aspect of the puzzles extremely straightforward. In addition, the focus of the language on laziness and recursion can lead to some very concise yet idiomatic solutions.
Example: find the first example for when this "game of life" variant has more than 1000 cells in the "alive" state.
Solution: generate infinite list of all states and iterate over them until you find one with >= 1000 alive cells.
let allStates = iterate nextState beginState -- infinite list of consecutive states
let solution = head $ dropWhile (\currentState -> numAliveCells currentState < 1000) allStates

Yes, there are some cool solutions using laziness that aren't immediately obvious. For example, in 2015 and 2024 there were problems involving circuits of gates that were elegantly solved using the Löb function:
I actually plan on doing this year in Gleam, because I did the last 5 years in Haskell and want to learn a new language this year. My solutions for last year are on github at https://github.com/WJWH/aoc2024 though, if you're interested.
Haskell values are immutable, so it creates a new state on each iteration. Since most of these "game of life" type problems need to touch every cell in the simulation multiple times anyway, building a new value is not really that much more expensive than mutating in place. The Haskell GC is heavily optimized for quickly allocating and collecting short-lived objects anyway.
But yeah, if you're looking to solve the puzzle in under a microsecond you probably want something like Rust or C and keep all the data in L1 cache like some people do. If solving it in under a millisecond is still good enough, Haskell is fine.
Fun fact about Game of Life is that the leading algorithm, HashLife[1], uses immutable data structures. It's quite well suited to functional languages, and was in fact originally implemented in Lisp by Bill Gosper.
I made my own, with a Haskell+Bash flavor and a REPL that reloads with each keystroke: https://www.youtube.com/watch?v=r99-nzGDapg
This year I've been working on a bytecode compiler for it, which has been a nice challenge. :)
When I want to get on the leaderboard, though, I use Go. I definitely felt a bit handicapped by the extra typing and lack of 'import solution' (compared to Python), but with an ever-growing 'utils' package and Go's fast compile times, you can still be competitive. I am very proud of my 1st place finish on Day 19 2022, and I credit it to Go's execution speed, which made my brute-force-with-heuristics approach just fast enough to be viable.
yep, https://github.com/lukechampine/slouch. Fair warning, it's some of the messiest code I've ever written (or at least, posted online). Hoping to clean it up a bit once the bytecode stuff is production-ready.
I think Ruby is the ideal language for AoC:
* The expressive syntax helps keep the solutions short.
* It has extensive standard library with tons of handy methods for AoC style problems: Enumerable#each_cons, Enumerable#each_slice, Array#transpose, Array#permutation, ...
* The bundled "prime" gem (for generating primes, checking primality, and prime factorization) comes in handy for at least a few of the problems each year.
* The tools for parsing inputs and string manipulation are a bit more ergonomic than what you get even in Python: first class regular expression syntax, String#scan, String#[], Regexp::union, ...
* You can easily build your solution step-by-step by chaining method calls. I would typically start with `p File.readlines("input.txt")` and keep executing the script after adding each new method call so I can inspect the intermediate results.
I think Crystal, Nim, Julia and F# were my favorites from last year's AoC
I wrote a bit more about it here https://laszlo.nu/blog/advent-of-code-2024.html
AoC is a great opportunity for exploring languages!
C, because it makes every problem into a memory management problem, which is good for you in an 'eat your vegetables' sort of way. It's also the starting point for a lot of other programming languages and related things like HDLs, which is helpful to me.
I'm plodding my way through the 2015 challenge here: https://git.thomasballantine.com/thomasballantine/Advent_of_... , it's really sharpened me up on a number of points.
Go is strong. You get something where writing a solution doesn't take too much time, you get a type system, you can brute-force problems, and the usual mind-numbing boring data-manipulation handling fits well into the standard tools.
OCaml is strong too. Stellar type system, fast execution and sane semantics unlike like 99% of all programming languages. If you want to create elegant solutions to problems, it's a good language.
For both, I recommend coming prepared. Set up a scaffold and create a toolbox which matches the typical problems you see in AoC. There's bound to be a 2d grid among the problems, and you need an implementation. If it can handle out-of-bounds access gracefully, things are often much easier, and so on. You don't want to be hammering your head against the wall on parsing problems instead of the actual problem. Having a combinator-parser library already in the project will help, for instance.
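The grid-with-graceful-out-of-bounds idea can be sketched in a few lines of Python (my own sketch, not the commenter's toolbox): store the grid as a dict so a missing cell reads as a harmless default instead of raising.

```python
# A 2D grid as a dict keyed by (row, col); out-of-bounds lookups
# fall through to dict.get's default instead of raising an error.

def parse_grid(text):
    """Map (row, col) -> char for every cell in the puzzle input."""
    return {
        (r, c): ch
        for r, line in enumerate(text.splitlines())
        for c, ch in enumerate(line)
    }

def neighbors(grid, pos, default="."):
    """The four orthogonal neighbors; missing cells read as `default`."""
    r, c = pos
    return [grid.get((r + dr, c + dc), default)
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))]

grid = parse_grid("ab\ncd")
print(neighbors(grid, (0, 0)))  # ['.', 'c', '.', 'b']
```

Because every access goes through `.get`, you never need edge-of-grid special cases in the puzzle logic itself.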
> For both, I recommend coming prepared.
Any recommendations for Go? Traditionally I've gone for Python or Clojure with an 'only builtins or things I add myself' approach (e.g. no NetworkX), but I've been keen to try doing a year in Go. However, I was a bit put off by the verbosity of the parsing, and I didn't want to get caught spending more time futzing with input lines and err.
Naturally the later problems get more puzzle-heavy so the ratio of input-handling to puzzle-solving code changes, but it seemed a bit off-putting for early days, and while I like a builtins-only approach it seems like the input handling would really benefit from a 'parse don't validate' type approach (goparsec?).
It's usually easy enough in Go that you can just roll your own parsing for the problems at hand. It won't be as elegant as having access to a combinator-parser, but not all of the AoC problems are parsing problems.
Once you have something which can "load \n separated numbers into an array/slice" you are mostly set for the first few days. Go has verbosity. You can't really get around that.
The key thing in typed languages is to cook up the right data structures. In something without a type system, you can just wing things and work with a mess of dictionaries and lists. But trying to do the same in a typed language is just going to be uphill, as you don't have the tools to manipulate the mess.
Historically, the problems have had some inter-linkage. If you built something on day 3, it's often used on days 4-6 as well. Hence, you can win by spending a bit more time on elegance on day 3, which makes the work on days 4-6 easier.
Mind you, if you just want to LLM your way through, then this doesn't matter since generating the same piece of code every day is easier. But obviously, this won't scale.
> It won't be as elegant as having access to a combinator-parser, but not all of the AoC problems are parsing problems.
Yeah, this is essentially it for me. While it might not be a 'type-safe and correct regarding error handling' approach with Python, part of the interest of the AoC puzzles is the ability to approach them as 'almost pure' programs - no files except for puzzle input and output, no awkward areas like date time handling (usually), absolutely zero frameworks required.
> you can just wing things and work with a mess of dictionaries and lists.
Checks previous years' type-hinted solutions with map[tuple[int, int], list[int]]
Yeah...
> but not all of the AoC problems are parsing problems.
I'd say for the first ten years at least the first ten-ish days are 90% parsing and 10% solving ;) But yes, I agree, and maybe I'm worrying over a few extra visible err's in the code that I shouldn't be.
> if you just want to LLM your way through
Totally fair point if I constrain LLM usage to input handling and the things that I already know that I know how to do but don't want to type, although I've always quite liked being able to treat each day as an independent problem with no bootstrapping of any code, no 'custom AoC library', and just the minimal program required to solve the problem.
> Go is strong.
How do you parse the puzzle input into a data structure of your choice?
It was mind-boggling to see SQL solutions last year: https://news.ycombinator.com/item?id=42577736
I use python at work but code these in kotlin. The stdlib for lists is very comprehensive, and the syntax is sweet. So easy to make a chain of map, filter and some reduction or nice util (foldr, zipwithnext, windowed etc). Flows very well with my thought process, where in python I feel list comprehensions are the wrong order, lambdas are weak etc.
I write most as pure functional/immutable code unless a problem calls for speed. And with extension functions I've made over the years and a small library (like 2d vectors or grid utils) it's quite nice to work with. Like, if I have a 2D list (List<List<E>>), and my 2d vec, like a = IntVec(5,3), I can do myList[a] and get the element due to an operator overload extension on list-lists.
AoC has been a highlight of the season for me since the beginning in 2015. I experimented with many languages over the years, zeroing in on Haskell, then Miranda as my language of choice. Finally, I decided to write my own language to do AoC, and created Admiran (based upon Miranda and other lazy, pure, functional languages) with its own self-hosted compiler and library of functional data structures that are useful in AoC puzzles:
https://github.com/taolson/Admiran https://github.com/taolson/advent-of-code
Clojure works really well for AOC.
A lot of the problems involve manipulating sets and maps, which Clojure makes really straightforward.
I'll second Clojure not just for the data structures but also because of the high level functions the standard library ships with.
Things like `partition`, `cycle` or `repeat` have come in so handy when working with segments of lists or the Conway's Game-of-Life type puzzles.
Is Rust still "non-mainstream"? Because it's extremely well suited for AoC. The ergonomics of a high-level language with the performance of C++.
I don't think it has a debugger but if you just want to try out something in Rust quickly: play.rust-lang.org
https://github.com/evcxr/evcxr (evcxr is short for "evaluation context and REPL")
I have used Raku (Perl 6) with good results.
Common Lisp. Using 'iterate' package almost feels like cheating.
I have done half a year in (noob level) Haskell long ago. But can't find the code any more.
Most mind blowing thing for me was looking at someone's solutions in APL!
OCaml. There's just enough in the standard library to cover what you need. For any non-trivial parsing tasks there's a parser generator and lexer generator bundled, and you can pull in extra support libraries so you're not left implementing, say, a trie from scratch.
I've had a lot of fun using Nim for AoC for many years. Once you're familiar with the language and std lib, it's almost as fast to write as Python, but much faster to run (Nim compiles to C, which then gets compiled to your executable). This means that sometimes, if your solution isn't perfect in terms of algorithmic complexity, waiting a few minutes can still save you (waiting 5 mins for your slow Nim code is OK, waiting 5 hours for your slow Python isn't really, for me). Of course all problems have a solution that can run in seconds even in Python, but sometimes it's not the one I figure out first try.
Downsides: The debugging situation is pretty bad (hope you like printf debugging), smaller community means smaller package ecosystem and fewer reference solutions to look up if you're stuck or looking for interesting alternative ideas after solving a problem on your own, but there's still quality stuff out there.
Though personally I'm thinking of trying Go this year, just for fun and learning something new.
Edit: also a static type system can save you from a few stupid bugs that you then spend 15 minutes tracking down because you added a "15" to your list without converting it to an int first or something like that.
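The stringly-typed bug described in the edit is easy to reproduce in Python (invented values; a statically typed language would reject the `append` at compile time):

```python
# Classic dynamic-language slip: appending "15" (a string) to a list of ints.
# Nothing complains at the append; the failure surfaces later, at sum().
nums = [int(x) for x in "3 7 11".split()]
nums.append("15")  # oops: forgot int("15")

try:
    print(sum(nums))
except TypeError as err:
    print("runtime surprise:", err)
```

The error only appears when the mixed list is finally consumed, which is exactly why it can take 15 minutes to track down.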
This question is really confusing to me because the point of AoC is the fun and experience of it
So.. a language that you're interested in or like?
Reminds me of "gamers will optimize the fun out of a game"
I'm pretty clojure-curious so might mess around with doing it in that
I’ve always used AoC as my jump-off point for new languages. I was thinking about using Gleam this year! I wish I had more profound reasons, but the pipeline syntax is intriguing and I just want to give it a whirl.
Elixir Livebook is my tool of choice for Advent of Code. The language is well-suited for the puzzles, I can write some Markdown if I need to record some algebra or my thought process, the notebook format serves as a REPL for instant code testing, and if the solution doesn't fit neatly into an executable form, I can write up my manual steps as well.
I've been using Elixir since day one, and it works pretty well :)
For me (and most of my friends/coworkers) the point of AoC was to write in some language that you always wanted to learn but never had the chance. The AoC problems tend to be excellent material for a crash course in a new PL because they cover a range of common programming tasks.
Historically good candidates are:
- Rust (despite its popularity, I know a lot of devs who haven't had time to play with it).
- Haskell (though today I'd try Lean4)
- Racket/Common Lisp/Other scheme lisp you haven't tried
- Erlang/Elixir (probably my choice this year)
- Prolog
Especially for those langs that people typically dabble in but never get a chance to write non-trivial software in (Haskell, Prolog, Racket), AoC is fantastic for really getting a feel for the language.
Yes, this year I'm going for Lean 4: https://github.com/ngrislain/lean-adventofcode-2025
It's a great language. Its dependent-types / theorem-proving-oriented type system combined with AI assistants makes it the language of the future IMO.
If I remember correctly, one of the competitive programming experts from the global leaderboard made his own language, specifically tailored to help solve AoC problems:
Yes (or so I thought too!), but apparently no: https://blog.vero.site/post/noulith
(post title: "Designing a Programming Language to Speedrun Advent of Code", but starts off "The title is clickbait. I did not design and implement a programming language for the sole or even primary purpose of leaderboarding on Advent of Code. It just turned out that the programming language I was working on fit the task remarkably well.")
It's still very domain-oriented:
> I solve and write a lot of puzzlehunts, and I wanted a better programming language to use to search word lists for words satisfying unusual constraints, such as, “Find all ten-letter words that contain each of the letters A, B, and C exactly once and that have the ninth letter K.”1 I have a folder of ten-line scripts of this kind, mostly Python, and I thought there was surely a better way to do this.
I'll chose to remember it was designed for AoC :-D
I've done AoC on what I call "hard mode", where I do the solutions in a language I designed and implemented myself. It's not because the language is particularly suited to AoC in any particular way, but it gives me confidence that my language can be used to solve real problems.
Neon Language: https://neon-lang.dev/ Some previous AoC solutions: https://github.com/ghewgill/adventofcode
For some grid based problems, I think spreadsheets are very powerful and under-appreciated.
The spatial and functional problem solving makes it easy to reason about how a single cell is calculated. Then simply apply that logic to all cells to come up with the solution.
I used my homemade shell language last year, called elk shell. It worked surprisingly well, better than other languages I've tried, because unlike other shell languages it is just a regular general purpose scripting language with a standard library that can also run programs with the same syntax as function calls.
I think that whatever you know well is the best choice.
I usually do it with Ruby, which is well suited just like Python, but last year I did it with Elixir.
I think it lends itself very well to the problem set, the language is very expressive, the standard library is extensive, you can solve most things functionally with no state at all. Yet, you can use global state for things like memoization without having to rewrite all your functions so that's nice too.
Not sure if Kotlin is non-mainstream, but being able to use the vast Java libraries choice and a much nicer syntax are great boons.
I’d say Clojure because it has great data manipulation utilities baked into the standard library.
Another vote for Haskell. It’s fun and the parsing bit is easy. I do struggle with some of the 2d map style questions which are simpler in a mutable 2d array in c++. It’s sometimes hard to write throwaway code in Haskell!
Haskell is my favorite for Advent of Code. It finally gives me an opportunity to think in a pure functional way.
I respect the effort going into making Advent of Code but with the very heavy emphasis on string parsing, I'm not convinced it's a good way to learn most languages.
Most problems are 80%-90% massaging the input with a little data modeling which you might have to rethink for the second part and algorithms used to play a significant role only in the last few days.
That heavily favours languages which make manipulating strings effortless and have very permissive data structures like Python dicts or JS objects.
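The input massaging being described is often just a couple of lines in a permissive language. A common Python pattern (the sample lines below are shaped like a typical puzzle input, not taken from any current puzzle):

```python
# Pull every integer out of each input line with one regex,
# instead of writing a bespoke parser per puzzle.
import re

def ints(line):
    """All integers on a line, signs included."""
    return [int(x) for x in re.findall(r"-?\d+", line)]

raw = """Sensor at x=2, y=18: closest beacon is at x=-2, y=15
Sensor at x=9, y=16: closest beacon is at x=10, y=16"""

parsed = [ints(line) for line in raw.splitlines()]
print(parsed)  # [[2, 18, -2, 15], [9, 16, 10, 16]]
```

A helper like `ints` covers the parsing for a surprising fraction of early-day puzzles, which is exactly the effortlessness the comment is pointing at.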
You are right. The exercises are heavy in one area. Still, it can be helpful for starting in a new language: you have to do I/O with files, use data structures, and you will exercise all the flow control. You won't come out an ace, but it can help you get started.
I know people who make some arbitrary extra restriction, like “no library at all” which can help to learn the basics of a language.
The downside I see is that suddenly you are solving algorithmic problems, which sometimes are not trivial, while at the same time struggling with a new language.
That's a hard agree and a reason why anyone trying to learn Haskell, OCaml, or other language with minimal/"batteries depleted" stdlib will suffer.
Sure, Haskell comes packaged with parser combinators, but a new user having to juggle immutability, IO, and monads all at once will almost certainly find it impossible.
I am very happy that we get the advent of code again this year, however I have read the FAQ for the first time, and I must admit I am not sure I understand the reasoning behind this:
> If you're posting a code repository somewhere, please don't include parts of Advent of Code like the puzzle text or your inputs.
The text I get, but the inputs? Well, I will comply, since I am getting a very nice thing for (almost) free, so it is polite to respect the wishes here. But since I commit the inputs into the repository (you know, since I want to be able to run tests), it is a bit of a shame the repo must be private.
Are you saying that we all have different inputs? I've never actually checked that, but I don't think it's true. My colleagues have gotten stuck in the same places and have mentioned aspects of puzzles and input characteristics and never spoken past each other. I feel like if we had different inputs we'd have noticed by now.
It depends on the individual problem, some have a smaller problem space than others so unique inputs would be tricky for everyone.
But there are enough possible inputs that most people shouldn't come across anyone else with exactly the same input.
Part of the reason why AoC is so time consuming for Eric is that not only does he design the puzzles, he also generates the inputs programmatically, which he then feeds through his own solver(s) to ensure correctness. There is a team of beta testers that work for months ahead of the contest to ensure things go smoothly.
(The adventofcode subreddit has a lot more info on this.)
He puts together multiple inputs for each day, but they do repeat over users. There's a chance you and your colleagues have the same inputs.
He's also described, over the years, his process of making the inputs. Related to your comment, he tries to make sure that there are no features of some inputs that make the problem especially hard or easy compared to the other inputs. Look at some of the math ones, a few tricks work most of the time (but not every time). Let's say after some processing you get three numbers and the solution is their LCM, that will probably be true of every input, not just coincidental, even if it's not an inherent property of the problem itself.
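The LCM example from that comment is a one-liner in Python (the cycle lengths are invented values, just to illustrate the shape of such a solution):

```python
# After some preprocessing you end up with a few cycle lengths; the answer
# is when they all line up, i.e. their least common multiple.
# math.lcm is variadic since Python 3.9.
import math

cycle_lengths = [6, 10, 15]      # hypothetical numbers from "some processing"
print(math.lcm(*cycle_lengths))  # 30
```

The comment's point is that the inputs are generated so this kind of shortcut holds for every user's input, not just coincidentally for yours.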
You do get different inputs, but they largely share characteristics so good solutions should always work and naive ones should consistently fail.
There has been the odd puzzle where some inputs have allowed simpler solutions than others, but those have stood out.
I don't know how much they "stand out" because their frequency makes it so that the optimal global leaderboard strat is often to just try something dumb and see if you win input roulette.
If we just look at the last three puzzles: day 23 last year, for example, admitted the greedy solution, but only for some inputs. Greedy clearly shouldn't work (shuffling the vertices in a file that admits it causes it to fail).
There are several sets of inputs, and it picks one for you.
I use git-crypt to encrypt the inputs in my public repo https://www.agwa.name/projects/git-crypt/ :)
This is not surprising at all, to me. Just commit the example input and write your test cases against that. In a nicely structured solution, this works beautifully with example style tests, like python or rust doctests, or even running jsdoc @example stanzas as tests with e.g. the @linus/testy module.
> Just commit the example input
The example input(s) is part of the "text", and so committing it is also not allowed. I guess I could craft my own example inputs and commit those, but that exceeds the level of effort I am willing to expend on a repository no one will likely ever read. :)
The inputs are part of the validation that you did the question, so they're kind of a secret.
I make my code public, and keep my inputs in a private submodule.
I had never heard of this before I saw something announcing this year's adventure. It looked interesting so I gave it a try, doing 2024. I had a blast. In concept, it's very similar to Project Euler but oriented more towards programming rather than being heavily mathematical. Like Project Euler, the first part is typically trivial while part 2 can put the hammer down and force you to devise an approach that arrives at a solution in milliseconds rather than at the death of the universe.
I never had any hope or interest to compete in the leaderboard, but I found it fun to check it out, see times, time differences ("omg 1 min for part 1 and 6 for part 2"), lookup the names of the leaders to check if they have something public about their solutions, etc. One time I even ran into the name of an old friend so it was a good excuse to say hi.
Advent of Code is such a fantastic event. I am honestly glad it's 12 days this year, primarily because I would only ever get to day 13 or 14 before a single puzzle would take me an entire day to finish! This will be my fourth year doing AoC. Looking forward to it :)
I plan on doing this year in C++ because I have never worked with it, and AoC is always a good excuse to learn a new language. My college exams just ended, so I have a ton of free time.
Previous attempts:
- in Lua https://github.com/Aadv1k/AdventOfLua2021
- in C https://github.com/Aadv1k/AdventOfC2022
- in Go https://github.com/Aadv1k/AdventOfGo2023
Really hoping I can get all the stars this time... Cheers, and Merry Christmas!
BTW the page mentions Alternate Styles, which is an obscure Firefox feature (View -> Page Style). If you try it out, you will probably run into [0] and not be able to reset the style. The workaround is to open the page in a different tab, which will go back to the default style.
I am glad to have found this thread because I had never heard of AoC before. I ended up running through Day 1 just in time to catch Day 2 at midnight, and so did that one too. I am definitely looking forward to the next 10 days now.
Having only started using python in the last few months (and always alongside agents to help me learn the new language) I am enjoying this opportunity/invitation to challenge myself to write the code from scratch, because it is helping me reinforce my understanding of the fundamentals of a language that is new to me.
On the one hand I do love how (in general nowadays) I can tell an agent to “implement a grammar parser for this example input stream” yet on the other hand, it’s too easy to just use the code without bothering to understand how it works. Likewise, it is so pleasantly easy these days to paste an error message into a chat window instead of figuring out for myself what it means / how to fix it. I love being able to get help (from agents) with that kind of stuff, but I also love being able to do it on my own.
Thank you to the folks who organize this event, for giving me that extra motivation to tie a ribbon around my understanding of various topics enough to be able to write python without help from agents or reference guides.
I’d also like to add that having never participated when the global leaderboard existed, I cannot compare this to that, other than to say that I appreciate how this way encourages me to come up with “personal challenges” like not using an IDE with autocomplete, or not looking up any info from reference sources, or not including any libraries beyond the core language functionality.
The adventofcode subreddit is pretty cool to visit once you've finished. I learned a new approach (for puzzle 2/2) that I wouldn't have otherwise, as my first approach was 'good enough' and I would've left it at that.
I find it interesting how many sponsors run their own "advent of <x>". So far I've seen "cloud", "FPGA", and a "cyber security" one in the sponsors pages (although that last one is one I remember from last year).
I'm also surprised there are a few Dutch language sponsors. Do these show up for everyone or is there some kind of region filtering applied to the sponsors shown?
I've done all the years and all the problems.
The part I enjoy the most is after figuring out a solution for myself is seeing what others did on Reddit or among a small group of friends who also does it. We often have slightly different solutions or realize one of our solutions worked "by accident" ignoring some side case that didn't appear in our particular input. That's really the fun of it imho.
I love advent of code, and I look forward to it every year!
I've never stressed out about the leaderboard. I've always taken it as an opportunity to learn a new language, or brush up on my skills.
In my day-to-day job, I rarely need to bootstrap a project from scratch, implement a depth first search of a graph, or experiment with new language features.
It's for reasons like these that I look forward to this every year. For me it's a great chance to sharpen the tools in my toolbox.
Some part of me would love a job that was effectively solving AoC type problems all the time, but then I'd probably burn out pretty quickly if that's all I ever had to do.
Sometimes it's nice to have a break by writing a load of error handling, system architecture documentation, test cases, etc.
> For me it's a great chance to sharpen the tools in my toolbox.
That's a good way of putting it.
My way of taking it a step further and honing my AoC solutions is to make them more efficient whilst ensuring they are still easy to follow, and to make sure they work on as many different inputs as possible (to ensure I'm not solving a specific instance based on my personal input). I keep improving and chipping away at the previous years problems in the 11 months between Decembers.
> You don't need a computer science background to participate - just a little programming knowledge and some problem solving skills will get you pretty far.
Every time I see this I wonder how many amateur/hobbyist programmers it sets up for disappointment. Unless your definition of “pretty far” is “a small number of the part ones”, it’s simply not true.
On sorta the same topic:
In the programming world I feel like there's a lot of info "for beginners" and a lot of folks / activities for experts.
But that middle ground world is strange... a lot of it is a combo of filling in "basics" and also touching more advanced topics at the same time and the amount of content and just activities filling that in seems very low. I get it though, the middle ground skilled audience is a great mix of what they do or do not know / can or can not solve.
I don't know if that made any sense.
This is also true of a lot of other disciplines. I’ve been learning filmmaking lately (and editing, colour science, etc). There’s functionally infinite beginner friendly videos online on anything you can imagine. But very little content that slowly teaches the fundamentals, or presents intermediate skills. It’s all “Here’s 5 pieces of gear you need!” “One trick that will make your lighting better”. But that’s mostly it. There’s almost no intermediate stuff. No 3 hour videos explaining in detail how to set up an interview properly. Stuff like that.
I've found the best route at that point is just... copying people who are really good. For my interest (3d modeling) if you want voice-over and directions, those are all pretty basic, but if you want to see how someone approaches a large, complex object, I will literally watch a timelapse of someone doing it and scrub the video in increments to see each modifier/action they took. It's slow but that's also how I built some intuition and muscle memory. That's just the way...
Makes sense that that's the case: there's usually a limited amount of beginner's knowledge, and then you get to the medium level by arbitrary combinations of that beginner's knowledge, of which there's an exponential number, making it less likely that someone has produced something about that specific combination. Then at the expert level, people can get real deep into some obscure nitty-gritty detail, and other experts will be able to generalise from that by themselves.
It's one of the worst parts of being self taught, beginner level stuff has a large interest base because everyone can get into it.
Advanced level stuff usually gets recommended directly by experts or will be interesting to beginners too as a way of seeing the high level.
Mid level stuff doesn't have that wide appeal, the freshness in the mind of the experts, or the ease of getting into, so it's not usually worth it for creators if the main metric is reach/interest
Structured (taught) learning is better in this regard, it at least gives you structure to cling on to at the mid level
Yes, and it's hard to point to reference material to newcomers. Hey, yeah that's actually a classic problem, let me show you some book about this... oh there's none. Maybe I should start creating them, but that is of course hard.
But also, the middle ground is often just years of practice.
CodeWars has a nice Kata grading system that features many intermediate level problems.
Someone else in the thread lamented the problems as "too easy" and I wondered what world I was living in.
Realize that in anything, there are people who are much better than even the very good. The people doing official collegiate-level competitive programming would find AoC problems pretty easy.
>The people doing official collegiate level competitive programming would find AoC problems pretty easy.
I used to program competitively and while that's the case for a lot of the early day problems, usually a few on the later days are pretty tough even by those standards. Don't take it from me, you can look at the finishing times over the years. I just looked at some today because I was going through the earlier years for fun and on Day 21/2023, 1 hour 20 minutes got you into the top 100. A lot of competitive programmers have streamed the challenges over the years and you see plenty of them struggle on occasion.
People just love to BS and brag, and it's quite harmful honestly because it makes beginner programmers feel much worse than they should.
The group of people for which the problems are "too easy" is probably quite small.
According to Eric last year (https://www.reddit.com/r/adventofcode/comments/1hly9dw/2024_...) there were 559 people that had obtained all 500 stars. I'm happy to be one of them.
The actual number is going to be higher as more people will have finished the puzzles since then, and many people may have finished all of the puzzles but split across more than one account.
Then again, I'm sure there's a reasonable number of people who have only completed certain puzzles because they found someone else's code on the AoC subreddit and ran that against their input, or got a huge hint from there without which they'd never solve it on their own. (To be clear, I don't mind the latter as it's just a trigger for someone to learn something they didn't know before, but just running someone else's code is not helping them if they don't dig into it further and understand how/why it works.)
There's definitely a certain specific set of knowledge areas that really helps solve AoC puzzles. It's a combination of classic Comp Sci theory (A*/SAT solvers, Dijkstra's algorithm, breadth/depth first searches, parsing, regex, string processing, data structures, dynamic programming, memoization, etc) and Mathematics (finite fields and modular arithmetic, Chinese Remainder Theorem, geometry, combinatorics, grids and coordinates, graph theory, etc).
Not many people have all those skills to the required level to find the majority of AoC "easy". There's no obvious common path to accruing this particular knowledge set. A traditional Comp Sci background may not provide all of the Mathematics required. A Mathematics background may leave you short on the Comp Sci theory front.
My own experience is unusual. I've got two separate bachelors degrees; one in Comp Sci and one in Mathematics with a 7 year gap between them, those degrees and 25+ years of doing software development as a job means I do find the vast majority of AoC quite easy, but not all of it, there are still some stinkers.
Being able to look at an AoC problem and think "There's some algorithm behind this, what is it?" is hugely helpful.
The "Slam Shuffle" problem (2019 day 22) was a classic example of this that sticks in my mind. The magnitude of the numbers involved in part 2 of that problem made it clear that a naive iteration approach was out of the question, so there had to be a more direct path to the answer.
As I write the code for part 1 of any problem I tend to think "What is the twist for part 2 going to be? How is Eric going to make it orders of magnitude harder?" Sometimes I even guess right, sometimes it's just plain evil.
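Many of those classic techniques reduce to short, reusable patterns. For instance, the grid breadth-first search behind countless AoC maze puzzles can be sketched in a few lines (a hypothetical helper, assuming `'#'` marks walls and the grid is a list of strings):

```python
from collections import deque

def shortest_path(grid, start, goal):
    """BFS over a list-of-strings grid; returns minimum steps, or -1 if unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, 0)])   # (cell, distance-so-far)
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return -1  # goal unreachable
```

Once this pattern is in your toolbox, spotting that a puzzle is "just a BFS" is most of the work.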
Sorry to focus on just one aspect of your (excellent) post, but do you have recommendations for reading up on A*/SAT beyond Wikipedia? I'm mostly self-taught (did about a minor's worth of post-bacc comp sci after getting a chemistry degree) and those just haven't come up much, e.g. I don't see A* mentioned at a first glance through CLRS and only in passing in Skiena's algorithms book. Thank you!
Not sure. I covered them during my Comp Sci degree in the mid/late 90s. I'm probably not even implementing them properly but whatever I do implement tends to work.
Just checked my copy of TAOCP (Vol 3 - Sorting and Searching) and it doesn't mention A* or SAT.
Ref: https://en.wikipedia.org/wiki/The_Art_of_Computer_Programmin...
A quick google shows that the newer volumes (Volume 4 fascicles 6 and 7) seem to cover SAT. Links to downloads are on the Wikipedia page above.
Maybe the planned 4C Chapter 7 "Combinatorial searching (continued)" might cover A* searching. Ironically googling "A* search" is tricky.
Hopefully someone else will chip in with a better reference that is somewhere in the middle of Wikipedia's brevity and TAOCP's depth.
Yeah, getting 250 or so stars is going to be straightforward, something most programmers with a couple of years of experience can probably manage. Then another 200 or so require some more specialized know-how (maybe some basic experience with parsers or making a simple virtual machine or recognizing a topology sort situation). Then probably the last 50 require something a bit more unusual. For me, I definitely have some trouble with any of the problems where modular inverses show up.
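On the modular-inverse point: since Python 3.8, the built-in `pow` computes it directly, which takes most of the sting out of those puzzles (the modulus below is the 2019 day 22 part 2 deck size, which happens to be prime):

```python
m = 119315717514047   # deck size from AoC 2019 day 22 part 2 (a prime)
a = 12345             # arbitrary multiplier with gcd(a, m) == 1

inv = pow(a, -1, m)   # modular inverse via the three-argument pow built-in
assert (a * inv) % m == 1
```

Before 3.8 you'd write the extended Euclidean algorithm yourself (or use `pow(a, m - 2, m)` for prime moduli, via Fermat's little theorem).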
It's just bluffing, lying. People lie to make others think they're hot shit. It's like the guy in school who gets straight A's and says he never studies. Yeah I'll bet.
They... sort of are though? A year or two ago I just waited until the very last problem, which was min-cut. Anybody with a computer science education who has seen the prompt Proof. before should be able to tackle this one with some effort, guidance, and/or sufficient time. There are algorithms that don't even require all the high-falutin graph theory.
I don't mean to say my solution was good, nor was it performant in any way - it was not, I arrived at adjacency (linked) lists - but the problem is tractable to the well-equipped with sufficient headdesking.
Operative phrase being "a computer science education," as per GGP's point. Easy is relative. Let's not leave the bar on the floor, please, while LLMs are threatening to hoover up all the low hanging fruit.
Got to agree. I'm even surprised at just how little progress many of my friends and ex-colleagues over the years make given that they hold down reasonable developer jobs.
My experience has been "little progress" is related to the fact that, while AoC is insanely fun, it always occurs during a time of year when I have the least free time.
Maybe when I was in college (if AoC had existed back then) I could have kept pace, but if part of your life is also running a household, then between wrapping up projects for work, finalizing various commitments I want wrapped up for the year, getting together with family and friends for various celebrations, and finally travel and/or preparing your own house for guests, I'm lucky if I have time to sit down with a cocktail and book the week before Christmas.
Seeing the format changed to 12 days makes me think this might be the first time in years I could seriously consider doing it (to completion).
In order to complete AoC you need more than just the ability to write code and solve problems. You need to find abstract problem-solving motivating. A lot of people don't see the point in competing for social capital (internet points) or expending time and energy on problems that won't live on after they've completed them.
I have no evidence to say this, but I'd guess a lot more people give up on AoC because they don't want to put in the time needed than give up because they're not capable of progressing.
Yeah, time is almost certainly the thing that kills most people's progress but that's not the root cause.
I think it comes down to experience, exposure to problems, and the ability to recognise what the problem boils down to.
A colleague who is an all-round better coder than me might spend 4 hours bashing away trying to solve a problem that I might be able to look at and quickly recognise is isomorphic to a specific classic Comp Sci or Maths problem, and know exactly how best to attack it, saving me a huge amount of time.
Spoiler alert: Take the "Slam Shuffle" in 2019 Day 22 (https://adventofcode.com/2019/day/22). I was lucky that I quickly recognised that each of the actions could be represented as '(a*n + b) mod num_cards' (with a and b specific to the action), and therefore any two actions like this can be combined into the same form. The optimal solution follows relatively simply from this.
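To make that concrete, here's a sketch of composing two such affine steps (names hypothetical). Applying g(x) = c*x + d after f(x) = a*x + b gives g(f(x)) = (c*a)*x + (c*b + d), still the same form:

```python
def compose(f, g, n):
    """Return h with h(x) == g(f(x)) mod n, where maps are (a, b) meaning a*x + b."""
    a, b = f
    c, d = g
    return ((c * a) % n, (c * b + d) % n)

# e.g. "deal with increment 3" then "cut 2" on a 10-card deck:
n = 10
deal3 = (3, 0)    # x -> 3x
cut2 = (1, -2)    # x -> x - 2
step = compose(deal3, cut2, n)
assert step == (3, 8)   # 3x - 2 == 3x + 8 (mod 10)
```

Because composition stays in this two-number form, a whole shuffle (and huge repeat counts, via repeated squaring of the composed map) collapses to a handful of modular multiplications.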
Doing all of the previous years means there's not much new ground although Eric always manages to find something each year.
There have also been some absolutely amazing inventions along the way. The IntCode Breakout game (2019) and the adventure game (can't remember the year) both stick in my mind as amazing constructions.
That's exactly why I don't do more than I do. I do some of the easy ones and it's fun. Then it gets a little harder and I start wondering how much time I want to put into this.
And then something shiny and fun comes along during a problem that I'm having trouble with, and I just never come back.
It's hard for most people to focus on a single thing for a long period of time. Motivation tends to come and go. I started the 2024 solutions in 2025, without the pressure and got to the end this way (not without help though TBH). Secondary motivation can help, like being bored or wanting to learn another programming language.
I've never tried AoC prior but with other complex challenges I've tried without much research, there comes a point where it just makes more sense to start doing something on the backlog at home or a more specific challenge related to what I want to improve on.
Because like 80% of AoC problems require a deep computer science background and deeply specific algorithms almost nobody uses in their day-to-day work.
It's totally true. I was doing Advent of Code before I had any training or work in programming at all, and a lot of it can be done with just thinking through the problem logically and using basic problem solving. If you can reason a word problem into what it's asking, then break it down into steps, you're 90% of the way there.
Comparing previous years, they're exactly what I'd expect, to be honest. Only people serious about completion will...well...complete it. Even if they do not know any code, if you pick something well-documented like Python or whatever, it should not be a tremendous challenge so long as you have the drive to finish the event. Code isn't exactly magic, though it does require some problem-solving and dedication. Since this is a self-paced event that does not offer any sort of immediate reward for completion, most people will drop out due to limited bandwidth needing to be devoted to everything else in their lives. That versus, say, a college course where you paid to be there and the grade counts toward your degree; there's simply more at stake when it comes to completing the course.
But, speaking to the original question as to the number of newbies that go all the way, I'd say one cannot expect to increase their skills in anything if one sticks in their comfort zone. It should be hard, and as a newbie who participated in previous years, I can confirm it often is. But I learned new things every time I did it, even if I did not finish.
There is a minority of people who can outsmart everyone without a degree.
Hmm, maybe it's cheating because it's still a STEM degree, but I have a PhD in physics without any real computer science courses (though I obviously had computational physics courses etc.), and I managed to 100% solve quite a few years without too much trouble (though far away from the global leaderboard, and with the last few days always taking several hours to solve).
I have a EE background not CS and haven't had too much trouble the last few years. I'm not aiming to be on the global leader board though. I think that with good problem solving skill, you should be able to push through the first 10 days most years. Some years were more front loaded though.
Agreed. I have a CS background and years of experience but I don't get very far with these. At some point it becomes a very large time commitment as well which I don't have
Agreed. There is no "beginner" or amateur programmer who could complete even part of a single Advent of Code problem.
I disagree, the odd few are quite simple and can be done with pencil and paper.
https://adventofcode.com/2020/day/1 for example. It's not hard to do part 1 by hand.
You need two numbers from the input list (of 200 numbers) that add to 2020.
For each number n in the list you just have to check if (2020-n) is in the list.
A quick visual scan showed my input only had 9 numbers that were less than 1010, so I'd only have to consider 9 candidate numbers.
It would also be trivial for anyone who can do relatively simple things with a spreadsheet.
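In code it's barely longer; a sketch using a set for the membership check, verified against the puzzle's published example (part 1 asks for the product of the pair):

```python
def find_pair(nums, target=2020):
    """Return the product of the two entries summing to target, or None."""
    seen = set()
    for n in nums:
        if target - n in seen:
            return n * (target - n)
        seen.add(n)
    return None

# The example list from the 2020 day 1 puzzle text:
assert find_pair([1721, 979, 366, 299, 675, 1456]) == 514579
```

The set makes each membership test O(1), so the whole thing is a single pass even on a large input.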
In general, the problems require less background knowledge than other coding puzzles. They're not always accessible without knowing a particular algorithm, but they're more 'can you think through a problem' than 'have you done this module'.
That's not the same as saying they're easy, but it's a different kind of barrier, and (in my opinion) more a test of 'can you think?' than 'did you do a CS degree?'
> you won't get stuck because of a word you don't understand or a concept you've never heard of
I very much disagree here. To make any sort of progress in AoC, in my experience, you need at least:
- awareness of graphs and how to traverse them
- some knowledge of a pathfinding algorithm
- an understanding of memoisation and how it can be applied to make deeply recursive computations feasible
Those types of puzzle come up a lot, and it’s not anything close to what I’d expect someone with “just a little programming knowledge” to have.
Someone with just a little programming knowledge is probably good with branches and loops, some rudimentary OOP, and maybe knows when to use a list vs a map. They’re not gonna know much about other data structures or algorithms.
They could learn them on the go of course, but then that’s why I don’t think basic coding knowledge is enough.
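To be fair, the memoisation piece at least is learnable mid-puzzle: in Python it's one decorator. A minimal sketch on a classic exponential blow-up (counting 1- or 2-step paths, which is Fibonacci in disguise):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ways(n):
    """Count sequences of 1- and 2-steps summing to n; exponential without the cache."""
    if n <= 1:
        return 1
    return ways(n - 1) + ways(n - 2)

# With the cache, ways(90) returns instantly; naive recursion would take ages.
```

The harder skill, as you say, is recognising that a puzzle's recursion has overlapping subproblems in the first place.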
Other Advent Calendars for developers https://github.com/vimode/Advent-Calendars-For-Developers
I am still updating it for this year, so please feel free to submit a PR or share some here.
I'm actually pleasantly surprised to see a 2025 edition, last year being the 10th anniversary and the LLM situation with the leaderboard were solid indications that it would have been a great time to wrap it up and let somebody else carry the torch.
It's only going to be 12 problems rather than 24 this year and there isn't going to be a global leaderboard, but I'm still glad we get to take part in this fun Christmas season tradition, and I'm thankful for all those who put in their free time so that we can get to enjoy the problems. It's probably an unpopular stance, but I've never done Advent of Code for the competitive aspect; I've always just enjoyed the puzzles, so as far as I'm concerned nothing was really lost.
>It's probably an unpopular stance, but I've never done Advent of Code for the competitive aspect
Is this an unpopular stance? Out of a dozen people I know that did/do AoC every year, only one was trying to compete. Everyone else did it for fun, to learn new languages or concepts, to practice coding, etc.
Maybe it helps that, because of timezones, in Europe you need to be really dedicated to play for a win.
>Is this an unpopular stance?
No, it's not. At most 200 people could end up on the global leaderboard, and there are tens of thousands of people who participate most days (though it drops off by the end, it's over 100k reliably for the first day). The vast majority of participants are not there for the leaderboard. If you care about competing, there are always private leaderboards.
I'm also in a few local leaderboards, but I'm not "really" competing, it's more of a fun group thing.
Premises:
(i) I love Advent of Code and I'm grateful for its continuing existence in whatever form its creators feel like it's best for themselves and the community;
(ii) none of what follows is a request, let alone a demand, for anything to change;
(iii) what follows is just the opinion of some random guy on the Internet.
I have a lot of experience with competitions (although more on the math side than on the programming side), and I've been involved essentially since I was in high school, as a contestant, coach, problem writer, organizer, moving tables, etc. In my opinion Advent of Code simply isn't a good competition:
- You need to be available for many days in a row for 15 minutes at a very specific time.
- The problems are too easy.
- There is no time/memory check: you can write ooga-booga code and still pass.
- Some problems require weird parsing.
- Some problems are pure implementation challenges.
- The AoC guy loves recursive descent parsers way too much.
- A lot of problems are underspecified (you can make assumptions not in the problem statement).
- Some problems require manual input inspection.
To reiterate once again: I am not saying that any of this needs to change. Many of the things that make Advent of Code a bad competition are what make it an excellent, fun, memorable "Christmas group thing". Coming back every day creates community and gives people time to discuss the problems. Problems being easy and not requiring specific time complexities to be accepted make the event accessible. Problems not being straight algorithmic challenges add welcome variety.
I like doing competitions but Advent of Code has always felt more like a cozy problem solving festival, I never cared too much for the competitive aspect, local or global.
There are definitely some problems that have an indirect time/memory check, in that if you don't have a right-enough algorithm, your program will never finish.
I too like the simple nature. If you care about highly performant code, you can always challenge yourself (I got into measuring timing in the second season I participated). Personally I prefer a world like this. Not everyone should have to compete on every detail (I know you stated that your points aren’t demands, I’m just pointing out my own worldview). For any given thing, there will naturally be people that are OK with “good enough”, and people who are interested to take it as far as they can. It’s nice that we can all still participate in this.
One could probably build a separate service that provides a leaderboard for solution runtimes.
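Such a service would only need solvers to self-report wall-clock time; a minimal harness sketch (the solver passed in is arbitrary):

```python
import time

def timed(solve, *args):
    """Run a solver and report elapsed wall-clock time in milliseconds."""
    start = time.perf_counter()
    answer = solve(*args)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{solve.__name__}: {answer} in {elapsed_ms:.1f} ms")
    return answer, elapsed_ms

# e.g. timing any callable against its input:
answer, ms = timed(sum, range(1_000_000))
```

Of course, for a fair leaderboard you'd also have to pin hardware and language, which is where such services tend to get complicated.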
I agree that it’s more of a cozy activity than a hardcore competition, that’s what I appreciate about it most.
> The AoC guy loves recursive descent parsers way too much.
LOL!!
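Though in fairness, once you've written one, a recursive descent parser is only a few lines. A sketch of the left-to-right, equal-precedence evaluation rule from 2020 day 18 part 1 (single-character tokens only, for brevity):

```python
def evaluate(tokens):
    """Evaluate +, * and parentheses strictly left-to-right, equal precedence.

    tokens is a list of single-character strings, e.g. list('1+(2*3)').
    """
    def parse(i):
        value, i = atom(i)
        while i < len(tokens) and tokens[i] in '+*':
            op = tokens[i]
            rhs, i = atom(i + 1)
            value = value + rhs if op == '+' else value * rhs
        return value, i

    def atom(i):
        if tokens[i] == '(':
            value, i = parse(i + 1)
            return value, i + 1   # skip the closing ')'
        return int(tokens[i]), i + 1

    return parse(0)[0]

assert evaluate(list('1+2*3')) == 9   # (1+2)*3, evaluated left to right
```

One `parse` loop per precedence level is the whole trick, which is presumably why the puzzles keep coming back to it.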
I agreed with a lot of what you wrote, but also a lot of us strive for beautiful solutions regardless of time/memory bounds.
In fact, I’m (kind of) tired of leetcode flagging me for one ultra special worst-case scenario. I enjoy writing something that looks good and enjoying the success.
(Not that it’s bad to find out I missed an optimization in the implementation, but… it feels like a lot of details sometimes.)
> The problems are too easy.
The problems are pretty difficult in my book (I never make it past day 3 or so). So I definitely would hope they never increase the difficulty.
I did a post [0] about this last year, and vanilla LLMs didn’t do nearly as well as I’d expected on advent of code, though I’d be curious to try this again with Claude code and codex
[0] https://www.jerpint.io/blog/2024-12-30-advent-of-code-llms/
LLMs, and especially coding focused models, have come a very long way in the past year.
The difference when working on larger tasks that require reasoning is night and day.
In theory it would be very interesting to go back and retry the 2024 tasks, but those will likely have ended up in the training data by now...
> LLMs, and especially coding focused models, have come a very long way in the past year.
I see people assert this all over the place, but personally I have decreased my usage of LLMs in the last year. During this change I’ve also increasingly developed the reputation of “the guy who can get things shipped” in my company.
I still use LLMs, and likely always will, but I no longer let them do the bulk of the work and have benefited from it.
Last April I asked Claude Sonnet 3.7 to solve AoC 2024 day 3 in x86-64 assembler and it one-shotted solutions for part 1 and 2(!)
It's true this was 4 months after AoC 2024 was out, so it may have been trained on the answer, but I think that's way too soon.
Day 3 in 2024 isn't a Math Olympiad tier problem or anything but it seems novel enough, and my prior experience with LLMs were that they were absolutely atrocious at assembler.
Last year, I saw LLMs do well on the first week and accuracy drop off after that.
But as others have said, it’s a night and day difference now, particularly with code execution.
Current frontier agents can one shot solve all 2024 AoC puzzles, just by pasting in the puzzle description and the input data.
From watching them work, they read the spec, write the code, run it on the examples, refine the code until it passes, and so on.
But we can’t tell whether the puzzle solutions are in the training data.
I’m looking forward to seeing how well current agents perform on 2025’s puzzles.
They obviously have the puzzles in the training data, why are you acting like this is uncertain?
It's really disheartening that the culture has changed so much someone would think doing AoC puzzles just for the fun of it is an unpopular stance :(
Doing things for the fun of it, for curiosity's sake, for the thrill of solving a fun problem - that's very much alive, don't worry!
Eliminating the leaderboard might help. By measuring it as a race, it becomes a race, and now the goal is the metric.
Maybe just have a cool advent calendar thingy like a digital tree that gains an ornament for each day you complete. Each ornament can be themed for each puzzle.
Of course I hope it goes without saying that the creator(s) can do it however they want and we’re nothing but richer for it existing.
That 'digital tree' idea is similar to how AoC has always worked. There's a theme-appropriate ASCII graphic on the problem page that gains color and effects as you complete problems. It's not always a tree, but it was in 2015 (the first year), and in several other years at least one tree is visible. https://adventofcode.com/2015
Exactly. I have always treated AoC as fun and a time to learn. But there is so much going on during December, and I do not enjoy doing more than one puzzle a day (it starts to feel like hard work instead of fun). I usually spend weekends with my kids and family, and I am not willing to solve more puzzles during weekdays, so I am falling behind all the time. My plan was always to finish last year's puzzles to enjoy the more interesting ones, but it always felt wrong. So I hope I will have time to finish everything this year :-) But I do envy people with enough free time to go all in. I would love to be one of them, but there is so much going on everywhere that I have to split my time. Sorry, programming world and especially computers :-D
This will be my first one! My primary languages are Typescript and Java. Looking forward to it!
Well, I tried the first day, and I think it's an indictment of my own capabilities that I spent most of the day on the second part and still failed to get the correct result. That sort of programming is not something I've been doing at my current position, but as a programmer who has been working for a decade, that still smarts a little.
I want to try doing it in assembly using fasm this year.
Could either be really recreational and relaxing.. or painful and annoying.
Though I don't care even if it takes me all of next year, it's all in order to learn :)
Small anecdote:
In the IEEEXTREME university programming competition there are ~10k participating teams.
Our university has quite a strong Competitive Programming program, and the best teams usually rank in the top 100. Last year a team ranked 30th, and it wasn't even our strongest team (which didn't participate).
This year none of our teams was able to get into the top 1000. I would estimate that close to 99% of the teams in the top 1000 were using LLMs.
Last year they didn't seem to help much, but this year they rendered the competition pointless.
I've read blogs/seen videos of people who got in the AOC global leaderboard last year without using LLMs, but I think this year it wouldn't be possible at all.
Man, those people using LLMs in competitive programming ... where's the fun in that? I don't get people for whom it's just about winning, I wish everyone would just have some basic form of dignity and respect.
I’m a very casual gamer but even I run into obvious cheaters in any popular online game all the time.
Cheating is rampant anywhere there’s an online competition. The cheaters don’t care about respecting others, they get a thrill out of getting a lot of points against other people who are trying to compete.
Even in the real world, my runner friends always have stories about people getting caught cutting trails and all of the lengths their running organizations have to go through now to catch cheaters because it’s so common.
The thing about cheaters in a large competition is that it doesn’t take many to crowd out the leaderboard, because the leaderboard is where they get selected out. If there are 1000 teams competing and only 1% cheat, that 1% could still fill the top 10.
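The crowding effect described above is easy to sanity-check with a toy simulation. The score distributions below are made up purely for illustration; the only point is that 10 cheaters out of 1000 teams is enough to fill the entire top 10:

```python
import random

random.seed(0)

# 1000 teams, 1% cheating. Hypothetical scores: honest teams land
# anywhere up to 80, cheaters land in 90-100 because an LLM solves
# everything quickly. The exact numbers are invented for illustration.
honest = [("honest", random.uniform(0, 80)) for _ in range(990)]
cheaters = [("cheater", random.uniform(90, 100)) for _ in range(10)]

# Sort all teams by score, highest first, and inspect the top 10.
leaderboard = sorted(honest + cheaters, key=lambda t: t[1], reverse=True)
top10 = [kind for kind, _ in leaderboard[:10]]
print(top10.count("cheater"), "of the top 10 are cheaters")  # 10 of 10
```

Even though 99% of the field is honest, the visible part of the leaderboard is 100% cheaters, because the leaderboard selects for exactly the teams with inflated scores.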
Yeah. I was happy to see this called out in their /about
> Should I use AI to solve Advent of Code puzzles? No. If you send a friend to the gym on your behalf, would you expect to get stronger? Advent of Code puzzles are designed to be interesting for humans to solve - no consideration is made for whether AI can or cannot solve a puzzle. If you want practice prompting an AI, there are almost certainly better exercises elsewhere designed with that in mind.
> I don't get people for whom it's just about winning, I wish everyone would just have some basic form of dignity and respect.
reminds me of something I read in "I’m a high schooler. AI is demolishing my education." [0,1] emphasis added:
> During my sophomore year, I participated in my school’s debate team. I was excited to have a space outside the classroom where creativity, critical thinking, and intellectual rigor were valued and sharpened. I love the rush of building arguments from scratch. ChatGPT was released back in 2022, when I was a freshman, but the debate team weathered that first year without being overly influenced by the technology—at least as far as I could tell. But soon, AI took hold there as well. Many students avoided the technology and still stand against it, but it was impossible to ignore what we saw at competitions: chatbots being used for research and to construct arguments between rounds.
high school debate used to be an extracurricular thing students could do for fun. now they're using chatbots in order to generate arguments that the students can just regurgitate.
the end state of this seems like a variation on Dead Internet Theory - Team A is arguing the "pro" side of some issue, Team B is arguing the "con" side, but it's just an LLM generating talking points for both sides and the humans acting as mouthpieces. it still looks like a "debate" to an outside observer, but all the critical thinking has been stripped away.
0: https://www.theatlantic.com/technology/archive/2025/09/high-...
> high school debate used to be an extracurricular thing students could do for fun.
High school debate has been ruthless for a long time, even before AI. There has been a rise in the use of techniques designed to abuse the rules and derail arguments for several years. In some regions, debates have become more about teams leveraging the rules and technicalities against their opponents than organically trying to debate a subject.
It sucks that the fun is being sucked out of debate, but I guess a silver lining is that the abuse of these tactics helps everyone understand that winning debates isn't about being correct, it's about being a good debater. And a similar principle can be applied to the application of law and public policy as well.
Yeah, it's like bringing a ~bike~ motorcycle to your marathon. But if you can get away with it, there will always be people doing it.
Imagine the shitshow that gaming would be without any kind of anti-cheat measures, and that's the state of competitive programming.
Why is that strange? Competitive programming, as the name suggests, is about competing. If the rules allow that, not using LLM is actually more like running tour de France.
If the rules don't allow that and yet people do then well, you need online qualifiers and then onsite finals to pick the real winners. Which was already necessary, because there are many other ways to cheat (like having more people than allowed in the team).
I'm a bit surprised you can honestly believe that a competition of humans isn't somehow different if allowed to use solution-generators. Like using a calculator in an arithmetic competition. Really?
It's not much different than outlawing performance enhancing drugs. Or aimbots in competitive gaming. The point is to see what the limits of human performance are.
If an alien race came along and said "you will all die unless you beat us in the IEEE programming competition", I would be all for LLM use. Like if they challenged us to Go, I think we'd probably / certainly use AI. Or chess - yeah, we'd be dumb to not use game solvers for this.
But that's not in the spirit of the competition if it's University of Michigan's use of Claude vs MIT's use of Claude vs ....
Imagine if the word "competition" meant "anything goes" automatically.
It's a different kind of fun. Just like doing math problems on paper can be fun, or writing code to do the math can be fun, or getting AI to write the code to do the math can be fun.
They're just different types of fun. The problem is if one type of fun is ruined by another.
It can be a matter of values from your upbringing or immediate environment. There are plenty of places where they value the results, not the journey, and they think that people who avoid cheating are chumps. Think about that: you are in a situation where you just want to do things for fun but everyone around you will disrespect you for not taking the easy way out.
Weirdly, I feel a lot more accepting of LLMs in this type of environment than in making actual products. The point is doing things fast and correctly enough, so in some ways an LLM is just one more tool.
With products I want actual correctness. And not something thrown away.
Given what I understand about the nature of competitive programming competitions, using an LLM seems kind of like using a calculator in an arithmetic competition (if such a thing existed) or a dictionary in a spelling bee.
I feel like it’s more like using an electronic dictionary in a spelling bee that already allowed you to use a paper dictionary. All it really does is demonstrate that the format isn’t suited to be a competition in the first place.
Which is why I think it’s great they dropped the competitive part and have just made it an advent calendar. Much better that way.
These contests are about memorizing common patterns and banging out code quickly. Outsourcing that to an LLM defeats the point. You can say it's a stupid contest format, and that's fine.
(I did a couple of these in college, though we didn't practice outside of competition so we weren't especially good at it.)
When I did competitions like these at uni (~10-15 years ago), we all used some thin-clients in the computer lab where the only webpages one could access were those allowed by the competition (mainly the submission portal). And then some admin/organizers would feed us and make sure people didn't cheat. Maybe we need to get back to that setup, heh.
Serious in-person competitions like ICPC are still effective against cheating. The first phase happens in a limited number of venues and the computers run a custom OS without internet access. There are many people watching so competitors don't user their phones, etc.
The Regional Finals and World Finals are in a single venue with a very controlled environment. Just like the IOI and other major competitions.
National High School Olympiads have been dealing with bigger issues because there are too many participants in the first few phases, and usually the schools themselves host the exams. There has been rampant cheating. In my country I believe the organization has resorted to manually reviewing all submissions, but I can only see this getting increasingly less effective.
This year the Canadian Computing Competition didn't officially release the final results, which for me is the best solution:
> Normally, official results from the CCC would be released shortly after the contest. For this year’s contest, however, we will not be releasing official results. The reason for this is the significant number of students who violated the CCC Rules. In particular, it is clear that many students submitted code that they did not write themselves, relying instead on forbidden external help. As such, the reliability of “ranking” students would neither be equitable, fair, or accurate.
Available here: [PDF] https://cemc.uwaterloo.ca/sites/default/files/documents/2025...
Online competitions are just hopeless. AtCoder and Codeforces have rules against AI but no way to enforce them. A minimally competent cheater is impossible to detect. Meta Hacker Cup has a long history and is backed by a large company, but had its leaderboard crowded by cheaters this year.
In 1997, Deep Blue beat Garry Kasparov, the world chess champion. Today, chess grandmasters stand no chance against Stockfish, a chess engine that can run on a cheap phone. Yet chess remains super popular and competitive today, and while there are occasional scandals, cheating seems to be mostly prevented.
I don’t see why competitive debate or programming would be different. (But I understand why a fair global leaderboard for AOC is no longer feasible).
Is it necessary to "log in" with a so-called "tech" company such as GitHub (Microsoft), Reddit, Google, etc., in order to see the puzzles?
For those who think this is a typo, uiua [1] (pronounced "wee-wuh") is a stack-based array programming language.
I solved a few problems with it last year, and it is amazing how compact the solutions are. It also messes with your head, and the community surrounding it is interesting. Highly recommended.
Related:
Uiua – A stack-based array programming language - https://news.ycombinator.com/item?id=42590483 - Jan 2025 (6 comments)
Uiua: A minimal stack-based, array-based language - https://news.ycombinator.com/item?id=37673127 - Sept 2023 (104 comments)
Finally that time of year again! I've been looking forward to this for a long time. I usually drop off about halfway anyways (finished day 13, 14 and 13 the previous 3 years), as that's when December gets too busy for me to enjoy it properly, so I personally don't mind the reduction in problems at all, really. I'm just happy we still have great puzzles to look forward to.
I am! I love the design of Gleam in theory but keep bouncing off it so I’m interested in seeing if AoC will help me give it a fair shake.
It is quite odd to call this advent when it ends halfway into the month rather than on Christmas. But I will have fun doing them either way
It may have made more sense to start on Christmas Day, matching the Twelve Days of Christmas [1].
No, the puzzles are every day from the 1st to the 12th inclusive.
From https://adventofcode.com/2025/about:
" Why did the number of days per event change? It takes a ton of my free time every year to run Advent of Code, and building the puzzles accounts for the majority of that time. After keeping a consistent schedule for ten years(!), I needed a change. The puzzles still start on December 1st so that the day numbers make sense (Day 1 = Dec 1), and puzzles come out every day (ending mid-December). "
Excited to see AOC back and I think it was a solid idea to get rid of the global leaderboard.
We (Depot) are sponsoring this year and have a private leaderboard [0]. We're donating $1k each for the top five finishers to a charity of their choice.
Isn't a publicly advertised private leaderboard - especially with cash prizes - against the new guidance? Certainly the spirit of the guidance.
>What happened to the global leaderboard? The global leaderboard was one of the largest sources of stress for me, for the infrastructure, and for many users. People took things too seriously, going way outside the spirit of the contest; some people even resorted to things like DDoS attacks. Many people incorrectly concluded that they were somehow worse programmers because their own times didn't compare. What started as a fun feature in 2015 became an ever-growing problem, and so, after ten years of Advent of Code, I removed the global leaderboard. (However, I've made it so you can share a read-only view of your private leaderboard. *Please don't use this feature or data to create a "new" global leaderboard.*)
i don't think it should be a charity of their choice. i think it should have to be one of the top 5 most reputable charities in the world, like doctors without borders or salvation army.
I usually use multiple languages. OCaml and Go are always a pick. This year I think I want to try Gleam, and Haxe too.
A good opportunity to learn a new programming language: https://news.ycombinator.com/item?id=46105849
I've been looking forward to this!
It's Kotlin and shik for me this year, probably a bit of both. And no stupid competitions - AoC should be fun.
There's a relevant FAQ with a solution for you:
The "etc" is pretty important here. You can log in using Reddit, and you can create a random throwaway Reddit account without filling in any other details (no email address or phone number required).
I believe they no longer allow new accounts without an email address.
It used to be that reddit had a user creation screen that looked like you needed to input an email address, but you could actually just click "Next" to skip it.
The last time I had cause to make a reddit account, they no longer allowed this.
You're right, it looks like these days you have to fill in an email address, though any random thing can be entered. They will send a verification code, and on the next screen you can either fill it in or click Skip in the top right. Then, in the preferences, it can be removed from the account. A bit annoying for sure, but still no valid email address is needed.
But it is true that at any time they could make using an email address or phone number mandatory, and then creating an Advent of Code account will be gated behind that.
Having done my own auth I get why they do it this way. LLMs are already a massive problem with AoC, I imagine an anonymous endpoint to validate solutions would be even worse.
Having done auth myself, I can also understand why auth is being externalised like this. The site was flooded with bots and scrapers long before LLMs gained relevance and adding all the CAPTCHAs and responding to the "why are you blocking my shady CGNAT ISP when I'm one of the good ones" complaints is just not worth it. Let some company with the right expertise deal with all of that bullshit.
I wish the site had more login options, though. It's a tough nut to crack: pick a small, independent OAuth login service not under the control of a big tech company and you're basically DDoSing their account-creation page for all of December. Pick a big tech company and you're probably not gaining any new users. You can't do decentralized auth, because then you're just doing authentication DDoS with extra steps.
If I didn't have a github account, I'd probably go with a throwaway reddit account to take part. Reddit doesn't really do the same type of tracking Twitter tries to do and it's probably the least privacy invasive of the bunch.
I’m probably going to use rescript. Though I may do Gleam or Roc.
If you're feeling adventurous and would like to try Roc's new compiler, I put together a quick tutorial for it!
https://gist.github.com/rtfeldman/f46bcbfe5132d62c4095dfa687...
How about something like
while true; do kill -9 $((RANDOM % 32768)); sleep 5; done
bash's built-in $RANDOM covers the rnd part. On a serious note, I just saw this: https://linuxupskillchallenge.org
Advent of Code is one of the highlights of December for me.
It's sad, but inevitable, that the global leaderboard had to be pulled. It's also understandable that this year is just 12 days, which takes some of the pressure off.
If you've never done it before, I recommend it. Don't try and "win", just enjoy the problem solving and the whimsy.