
Mazes, combinatorial optimization, hype cycles.
Maze Runner
Imagine yourself trapped in a labyrinth, a maze of intricate paths and junctions, each leading to countless bifurcations. Picture the first junction, branching out into a bewildering array of a hundred paths. And with each path you choose, there are more junctions, each spawning hundreds more paths, some of which may even loop back upon themselves. Amidst this maze, countless dead-ends await, ready to thwart your progress.
But fear not, for there are exits: exits that lead to the very treasure you seek, brimming with infinite riches. Yet, how does one even begin such an exhausting quest? Ah, there are clues scattered along the way, like glimmering gold coins, guiding you with their elusive allure.
So, here’s the method: at every junction, select a handful of paths at random. Venture as far as you can, discarding the dead-end paths and forging ahead on the ones that yield the greatest number of gold coins.
Now, here’s the counter-intuitive twist: even if you stumble upon a path that appears less promising, you still traverse it at times. By occasionally embracing inferior paths, you prevent yourself from getting stuck in one place and enhance your chances of discovering the elusive exit. Thus, at most junctions you opt for the better path, the one adorned with more gold coins, except at select junctions, where you deliberately choose the seemingly inferior route. It’s a simple principle: explore different paths, embrace calculated risks, and find your way out.
A remarkably potent optimization algorithm known as simulated annealing is based on a similar strategy. Initially, you make a multitude of risky choices; as your knowledge expands, you transition to increasingly judicious ones, driven by a desire for greater gain. In the algorithm’s terms, the early, risk-taking phase is the high-temperature period, where you take chances and fail, and the careful endgame is the gradual cooldown. Whenever I encounter a vexing combinatorial optimization problem, a few lines implementing this idea typically hold the key.
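To make that concrete, here is a minimal Python sketch of the loop. The `neighbor` and `energy` callbacks, the starting temperature, and the geometric cooling rate are illustrative placeholders you would tune per problem, not a canonical recipe.

```python
import math
import random


def simulated_annealing(initial_state, neighbor, energy,
                        t_start=10.0, t_end=1e-3, cooling=0.995):
    """Minimize `energy` over states reachable via `neighbor` moves.

    `neighbor(state)` proposes a random nearby state and
    `energy(state)` scores it (lower is better); both are
    problem-specific callbacks supplied by the caller.
    """
    state, e = initial_state, energy(initial_state)
    best, best_e = state, e
    t = t_start
    while t > t_end:
        candidate = neighbor(state)
        cand_e = energy(candidate)
        # Metropolis rule: always accept improvements, and accept a
        # worse move with a probability that shrinks as `t` drops.
        if cand_e < e or random.random() < math.exp((e - cand_e) / t):
            state, e = candidate, cand_e
            if e < best_e:
                best, best_e = state, e
        t *= cooling  # geometric cooling schedule
    return best, best_e


# Toy usage: find the integer x minimizing (x - 42) ** 2.
best, best_e = simulated_annealing(
    0,
    neighbor=lambda x: x + random.choice([-1, 1]),
    energy=lambda x: (x - 42) ** 2,
)
print(best, best_e)  # typically prints: 42 0
```

At high temperature nearly every move passes the acceptance test, which is the early risk-taking; as `t` decays, only improvements survive, which is the judicious endgame.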
Hype Cycles
A few years back, during a prominent machine learning conference, I watched an esteemed academic luminary deliver their keynote address, then retreat to the back benches during the morning presentations and immerse themselves in an old linear algebra book. As the conference coordinator, I approached them after a while and inquired whether they were interested in attending any of the sessions.
“You should, but I won’t,” they replied. “Tell me later, in one word, what’s the single most crucial topic this year?”
“Auto-encoders,” I responded.
With a knowing smile, they remarked, “How many papers in total? Around 150, isn’t it? Within a year, perhaps only a dozen will endure, and in five years, merely two. Why bother with these ‘Hype Cycles’?”
That remark left an indelible impression on my naive graduate student self.
Hype cycles are akin to the gold rushes discussed previously: periods when the system becomes ablaze with possibilities, enabling daring adventurers to take risks and uncover gold coins along the way, thereby fueling even greater hype. These cycles are commonplace in the realms of technology, startups, and academia. In the startup world, the previous gold rush may have been cryptocurrency, with the current one revolving around large language models.
So, if you possess an inquisitive nature and feel a slight “fear of missing out” during prevailing hype cycles, take solace and watch the game unfold; it’s never too late to learn or to become a maze runner. In due time, the best ideas will emerge, standing resolute against the ever-shifting tides.