AI. Shiny. 200ish words.
There has been a recent explosion in AI research.
Siri, introduced in 2011, was my first real interaction with an 'intelligent' computer, but it was terrible beyond the first 5 minutes. Siri's rigidity meant it couldn't go far beyond timers and weather. Looking back, it makes sense why: Siri was (and still is) very hand-coded, and every interaction gets forced into a narrow set of predefined questions and answers. Boring.
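That hand-coded, rule-based approach can be sketched in a few lines. This is a hypothetical toy (not Siri's actual code, and the patterns and canned replies are made up): anything that doesn't match a predefined pattern falls straight through to a shrug.

```python
import re

# Hypothetical rules: each utterance must match one of these patterns.
RULES = {
    r"set (?:a )?timer for (\d+) minutes": lambda m: f"Timer set for {m.group(1)} minutes.",
    r"what'?s the weather": lambda m: "It's 72 and sunny.",  # canned answer
}

def respond(utterance: str) -> str:
    for pattern, handler in RULES.items():
        m = re.search(pattern, utterance.lower())
        if m:
            return handler(m)
    return "Sorry, I didn't get that."  # everything else falls through

print(respond("Set a timer for 5 minutes"))  # Timer set for 5 minutes.
print(respond("Tell me a joke"))             # Sorry, I didn't get that.
```

The brittleness is built in: the system only ever "understands" what someone typed into its rulebook.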
In 2012, a bunch of nerds published AlexNet, which many consider the landmark moment of AI. It labeled images with unparalleled accuracy using a 'neural network', a general descriptor for models that work loosely like a brain. Their paper proved that this approach (deep learning) was the way of the future. the way of the future. the way of the future... Cool.
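The 'brain-like' bit boils down to something surprisingly small. Here's a single artificial neuron, the basic unit those networks stack by the millions; the weights below are made up for illustration, whereas real networks like AlexNet learn tens of millions of them from data.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, squashed by a nonlinearity (sigmoid),
    # so the output is always between 0 and 1.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

# Toy example with arbitrary weights: the output is a 'confidence' score.
print(neuron([0.5, 0.2], [1.0, -2.0], 0.1))
```

Chain enough of these together in layers and tune the weights with data, and you get systems that label photos better than any hand-written rules ever did.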
Since then, the fields of natural language processing and computer vision, previously thought separate, have converged. Modern architectures, such as the transformer (2017) and the U-Net (2015), enjoy near-universal adoption across both. As a result, the two fields share research papers, attend the same conferences, and speak the same lingo. Two once-siloed fields are now good chums, leading to more collaboration, insight, and all that yummy stuff. Wonderful.
Today's marvels (self-driving cars, ChatGPT, Midjourney) all run on the deep learning paradigm, completely revising our notion of what a 'computer program' is. Wow.