Compiler research offers no shortage of challenges. We want our languages to be faster and more flexible, and to catch more errors, all while remaining fully dynamic. In addition, machine-learning researchers have realized the need for sophisticated compiler support, raising completely new challenges. This is not exclusive to ML: the scientific-computing community, with its need to run complex algorithms at top speed, has been pushing compiler design for a long time and shows no sign of slowing down. I will discuss how the combination of Python and C++ has been highly successful in these areas, combining great performance with outstanding usability, and why this combination is fundamentally unfit for the challenges of the future. Google's bold move to rewrite TensorFlow (Python, C++) in Swift is just one symptom; others include ML frameworks implementing their own IRs and compilers in their quest to offer both flexibility and performance (TensorFlow's MLIR, Google's XLA, PyTorch's Glow). After establishing where Python fails, I will explain how the Julia language combines the best of Python and C++ in both compiler technology and language design.
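To make the IR point concrete, here is a minimal, purely illustrative sketch (not code from any of these frameworks) of the tracing technique they rely on: because the Python interpreter never hands the framework a whole program, frameworks overload operators to record operations into their own expression graph, an IR that a separate compiler such as XLA or Glow can then optimize.

```python
class Node:
    """One operation in a toy expression IR, recorded via operator overloading."""

    def __init__(self, op, args=()):
        self.op = op      # operation name, e.g. "add", or a variable name
        self.args = args  # child nodes (operands)

    def __add__(self, other):
        # Instead of computing a value, record the operation in the graph.
        return Node("add", (self, other))

    def __mul__(self, other):
        return Node("mul", (self, other))

    def __repr__(self):
        if not self.args:
            return self.op
        return f"{self.op}({', '.join(map(repr, self.args))})"


# Ordinary-looking Python arithmetic, but each operator call builds IR
# rather than executing anything.
x, y = Node("x"), Node("y")
graph = x * y + x
print(graph)  # add(mul(x, y), x)
```

The cost of this approach is exactly the tension the abstract describes: the user writes dynamic Python, but the framework must reimplement a compiler pipeline underneath it to recover performance.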