The “neat” and “scruffy” portraits have long been painted to describe contrasting viewpoints, styles of reasoning, and methodologies in AI research. Essentially, the neats defend techniques derived from first principles and grounded in mathematical rigor, while the scruffies advocate diversity within cognitive architectures, sometimes meant as models of parts of the brain, sometimes just kludges or ad hoc pieces of engineered code. The recent success of deep learning has revived the debate between these two approaches to AI; in this context, some natural questions arise. How can we characterize, and how can we classify, these positions given the history of AI? More importantly, what is the relevance of these positions for the future of AI? How should AI research be pursued from now on, neatly or scruffily? These are the questions we address in this paper, drawing on historical analysis and on recent research trends to articulate how the field might best allocate its energy so as to reach maximal fruition.