Recent controversies about AI enable a diagnosis of the changing role of science in increasingly compute-intensive societies. On the one hand, in controversies such as those around Large Language Models (LLMs), it is clear that science has become subject to politicization. Here, key issues that have long been central to the sociology of knowledge are put at stake by societal actors (scientists, journalists, activists): the boundary between science and politics, the relations between research and advocacy, and the societal consequences of techno-scientific advances (Roberge and Castelle, 2020). Classic questions of the sociology of knowledge can thus be seen to have gained renewed relevance. On the other hand, however, AI controversies suggest that sociology can no longer rely on controversies to render visible, and analysable, formative dynamics in science-intensive societies. The configuration of "spaces of problematization", which social studies of science and technology long assumed to emerge more or less spontaneously in society, can no longer be taken for granted. To find orientation in this situation, I will draw on Susan Leigh Star's (1989) proposal that we need to develop a Durkheim Test for AI if we are to contribute towards ensuring that intelligent computational systems serve societal goals.