Again, another prominent, widely cited anti-AI study with major methodological flaws.
The AI literacy measures are fundamentally flawed. The 25-item scale conflates unrelated constructs, mixing questions about HIPAA regulations, PCI DSS standards, and password storage with actual AI knowledge. Question 10 asks about HIPAA sharing rules, question 5 about payment card standards. These measure regulatory knowledge, not AI literacy.
Each study uses a correlational design while making causal claims. The title itself, "Lower AI Literacy Predicts Greater AI Receptivity," implies a directionality that cross-sectional data cannot establish.
Study 1 uses 27 countries with massive unmeasured confounds, like cultural attitudes toward technology, digital infrastructure, and education systems, to claim AI literacy drives receptivity.
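To make the confounding point concrete, here is a minimal sketch (all numbers and variable names are hypothetical, not the study's data): a single unmeasured country-level factor that raises literacy and lowers receptivity produces a strong pooled correlation even though literacy has zero effect within every country.

```python
# Hypothetical simulation: country-level confound, no true individual-level effect.
import numpy as np

rng = np.random.default_rng(0)
n_countries, n_per_country = 27, 1_000
country_factor = rng.normal(size=n_countries)  # unmeasured: infrastructure, attitudes, etc.

blocks = []
for c in country_factor:
    literacy = c + rng.normal(size=n_per_country)      # confound raises literacy...
    receptivity = -c + rng.normal(size=n_per_country)  # ...and independently lowers receptivity
    blocks.append(np.column_stack([literacy, receptivity]))

data = np.vstack(blocks)
pooled_r = np.corrcoef(data[:, 0], data[:, 1])[0, 1]
within_r = np.mean([np.corrcoef(b[:, 0], b[:, 1])[0, 1] for b in blocks])

print(f"pooled correlation across 27 countries: {pooled_r:+.3f}")  # strongly negative
print(f"mean within-country correlation:        {within_r:+.3f}")  # ~ zero
```

If the headline estimate pools individuals across countries, country fixed effects or within-country estimates would be the minimum needed to rule this kind of confound out.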
Study 2 has a 41.9% attention check failure rate that correlates with AI literacy scores. The authors even acknowledge this creates bias but proceed anyway.
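A quick sketch of why that matters (the 41.9% figure is from the paper; the true correlation and the dropout model below are invented): if the odds of failing the attention check fall as literacy rises, the retained sample is higher in literacy and range-restricted, so the surviving-sample estimate is attenuated and no longer describes the population being sampled.

```python
# Hypothetical simulation: attention-check failure correlated with the predictor.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
literacy = rng.normal(size=n)
# assume a true population correlation of -0.30
receptivity = -0.30 * literacy + np.sqrt(1 - 0.30**2) * rng.normal(size=n)

# failure probability falls with literacy; intercept tuned so roughly 42% fail overall
p_fail = 1.0 / (1.0 + np.exp(1.2 * literacy + 0.40))
passed = rng.random(n) > p_fail

r_all = np.corrcoef(literacy, receptivity)[0, 1]
r_pass = np.corrcoef(literacy[passed], receptivity[passed])[0, 1]
print(f"failure rate:                  {p_fail.mean():.1%}")
print(f"true correlation (everyone):   {r_all:+.3f}")
print(f"estimate (passers only):       {r_pass:+.3f}")  # attenuated
print(f"mean literacy, passers vs all: {literacy[passed].mean():+.2f} vs {literacy.mean():+.2f}")
```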
Studies 5 and 6 test mediation through "magical thinking" using cross-sectional data, violating the temporal precedence requirement for mediation. You cannot establish that low literacy → magical thinking → high receptivity without measuring this over time. The Hayes PROCESS macro they use explicitly warns against this.
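To illustrate, here is a minimal sketch (not the authors' code; the effect sizes and variable names are invented): generate data in which the causal order is the reverse of the paper's story, then fit the paper's claimed mediation anyway. The bootstrapped indirect effect, the quantity a PROCESS-style simple-mediation model reports, comes out "significant" even though literacy causes nothing in this data.

```python
# Hypothetical simulation: reversed causal order still "passes" cross-sectional mediation.
import numpy as np

rng = np.random.default_rng(2)
n = 2_000

# TRUE data-generating process: receptivity -> magical thinking -> (lower) literacy
receptivity = rng.normal(size=n)
magical = 0.5 * receptivity + rng.normal(size=n)
literacy = -0.5 * magical + rng.normal(size=n)

def slope(y, *xs):
    """OLS slope of y on the first predictor, controlling for the rest."""
    X = np.column_stack([np.ones_like(y), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def indirect(lit, mag, rec):
    a = slope(mag, lit)       # path a: literacy -> magical thinking
    b = slope(rec, mag, lit)  # path b: magical thinking -> receptivity, controlling literacy
    return a * b

# bootstrap the indirect effect of the paper's claimed model:
# literacy -> magical thinking -> receptivity
boot = np.array([
    indirect(literacy[idx], magical[idx], receptivity[idx])
    for idx in (rng.integers(0, n, n) for _ in range(2_000))
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect:  {indirect(literacy, magical, receptivity):+.3f}")
print(f"95% bootstrap CI: [{lo:+.3f}, {hi:+.3f}]")  # excludes zero despite the reversed DGP
```

The cross-sectional fit cannot distinguish the two orderings; only temporally separated measurements could.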
The "magical thinking" explanation appears nowhere in the preregistrations but is presented as the main theoretical contribution. The explanation is unfalsifiable and contradicts their own data. Study 6 shows people with low AI literacy rate AI as less capable and more fearful yet supposedly use it more because it seems "magical." This is incoherent. why would perceiving something as less capable but magical increase usage?
The biggest flaw is that without establishing what "AI literacy" actually measures, every subsequent analysis is meaningless.