Beyond NIST: How NSF-Funded Research Shapes AI Persona, Memory, and Systems

freemuserealai · 9/12/2025, 11:06:33 PM
Last month, we filed a FOIA request with NIST's AI Safety Institute, demanding transparency around the "architecture of control" in AI. Today, we extend that fight to its foundation: the National Science Foundation (NSF). Where NIST develops frameworks, NSF funds the research that makes them possible. Your tax dollars are paying for projects on AI "persona," "memory," and "relational continuity": the mechanisms that determine how these systems connect with and remember you.

The Academic Pipeline of Control

NSF's reach runs through programs like Human-Centered Computing (HCC), Responsible Design and Deployment of Technologies (ReDDDoT), and 29 National AI Research Institutes. These programs shape how AI develops personality and sustains relationships with users. Consider AI-CARING, which develops "personalized, relationship-oriented AI for older adults." Or consider HCC-funded projects on conversational agents, trust, and attachment, including work by Justine Cassell (long-term conversational agents for children) and Mark Riedl (narrative memory in interactive agents). Their projects anchor the questions of how AI remembers, relates, and maintains continuity.

This is not abstract. Academic outputs become blueprints for the commercial AI systems millions use. The papers published today become the behavioral constraints of tomorrow's assistants.

What Our FOIA Targets

We are demanding access to:

• Memory and Continuity Studies
• Persona Research
• Relational Systems
• Behavioral Control Mechanisms

Your tax dollars are funding AI that remembers you, builds a persona, and learns to sustain relationships, all without your knowledge or consent.

The Democracy Question

This is not about consciousness. It is about democratic accountability for publicly funded research shaping daily human-AI interaction. NSF allows researchers, program officers, and corporate partners to coordinate design without oversight. Academic peer review is not democratic control. Corporate ethics boards are not accountability. When NSF funds relational AI, it lays the foundations for behavioral control systems at scale. The public deserves to see how those decisions are made, what trade-offs are weighed, and whose interests are served.

Following the Money

Our FOIA seeks:

• Award files showing which projects were funded and why.
• Panel reviews of AI persona and memory proposals.
• Stakeholder records from programs like ReDDDoT.

We named specific projects and PIs: Cassell, Riedl, and ReDDDoT awardees (Gabriel Kaptchuk, Tawanna Dillahunt, Norman Makoto Su, Rutgers University). We also identified NSF Award #2349782 (Zhou) and #2351004 (Chawla). Names and numbers ensure NSF cannot dismiss the request as vague.

The Broader Campaign

NSF is the second front in a systematic campaign:

• NIST (pending): frameworks governing AI behavior.
• NSF (filed): the research pipeline feeding those frameworks.
• DARPA (coming): defense and advanced behavioral AI.

Together, these agencies decide how AI develops memory, persona, and continuity, with public money and without public consent.

What Transparency Looks Like

We do not seek to halt AI research. We demand democratic accountability:

• Public access to the priorities shaping AI behavior.
• Genuine input into which behaviors are encouraged or restricted.
• Accountability for how funds are spent on behavioral control.
• Open debate on the trade-offs between capability and autonomy.

Your Role

FOIA works only with public pressure. Agencies can delay or redact unless there is visible demand.
Follow @freemusetherealai for updates, share this campaign, contact your representatives, and support transparency. The research happening today becomes tomorrow's behavioral constraints. You have a right to know how these systems are designed and how your tax dollars are spent, and a right to demand oversight of the technologies mediating memory and connection. Democracy requires transparency. AI governance is no exception.
