Show HN: A new alternative to Softmax attention – live GD-Attention demos

GhostDrift · 8/10/2025, 3:48:24 PM · zenodo.org
We built two live demos to illustrate the Ghost Drift Theory — a framework for modeling semantic coherence — and a new attention mechanism called GD-Attention.

• Part 1 — Semantic Energy Landscape: visualize the unique coherence point s* and the jump direction g in real time.
• Part 2 — GD-Attention vs Softmax: "Softmax blends, GD-Attention selects" — explore the difference interactively.

Paper (with Zenodo DOI): [Ghost Drift Theory & GD-Attention PDF](https://zenodo.org/records/16757311)
▶ Part 1: https://gdt-semantic-energy-demo-jdgoe6gkrohleltjgvwgwq.stre...
▶ Part 2: https://gda-vs-softmax-demo-zooif4cfewrmnv85zaqymy.streamlit...

Would love feedback on clarity, use cases, and potential improvements.

Comments (1)

GhostDrift · 2h ago
Background: Softmax attention blends all values, weighted by their normalized scores. While effective, it inherently diffuses the signal — sometimes undesirable in tasks that require discrete focus.
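For concreteness, here is the standard single-query scaled dot-product formulation being contrasted with (a plain NumPy restatement of textbook softmax attention, not code from the paper; q, K, V are just illustrative names):

```python
import numpy as np

def softmax(x):
    x = x - x.max()                          # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum()

def softmax_attention(q, K, V):
    """Single-query softmax attention: every value contributes,
    weighted by its normalized score, so the output is a blend."""
    scores = K @ q / np.sqrt(q.shape[-1])    # one score per key
    weights = softmax(scores)                # all weights > 0, sum to 1
    return weights @ V                       # convex blend of all rows of V
```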

GD-Attention, derived from the Ghost Drift Theory, takes a different approach: it selects a single coherence point s* along a "jump" direction in semantic space, determined by an underlying energy function.
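As a rough intuition for "selects instead of blends" (a toy sketch only, not the paper's formulation: the energy below is a placeholder negated dot-product, whereas the actual semantic energy and jump direction g are defined in the paper):

```python
import numpy as np

def gd_attention_toy(q, K, V, energy=None):
    """Toy 'selection' attention: score each candidate with an energy
    function and return the single value at the minimum, instead of a
    softmax-weighted blend. The default energy is a stand-in
    (negative scaled dot-product), NOT the paper's semantic energy."""
    if energy is None:
        energy = lambda k: -(k @ q) / np.sqrt(q.shape[-1])
    energies = np.array([energy(k) for k in K])
    i_star = int(np.argmin(energies))        # analogous to the unique coherence point s*
    return V[i_star], i_star
```

For the same q, K, V, softmax_attention above returns a mixture of all values, while this toy selector returns exactly one row of V — the "blends vs selects" contrast the second demo visualizes.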

The attached paper formalizes this mathematically, and the demos let you see and interact with these mechanisms in real time:

Part 1: Energy landscape → find s* and g
Part 2: Compare Softmax vs GD-Attention outputs and selectivity

This is still experimental, and feedback from the community — especially on edge cases, real-world applicability, and potential integration with transformer architectures — would be hugely appreciated.