Show HN: Claude-Powered Survival Analysis with One-Click Models (Onco-Insight)

4 points | DATAIZE | 2 comments | 9/10/2025, 2:32:50 AM | dataize.me
We built Onco-Insight for clinicians and cancer researchers who need survival analysis and machine learning without having to navigate complicated menus in SPSS/SAS or write code.

Onco-Insight is powered by Claude 3.5 Sonnet (via AWS Bedrock), but instead of just emitting code snippets, it uses a hybrid UI. You start by describing your task, for example, "KM for OS with stratification by stage; then Cox with age/ER status". The agent then suggests a plan: the models, variables, and checks.

The Human-in-the-Loop (HITL) step allows you to review and confirm the plan. This includes checking the model list, predictors, censoring rules, time/endpoint, and key considerations like competing risks, which can significantly affect survival outcomes.
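To make the HITL step concrete, here is a hypothetical sketch of what a reviewable plan and confirmation gate could look like. All field names and the `approved` helper are our illustration, not Onco-Insight's actual schema or API:

```python
# Hypothetical analysis plan a HITL reviewer might confirm before a run.
# Field names are illustrative only, not Onco-Insight's real schema.
plan = {
    "endpoint": "overall_survival",
    "time_origin": "date_of_diagnosis",
    "censoring": "administrative, last follow-up",  # right-censoring rule
    "models": [
        {"type": "kaplan_meier", "strata": ["stage"]},
        {"type": "cox_ph", "predictors": ["age", "er_status"],
         "checks": ["proportional_hazards"]},
    ],
    "considerations": ["competing_risks"],
}

def approved(plan, decision):
    """Minimal HITL gate: the pipeline runs only on an explicit 'go'
    and only if the plan specifies an endpoint and a time origin."""
    complete = bool(plan.get("endpoint")) and bool(plan.get("time_origin"))
    return complete and decision == "go"
```

The point of a gate like this is that nothing executes until the clinician has explicitly confirmed the endpoint, time origin, and censoring rules.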

After a one-click run, Onco-Insight executes the pipeline and provides the results, assumption checks, and an interpretation in plain English. You can then iterate by accepting or refining the agent's next steps, such as proportional hazards diagnostics, RSF benchmarking, or calibration.

Why this approach? We found that while large language models (LLMs) are great at planning, they can be unreliable if they just make you copy and paste code. Our agent + HITL approach gives you control while eliminating boilerplate work. Most runs provide plots/tables, model diagnostics, and a concise draft of the methodology and interpretation.

What's included today?

- Survival analysis: Kaplan-Meier (grouped), Cox PH (with PH tests), AFT (select families), RSF (out-of-bag metrics), and time-dependent AUC.
- Data guards: missingness reports, event/censor checks, leakage checks, and basic harmonization to mCODE/FHIR fields when available.
- Data sources: your own datasets, plus public data commons such as SEER.
- Outputs: figures/tables and an auditable "analysis plan" showing what was run, the parameters used, and the QC steps.
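For readers less familiar with the methods above, this is the Kaplan-Meier estimator that a "KM for OS" run computes under the hood. A minimal sketch in pure Python with toy data (Onco-Insight's own implementation will differ):

```python
def kaplan_meier(times, events):
    """Return [(t, S(t))] survival-curve steps for right-censored data.
    events[i] == 1 means the event (e.g. death) was observed at times[i];
    events[i] == 0 means the subject was censored at times[i]."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = [(0.0, 1.0)]
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = removed = 0
        # group all subjects sharing this event/censoring time
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]
            removed += 1
            i += 1
        if deaths:  # the curve only drops at observed event times
            surv *= 1.0 - deaths / n_at_risk
            curve.append((t, surv))
        n_at_risk -= removed
    return curve

# Toy overall-survival data: 6 patients, two censored (event=0).
times  = [5, 8, 8, 12, 15, 20]
events = [1, 1, 0, 1,  0,  1]
for t, s in kaplan_meier(times, events):
    print(f"t={t:>4}  S(t)={s:.3f}")
```

Censored subjects leave the risk set without dropping the curve, which is exactly the event/censor bookkeeping the data guards above are meant to verify.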

What we'd love feedback on:

- Are the HITL checkpoints sufficient (e.g., specifying variable types, time origin, left truncation, competing risks)?
- Which survival/ML diagnostics are most important to surface in the UI?
- Which outputs are essential for clinical journals or registries?

We're still in the early stages and are currently in beta with hospital partners. We're happy to answer any detailed questions you have about validation, reproducibility, and data handling.

Thanks!

Team DATAIZE (https://dataize.me)

Comments (2)

HOO-hoo · 8h ago
What are the most common user corrections during the HITL review step, and how do these improve the agent's subsequent suggestions?
DATAIZE · 8h ago
In the HITL review, users act on the agent's proposed analysis plan and execution steps by replying with 'go', 'stop', or 'reject' commands, much like the accept/reject flow in Cursor.

When a user rejects a proposal, the agent stays within the same scope of tool execution but drops the rejected suggestion from its subsequent proposals.