What an odd question – math is not an experimental science in the sense that experiments are required to demonstrate validity. Experiments can suggest phenomena, but the only evidence accepted in math is logical argument, which in principle can and must be replicated by those reading the paper.
Many papers do contain errors, yes—but the major results generally hold up, even when some intermediate steps are wrong.
This reminds me of a bunch of recent posts on HN about AI and its "reasoning" process.
https://news.ycombinator.com/item?id=44052713
https://news.ycombinator.com/item?id=43673463
An interesting read and take on math papers.
[edit] adding another thought
Maybe LLMs and their reasoning process could serve as a model for how consensus is reached in a group.