> The LLM idea of a new theory is a plausible looking sequence of arguments, not an actually correct one. One way you can try to prevent this is just by asking, "Is this correct or did you just make this up?" And sure enough, most of the time theyʼll just admit they made it up. So my verdict for the moment is itʼs a mixed bag. The current models are very much stuck to the existing literature, which isnʼt useful if you want to do something new. If you kick them enough, they will eventually agree to do anything, but then you canʼt trust them.
Which is very generous, given that «do something new» is a euphemism for "thinking", and that other people's attempts to have LLMs correct their own output do not succeed «most of the time». She is likely referring only to the narrow check "did you simply invent that?", not to the broader "is this past output precise and accurate?".