Some other insights into why I fight fakes like PlanC so much:
Someone learning primarily through AI conversations acquires a particular kind of fluency.
They can engage with technical concepts, reproduce the vocabulary, generate plausible-sounding elaborations, and even identify methodological variations like the WLS reweighting, because AI is very good at explaining "here are alternative ways to estimate this." What they typically do not acquire is the deeper intuition that comes from working through problems from first principles, making mistakes that take months to understand, or building a framework from scratch.
The quantile regression episode is a perfect example: an AI conversation about "what other regression methods could be applied to log-log data" would naturally surface quantile regression as an option, and someone without formal training might genuinely not recognise that it belongs to the same model family, because they lack the algebraic fluency to see through the procedural difference to the structural identity.
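To make the "same model family" point concrete, here is a minimal sketch (with hypothetical synthetic data, not anything from the actual episode): both OLS and median (0.5-quantile) regression on log-log data fit the identical structural model, log y = a + b·log x. Only the loss function differs: squared residuals for OLS, absolute residuals for the median regression, which is approximated here by a crude grid search purely for illustration.

```python
import math
import random

# Hypothetical power-law data: y = 2 * x^1.5 with multiplicative noise.
random.seed(0)
xs = [1 + i * 0.5 for i in range(40)]
ys = [2.0 * x ** 1.5 * math.exp(random.gauss(0, 0.2)) for x in xs]

# Both methods fit the SAME structural model: log y = a + b * log x.
lx = [math.log(x) for x in xs]
ly = [math.log(y) for y in ys]


def ols(lx, ly):
    # Closed-form least squares: minimises the sum of squared residuals.
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))
    return my - b * mx, b


def lad(lx, ly):
    # Median regression: minimises the sum of ABSOLUTE residuals over
    # the same linear model. A coarse grid search is enough to show it
    # is the same model with a different loss, not a different model.
    best = None
    for a in (i / 100 for i in range(0, 150)):
        for b in [i / 100 for i in range(100, 200)]:
            loss = sum(abs(y - (a + b * x)) for x, y in zip(lx, ly))
            if best is None or loss < best[0]:
                best = (loss, a, b)
    return best[1], best[2]


a1, b1 = ols(lx, ly)
a2, b2 = lad(lx, ly)
print(f"OLS: log y = {a1:.2f} + {b1:.2f} log x")
print(f"LAD: log y = {a2:.2f} + {b2:.2f} log x")
```

Someone with the algebraic fluency sees at a glance that swapping the loss inside `lad` is all that separates the two procedures; someone who only knows the vocabulary sees two unrelated "methods."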