No, I think they have explained this to each other (or something like it). But as you suggested, discussion is a lot more likely when there are corner cases or problems.
Exactly: there are a few extra steps between here and there, and it's possible to pick out what those steps are without having to conclude that giving up on all brain research is the only option.
Ah, perhaps I should have said something like "educational materials, and apps, and other useful things" (disapproving judgement in the original).
> Well, the thing is that the educational materials are largely free.
A triumph and fruition of these last decades of massive effort. Now we just need to deal with their quality (with commercial as bad as free). AI may help, by reducing barriers to content creation - you might, for example, now more easily author an intro astronomy textbook, one that doesn't reinforce the top-30 common misconceptions, something the most-used (US; commercial) texts still don't manage.
Sigh. One impact of AI will hopefully be more readily available systematic survey papers. [1] might or might not be a good place to start... but it's paywalled (by the National Science Teachers Association no less), and I don't quickly see preprints/scihub/etc. Here's an old unordered list for browsing[2], and a more recent one[3]. Trumper did a series of papers asking the same few questions of various populations, which gives a feel for the numbers - like half not knowing the cause of day and night. Most lists cover only subsets of astronomy, and most of the frequency data is on short lists. So... it's a mess. As are textbook reviews. Key phrases are "astronomy education research" and "misconceptions".
The one bit I explored was 'what color is the Sun (the ball itself)?'. Asking first-tier astronomy graduate students became a hobby, as most get it wrong (except... for those who had taken a graduate seminar covering common misconceptions in astronomy education). So I libgen'ed the 10-ish most-used intro astronomy textbooks in the US, according to some list. IIRC, it broke down roughly into thirds: correct (white); never explicitly stated, but between the surrounding photos and "yellow" (as a spectral classification, without clarification) there's no way students wouldn't be misled; and explicitly incorrect (yellow). Hmm, bulk evaluation of textbooks against some criteria is another thing multi-modal models could help with.
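If one wanted to try that, here's a minimal sketch, assuming the OpenAI Python SDK and a multimodal model - the model name, criteria list, and prompt are all illustrative placeholders, not a tested pipeline:

  # Hypothetical: score one scanned textbook page against a
  # misconception checklist with a multimodal chat model.
  import base64
  from pathlib import Path

  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  CRITERIA = [
      "Does the page state or imply that the Sun is yellow?",
      "Does it explain the cause of day and night correctly?",
      "Does it explain the cause of the seasons correctly?",
  ]

  def evaluate_page(image_path: str) -> str:
      """Judge one page image against each criterion."""
      b64 = base64.b64encode(Path(image_path).read_bytes()).decode()
      prompt = (
          "You are reviewing an intro astronomy textbook page for "
          "common misconceptions. For each question, answer "
          "yes/no/not-addressed and quote the relevant text:\n"
          + "\n".join(f"- {c}" for c in CRITERIA)
      )
      resp = client.chat.completions.create(
          model="gpt-4o",  # placeholder model name
          messages=[{
              "role": "user",
              "content": [
                  {"type": "text", "text": prompt},
                  {"type": "image_url",
                   "image_url": {"url": f"data:image/png;base64,{b64}"}},
              ],
          }],
      )
      return resp.choices[0].message.content

Loop that over page scans of each book on the list and you'd have a crude first pass, with a human spot-checking the model's quotes against the actual pages.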
(A musing aside re AI for systematic reviews. Creating one is a structured process. They have been very manpower-intensive, so they aren't refreshed as often as desired, nor consistently available. And at least in medicine ("X should be done in condition Y"), there's potential for impact. I imagine AI close-reading of papers isn't quite there yet. But maybe a human-AI hybrid process? A sketch of one step follows the quote below.)
> Systematic reviews are rigorous, transparent, and reproducible research studies that synthesize all existing evidence on a specific topic to answer a focused question and minimize bias. Unlike narrative reviews, they use predefined eligibility criteria, comprehensive searching, and critical appraisal to evaluate primary literature, often employing meta-analysis for quantitative results. [goog ai overview, edited]
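For concreteness, here's what the title/abstract screening step of such a hybrid might look like: the model does a cheap first pass against the predefined eligibility criteria, and anything it doesn't confidently exclude is routed to a human reviewer. Again a sketch assuming the OpenAI Python SDK; the criteria, model name, and JSON contract are illustrative assumptions, not a validated method:

  # Hypothetical: first-pass abstract screening against predefined
  # eligibility criteria; anything uncertain goes to a human.
  import json

  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  ELIGIBILITY_CRITERIA = (
      "Include only: randomized controlled trials, adult participants, "
      "intervention X for condition Y, published 2000 or later."
  )

  def screen_abstract(title: str, abstract: str) -> dict:
      """Return {"decision": "include|exclude|unsure", "reason": ...}."""
      resp = client.chat.completions.create(
          model="gpt-4o",  # placeholder model name
          response_format={"type": "json_object"},
          messages=[{
              "role": "user",
              "content": (
                  "Screen this study against the eligibility criteria. "
                  'Reply as JSON: {"decision": "include|exclude|unsure", '
                  '"reason": "..."}\n\n'
                  f"Criteria:\n{ELIGIBILITY_CRITERIA}\n\n"
                  f"Title: {title}\nAbstract: {abstract}"
              ),
          }],
      )
      return json.loads(resp.choices[0].message.content)

  # Anything not a confident exclude lands in the human pile:
  #   if screen_abstract(t, a)["decision"] != "exclude":
  #       queue_for_human_review(t, a)  # hypothetical helper

The point of the "unsure" bucket is that the human stays the decision-maker; the model only shrinks the pile, which is where most of the manpower currently goes.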
Well, we can't just say 'original = intent'. The original artists presumably expressed their intent as well as the medium of the time allowed, but that doesn't mean it's necessarily the best possible expression of that intent, ever.
It's like saying you can only watch The Simpsons with the exact late-1980s / early-1990s ads it originally aired with, and everything else is sacrilege.
But without asking them, it's pure conjecture. I don't think retconning the best expression of their intent is needed to justify this project, either. Sometimes it's fun to see whether you can improve on what exists, even if only as a vehicle for learning about DSP or whatever domain the learner is in.