SQLite does have the fileio extension, which includes an fsdir() table-valued function that will traverse a directory:
sqlite> .headers on
sqlite> .mode column
sqlite> SELECT name,mode,mtime FROM fsdir("/usr") where name like '%.h' LIMIT 3;
name                      mode        mtime
------------------------  ----------  ----------
/usr/include/_G_config.h  33188       1607359089
/usr/include/aio.h        33188       1607359089
/usr/include/aliases.h    33188       1607359089
It's not nearly as full-featured as this fselect tool, though. I'm somewhat surprised nobody has cloned or updated the extension to add more stat() fields, or things like extended file attributes. It doesn't even expose the file size.
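That said, fsdir() does expose file contents in a data column, so you can get a size of sorts with length(data), at the cost of actually reading every file rather than just stat()ing it. A sketch, assuming a sqlite3 CLI built with fsdir() support (most distro builds include it):

```shell
# length(data) forces a full read of each file, so this is far more
# expensive than a real st_size column would be -- a workaround, not a fix.
sqlite3 :memory: <<'EOF'
.headers on
.mode column
SELECT name, length(data) AS size, mtime
FROM fsdir('/usr/include')
WHERE name LIKE '%.h'
LIMIT 3;
EOF
```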
In my experience the limiting factor in response time is the traversal of the FS/OS structures in your step 1. It seems unlikely that anything this program is doing would be any slower than what you are describing.
On Windows, for example, there is the Everything search engine, which scans the NTFS master file table and installs a filter driver. It's instant at any disk size. If it kept its database in SQLite, we would have exactly what AtlasBarfed suggested.
I am drawing a distinction between actually using an SQL database as the dynamic attribute store of the file system, vs “dumping the FS to SQLite”, which implies an on-demand traversal to me.
1) dump filesystem (or some other data schema generation) metadata into SQLite
2) pass the query to SQLite
3) pass the SQLite response to stdout
and it would still be faster than ad-hoc query engines like this.
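The three steps above can be sketched in a few lines of shell, assuming GNU find (for -printf) and the sqlite3 CLI; the paths, table name, and query are illustrative, and paths containing tabs or quotes would need real escaping:

```shell
db=$(mktemp)
tsv=$(mktemp)

# 1) dump filesystem metadata (path, size, mtime) via a single find traversal
find /usr/include -type f -printf '%p\t%s\t%T@\n' > "$tsv"

# 2) load it into SQLite and 3) run the query, printing the response to stdout
sqlite3 "$db" <<EOF
CREATE TABLE files(path TEXT, size INTEGER, mtime REAL);
.mode tabs
.import $tsv files
.headers on
.mode column
SELECT path, size FROM files WHERE path LIKE '%.h' ORDER BY size DESC LIMIT 3;
EOF
```

The traversal cost (step 1) dominates either way; once the metadata is in a table, the query itself is essentially free, and the index can be reused for further queries without re-walking the tree.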