One really interesting aspect of this, which has come up in some revealing conversations with the data engineers in my org, is how best to expose the firehose of data to people in BI tooling. We use dbt, with Metabase as the BI tool, and a lot of thought goes into creating a clearinghouse that serves the needs of the organization.

The pattern currently getting attention is to ELT into what the data engineers call an OBT (one big table). The OBT is cleaned, denormalized, and ready to be sliced on; an org might maintain several of these OBTs covering different areas of interest. End users then import an OBT into Metabase to drive their filtering and build dashboards. The goal is to reduce reliance on custom SQL scripts and push all of that slicing and dicing into Metabase's front-end logic, where filters can be applied dynamically, rather than trying to maintain a bazillion SQL variants.
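For anyone who hasn't seen the pattern, a dbt OBT model is usually just a wide denormalized select over the staging models. A minimal sketch (all table and column names here are hypothetical, for illustration only):

```sql
-- models/marts/obt_orders.sql
-- Hypothetical OBT: join normalized staging models into one wide,
-- cleaned table that Metabase users can filter without writing SQL.
select
    o.order_id,
    o.ordered_at,
    c.customer_name,
    c.customer_region,
    p.product_name,
    p.product_category,
    o.quantity,
    o.quantity * p.unit_price as revenue
from {{ ref('stg_orders') }} as o
join {{ ref('stg_customers') }} as c on o.customer_id = c.customer_id
join {{ ref('stg_products') }} as p on o.product_id = p.product_id
```

End users then slice on columns like customer_region or product_category directly in Metabase's UI instead of each question getting its own hand-maintained SQL variant.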
Eventually I think we'll move into a post-ChatGPT world where you give ChatGPT (or whatever equivalent) your schema and a question, and it outputs the dashboards for you. We aren't quite there yet, though.
I like this, and I think it's where modern AI will shine the most. Like Clippy, but for data.
The question (outside the scope of this thread) is what happens when you feed that decision back into the system. I think the "recursive AI" question has been exhausted, though.
Have you evaluated Superset or Lightdash against Metabase? If so, I'd love to hear about your experience. I'll shortly be helping a client company migrate their BI off Looker and haven't gotten my hands dirty with the options yet.