Eric Ries here, happy to answer questions about Answer.AI or any of the related themes Jeremy talked about in the announcement post: rapid iteration, R&D, startup governance, long-term thinking, etc.
Excited to see what comes out of this new lab. And if you're interested in joining the cause, please do get in touch. Both Jeremy and I are on this thread and generally reachable.
How do you look at hiring "experienced people" vs. "enthusiastic interns" on something like this? More generally, how quickly do you think the team will grow, and what should the ratio be between the "old" and the "young"?
Very hard to guess how it might all shake out. I would say that both Jeremy and I have an almost fanatical belief in the power of uncredentialed outsiders. So I would guess we will be looking for curious, open-minded generalists more than any specific age or experience level. I do expect we will grow headcount rather slowly, but that doesn't mean we will launch infrequently.
Puja has a few talks on such things, many very related and worth listening to imho. But most relevant: she's been working on a mechanism design that uses quadratic funding within an existing hierarchy to move funding power from funders to the on-the-ground researchers who best predict "breakthrough research" areas -- i.e., which intersections of fields will pay off. "Breakthrough innovation" is objectively measured and rewarded as "research that becomes highly cited, and which draws together disparate source citations that have never before appeared together."
So the idea is that in successive funding rounds, funding power slowly accrues in the people who best predict where research innovation will appear. Even if that turns out to be *gasp* grad students.
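To make the mechanism concrete, here is a minimal sketch of the core quadratic-funding formula that designs like this build on (the function name and the example numbers are illustrative, not taken from her design): a project's matched amount is proportional to the square of the sum of the square roots of its individual contributions, so broad support from many small backers beats the same total from one large backer.

```python
import math

def quadratic_match(contributions):
    """Quadratic funding: match is proportional to the square of
    the sum of square roots of the individual contributions."""
    return sum(math.sqrt(c) for c in contributions) ** 2

# Illustrative example: the same total funding, distributed differently.
broad = quadratic_match([1, 1, 1, 1])   # four backers of 1 -> 16.0
concentrated = quadratic_match([4])     # one backer of 4  -> 4.0
```

In a successive-rounds setup like the one described above, the contribution weights themselves could then be updated toward whoever predicted highly-cited, recombinant research, so predictive funders gradually gain influence.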
(I'm particularly interested to see Polis, a "wiki survey" tool I've been using since 2016, used as one of the signals in such a system. It can help make the landscape of beliefs and feelings that people bring to the process more legible, especially at the collective level. Which is important, because high-dimensional "feeling data", when placed out-of-scope in other systems, are often a reason why we get trapped in local minima of innovation that inhibit the recombination of ideas.)
I was going to link to Polis after I read the first part of your answer, but I see you’ve beaten me to it. And in so doing you’ve pretty much answered your own question. Thanks!
I am probably a bit too enthusiastic about applications of Polis-like systems (in the "when you have a hammer" sense), but there's a bit more to the system's mechanism design than just Polis -- it's just one signal of many during a full-day event format.
I expect some form of the system she describes to be the basis of much research funding in the coming years (following prototypes in more nimble cryptocurrency/governance communities).
There's an upcoming pilot with real funding in late February that I'm excited to be supporting! If you have time to watch her video and find it interesting, you should definitely get in touch with her after that.
Hi Eric, I’m a professor of human centered design in the Netherlands and I help train design students to prototype and design new AI user experiences. Could you share some ideas for AI experiences that you don’t have time to pursue but wish other people would explore?
We’ve prototyped many different tools before. However, the space is frankly disorienting because there is so much opportunity. Any suggestions to inspire engineering students to develop useful explorations?
Sure, just some ideas at random, but the most important advice is just to try new things and see what feels good:
- dashboards or other reports that call you when something changes, so you don't have to log in to see what's changed
- extremely personalized settings that remember exactly who you are and what you like to do with the interface, to the point that it basically uses the interface for you
- rapid prototyping interfaces, doing things like the "make it real" demo
- extremely simple apps that use AI in the backend to do amazing things. how about a camera app that just sends everything it sees to GPT-4V. think how much easier that would be than loading up a translator app, taking a picture of a menu, uploading the picture, etc. just figure out what I might want to do based on the fact that I took a photo
- artistic/musical/creative apps that require only your phone and that you can noodle on while you have 5m of idle time. maybe the AI works on it silently in the background and then the user gives notes or feedback whenever they have time. end product is a pro-level artistic work that reflects the user's taste level but the AI's mastery of technique
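For the camera-app idea above, a minimal sketch of what the backend request might look like, assuming the OpenAI chat-completions vision message format; the model name, the default prompt, and the `vision_request` helper are illustrative assumptions, not a spec:

```python
import base64

def vision_request(image_bytes, prompt="What should I do with this?"):
    """Build a chat-completions payload with an inline base64 image,
    in the vision message format (model name is an assumption)."""
    b64 = base64.b64encode(image_bytes).decode()
    return {
        "model": "gpt-4o",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    }

# The app would POST this payload with an API key and act on the reply --
# e.g. translating a menu the instant the photo is taken, with no
# separate "upload to translator" step for the user.
```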
If you're visiting London and feel inspired by the "make it real" demo, people in that circle routinely demo at Maggie Appleton's rad Future of Coding events[1], alongside many other talented people building UIs and interfaces.
I expect quite a bit of partnering to make sense, though nothing concrete to share at this time. We explicitly designed this to be non-competitive with the best companies in the field (who have the things they do well covered).