I made an agentic interface for my university's Learning Management System this past semester. The project was rejected by my professor and failed spectacularly.
The aim was to make an LMS so good you don't need to use it - literally.
Check deadlines, find missed submissions, and much more.
All of this through simple prompting.
My uni is called NUST, so I made a better LMS for it called NUTS LMS.
https://www.nutslms.com. It's a Chrome extension that serves as an agentic interface for my university's LMS.
The AI Agent is called "Deez Nust" and it can do pretty much everything a human can do:
- Check deadlines
- Download assignments
- Even do a quiz for you
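To give a rough idea of how an agent like this can map a prompt onto LMS actions, here is a minimal tool-dispatch sketch. This is purely illustrative: the tool names, handlers, and return strings are assumptions, not the actual NUTS LMS implementation.

```typescript
// Hypothetical sketch: route a model's tool call to a browser action.
// Each tool wraps one thing a human could do in the LMS UI.

type ToolCall = { name: string; args: Record<string, string> };

const tools: Record<string, (args: Record<string, string>) => string> = {
  // Stand-in handlers; a real extension would scrape or call the LMS here.
  check_deadlines: () => "3 deadlines this week",
  download_assignment: (args) => `downloaded ${args.course}/${args.file}`,
};

function dispatch(call: ToolCall): string {
  const tool = tools[call.name];
  if (!tool) return `unknown tool: ${call.name}`;
  return tool(call.args);
}

console.log(dispatch({ name: "check_deadlines", args: {} }));
```

The point is just that "everything a human can do" reduces to a small set of named tools the model can pick from.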
Is the tech good enough for it to be undetectable?
If I were to hypothetically use it to complete the handwritten assignments that my old-school professors demand (in many of the yappier courses), would this slip through without getting caught?
This is all hypothetical, of course.
We are moving up the abstraction hierarchy.
Most programmers don't write low-level code these days.
We kept moving up, and now we have natural-language programming.
The programmers of tomorrow will mostly use natural language to program; that doesn't mean there won't be "lower-level" programmers maintaining things.
As for the critical part, the way I see it, AI will excel and be readily applied in low-stakes environments (frontend, games, etc.), where you can simply loop the agent until it clicks and gets the right output.
In high-stakes environments (healthcare, self-driving, etc.), AI will be used reluctantly for the foreseeable future, because the cost of an error is too high.
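The "loop it until it clicks" pattern can be sketched in a few lines: keep calling a generator until a checker accepts the output, up to a retry cap. The generator and validator below are toy stand-ins, not any real model API.

```typescript
// Retry-until-valid loop: viable when outputs are cheap to verify
// (low-stakes work), and exactly what you can't afford when a single
// bad output is costly (high-stakes work).

function loopUntilValid(
  generate: (attempt: number) => string,
  isValid: (out: string) => boolean,
  maxAttempts = 5
): string | null {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const out = generate(attempt);
    if (isValid(out)) return out;
  }
  return null; // gave up: no valid output within the budget
}

// Toy generator that "succeeds" on the third try.
const result = loopUntilValid(
  (n) => (n < 3 ? "broken output" : "passing output"),
  (out) => out === "passing output"
);
console.log(result); // "passing output"
```

The asymmetry in the argument above is visible here: the loop only helps when `isValid` is cheap and a failed attempt costs nothing.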