Yeah exactly that. Currently an admin Kubeconfig is exposed but proper user management will follow. From there, you are really in full control. We aim to make the repetitive stuff easy and leave the custom stuff up to you. You will have full control of the cluster.
As for custom configs, yeah we expose flags and config params to populate things that must be changed, like max_session in a db or innodb_buffer_pool, etc. But you are able to set any custom flags you want via console.
I’m waiting for an LLM-focused language. We’re already seeing that AI is better with strongly typed languages. If we treat “how can an agent ensure correctness as instructed by a human” as the priority, things could get interesting. Question is, will humans actually be able to make sense of it? Do we need to?
I'm pretty sure ChatGPT could write a program in any language that is similar enough to existing languages. So you could start by translating existing programs.
Yes. The learning comes from running tests on the program and ensuring they pass, so running as an agent. Tests and the compiler give hard feedback; that's the data outside the model that it learns from.
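A minimal sketch of that hard-feedback idea (the file names and the binary 0/1 reward are my assumptions, not anything from the thread): write the candidate program and its tests to disk, run the tests in a subprocess, and reward only a clean exit.

```python
import os
import subprocess
import sys
import tempfile

def hard_feedback(candidate_source: str, test_source: str) -> float:
    """Reward a candidate program 1.0 iff its tests pass, else 0.0."""
    with tempfile.TemporaryDirectory() as tmp:
        with open(os.path.join(tmp, "candidate.py"), "w") as f:
            f.write(candidate_source)
        with open(os.path.join(tmp, "test_candidate.py"), "w") as f:
            f.write(test_source)
        # A syntax error and a failing assertion both exit non-zero,
        # so the compiler and the tests feed the same reward channel.
        result = subprocess.run(
            [sys.executable, "test_candidate.py"],
            cwd=tmp, capture_output=True, text=True,
        )
    return 1.0 if result.returncode == 0 else 0.0
```

The point of routing everything through a subprocess exit code is that the signal comes from outside the model, so the model can't grade its own homework.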
I think modern RLHF schemes have models that train LLMs. LLMs teaching each other isn't new.
It’s called “reinforcement learning” and it’s a common technique in machine learning.
You provide a goal as a big reward (eg test passing), and smaller rewards for any particular behaviours you want to encourage, and then leave the machine to figure out the best way to achieve those rewards through trial and error.
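That trial-and-error loop can be sketched in miniature (everything here, from the candidate pool to the epsilon-greedy rule and attempt count, is a toy stand-in for a real RL setup):

```python
import random

def rl_loop(candidates, reward_fn, attempts=1000, epsilon=0.1):
    """Toy trial-and-error loop: mostly re-try the best-scoring
    candidate so far, but occasionally explore a random one."""
    scores = {c: 0.0 for c in candidates}   # running mean reward
    counts = {c: 0 for c in candidates}
    best = random.choice(candidates)
    for _ in range(attempts):
        if random.random() < epsilon:
            choice = random.choice(candidates)  # explore
        else:
            choice = best                       # exploit
        r = reward_fn(choice)
        counts[choice] += 1
        # incremental mean update of this candidate's score
        scores[choice] += (r - scores[choice]) / counts[choice]
        best = max(candidates, key=lambda c: scores[c])
    return best
```

A real setup samples programs from a model rather than a fixed pool, but the shape is the same: the reward function is the only guidance, and the loop finds whatever maximises it.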
After a few million attempts, you generally either have a decent result, or more data around additional weights you need to apply before reiterating on the training.
Defining the goal is the easy part: as I said in my OP, the goal is unit tests passing.
It’s the other weights that are harder. You might want execution speed to be one metric. But how do you add weights to prevent cheating (e.g. hardcoding the results)? Or the use of anti-patterns like global variables? (For example — though one could argue that scoped variables aren’t something an AI-first language would need.)
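One way those weights could be combined (all the numbers here are made up for illustration): a big reward for the goal, smaller shaping terms, and a held-out test set as the anti-cheating check, since code that merely memorises the public cases fails the hidden ones.

```python
def composite_reward(passed_public: bool, passed_heldout: bool,
                     runtime_s: float, source: str) -> float:
    """Hypothetical reward-shaping scheme for a code-writing agent."""
    reward = 0.0
    if passed_public:
        reward += 10.0                # the goal: unit tests passing
    if passed_public and not passed_heldout:
        reward -= 8.0                 # likely cheating via hardcoded results
    reward -= 0.1 * runtime_s         # mild pressure toward execution speed
    if "global " in source:
        reward -= 0.5                 # discourage a chosen anti-pattern
    return reward
```

The string match for `global ` is obviously crude; a real implementation would inspect the AST. The weights themselves are exactly the part the comment says is hard to get right.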
This is where the human feedback part comes into play.
It’s definitely not an easy problem. But it’s still more pragmatic than having a human curate the corpus. Particularly considering the end goal (no pun intended) is having an AI-first programming language.
I should close off by saying that I’m very skeptical that there’s any real value in an AI-first PL, so all of this is just a thought experiment rather than something I’d advocate.
With such learning your model needs to be able to provide some kind of solution, or at least approximate one, right off the bat. Otherwise it will keep producing random sequences of tokens and never learn anything, because there will be nothing in its output to reward, and hence no guidance.
I don’t agree it needs to provide a solution off the bat. But I do agree there are some initial weights you need to define.
With an AI-first language, I suspect the primitives would be more similar to assembly or WASM than to something human-readable like Rust or Python. So the pre-training preparation would be a little easier, since there would be fewer syntax errors arising from parser constraints.
I’m not suggesting this would be easy though haha. I think it’s a solvable problem but that doesn’t mean it’s easy.
I've wondered about this too. What would a language look like if it were designed with tokenization in mind? Could you have a denser, more efficient encoding of expressions? At the same time, the language could be more verbose and exacting, because a human wouldn't bemoan reading or writing it.
I have found Grafana to be a decent product, but Prom needs a better horizontally scalable solution. We use Vector and ClickHouse for logging and it works really well.
All of them provide a way to scale monitoring to insane numbers. The difference is in architecture, maintainability and performance. But make your own choices here.
I remember there used to be m3db from Uber, but the project seems pretty dead now.
And there was the Cortex project, mostly maintained by Grafana Labs. But at some point they forked Cortex and named it Mimir. Cortex is now maintained by Amazon and, as I understand it, is powering Amazon Managed Prometheus. However, I would avoid using Cortex exactly because it is now maintained by Amazon.