In that case we should have some sort of UI test backend, I guess? This MCP was more for generic use cases, allowing any TUI framework in any language to work.
I think CS students should force themselves to learn the real thing and write the code themselves, at least for their assignments. I have seen that a lot of recent CS grads who had GPT for most of their CS education basically cannot write proper code, with or without AI.
They can't. Universities will eventually catch up to the demand of companies, just like how the one I attended switched from C/C++ to only managed languages.
With that, the students were a more direct match for the in-demand roles, but the reality is that other roles will see a reduction in supply.
The question here is: Will there be a need in the future for people who can actually code?
I think so. I also believe the field is evolving and that the pendulum always swings to extremes. Right now we are just beginning to see the consequences of the impact of AI on stability & maintainability of software. And we have not seen the impact of when it catastrophically goes wrong.
If you, together with your AI buddy, cannot solve the problem on this giant AI codebase, pulling in a colleague probably isn't going to help anymore.
The amount of code that is now being generated with AI (and accepted because it looks good enough) is causing long-term stability to suffer. What we are seeing is that AI is very eager to make the fixes without any regard for past or future behavior.
Of course, this is partially prevented by better prompts and human reviews. But that is not the direction companies want us to go. They want us to prompt and move on.
AI will very eagerly create 10,000 pipes from a lake to 10,000 houses in need of water. And branch off of them. And again.
Until one day you realize the pipes have lead in them and you need to replace them.
Today this is already hard. With AI it's even harder because there is no unified implementation somewhere. It's all copy pasted for the sake of speed and shipping.
I have yet to see a software engineer who stands behind every line of code produced be faster on net-new development using AI. In fact, most of the time they're slower, because the AI doesn't know the context. And even when they use AI, the outcome is worse because there is less learning: the kind of learning that eventually pushes the boundaries of 'how can we make things better'.
They do have LaMDA, and it is available for testing in their AI Test Kitchen. It seems to handle sensitive and offensive content much better than ChatGPT, for me, but it still cannot perform basic addition the way ChatGPT does. I think it is technically better than ChatGPT, but maybe they are only going to release the perfect product.
Tbf, ChatGPT was far from production quality for serious applications: lots of misinformation, and you can make it produce very offensive content. It is good for toying around, but you cannot take the output seriously.
I think a token effort to avoid offensive content is OK, but ChatGPT should quickly detect when the human wants to go outside the box and allow it. If a human pushes, it means they understand the risks and take full responsibility for the outcome.
This is not how Google's AI Test Kitchen is designed. AI Test Kitchen seems quite boring and very constrained: you can ask what the best Dyson model is, for example, or play the old-style "GPT dungeon game", but it doesn't really go off the rails (this is part of the product spec, sadly :/).
I couldn't disagree more. ChatGPT would be extremely easy to convert to "HateGPT", and would be able to create some pretty powerful and useful political, racial, etc propaganda.
I think it's right that the owners understand what the weaponization of ChatGPT could do and prevent it, and I think we need laws (and fast) before weaponized AI like ChatGPT turns into a disaster for humanity
In my experience it is like working with a genius idiot, the type that refuses to be wrong, which means the shithead (if it were human) requires verification and curation. So what if I need to verify? I do that anyway, because people have imperfect memory, documentation is often old, and who knows what unexpected whatever could be impacting my expectations.
People I know say the LaMDA Kitchen release is unbelievably limited by comparison. A Kitchen session has three sections: the 'Ask a question' prompt is limited to under 100 characters, and the response is like the existing Google Search answer snippets. The 'Make a List' section is just lists, as in short bullet points. And the 'Creative' section is limited to responding with stories involving dogs, which is a little bizarre to say the least.
That famous tech called cgroups was actually a Google contribution. But I agree that k8s is essentially Google's move to make themselves relevant in cloud. They missed the initial opportunity by promoting their PaaS App Engine instead of something IaaS-like such as EC2 at the beginning of the cloud competition, so Google just plays the open-source game and keeps releasing stuff that can be used on all three clouds to lure people to GCP. But then k8s is a very nice piece of tech that lets one manage large clusters without vendor lock-in.
Cgroups, namespaces, AppArmor/SELinux, overlay filesystems: there is much more to containers than just cgroups.
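A quick way to see two of these primitives on any Linux box (a sketch; assumes a mounted /proc, and cgroup membership is shown regardless of v1 or v2):

```shell
# Every process belongs to a cgroup; /proc exposes the membership.
cat /proc/self/cgroup

# Namespaces show up as per-process symlinks (pid, mnt, net, uts, ...).
# A container runtime gives each container its own set of these.
ls -l /proc/self/ns
```

Run it inside and outside a container and compare: the cgroup path and the namespace inode numbers differ, which is most of what "being in a container" means at the kernel level.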
The no-vendor-lock-in story looks great on paper, but you are locked in from day one. (E.g. on AWS you probably use IAM, LBs, and ASGs for your K8s nodes; you can maybe move it to another cloud, but the effort is going to be significant.) Cloud agnosticism is a lie.
Effort will be significant for any global changes for non-trivial software. Significant effort is fine. Can you compare moving something from AWS to GCP for Kubernetes and for something like Lambda+Fargate?
I'd say not only is it a lie, but it's actively a poor strategy to pursue right now, at least not all-out as though you were pursuing some kind of multi-cloud endgame.
At the K8s level, it seems like the introduction of the Gateway API is probably a good level of abstraction to work towards that will keep things about as flexible as possible without all of the insanity that comes with going beyond that to keep everything 100% vendor neutral.
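As a rough sketch of what that abstraction looks like (the Gateway `web-gw` and Service `app-svc` names are made up for illustration), a Gateway API HTTPRoute is plain Kubernetes YAML with no cloud-specific annotations:

```shell
# Hypothetical HTTPRoute manifest using the Kubernetes Gateway API.
# Assumes a Gateway named "web-gw" and a Service "app-svc" exist.
cat > httproute.yaml <<'EOF'
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
spec:
  parentRefs:
    - name: web-gw          # the Gateway this route attaches to
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /api
      backendRefs:
        - name: app-svc     # a plain Service, no vendor annotations
          port: 8080
EOF
# kubectl apply -f httproute.yaml   # needs a cluster with the Gateway API CRDs
```

The cloud-specific bits (which load balancer actually fronts `web-gw`) live in the Gateway/GatewayClass objects, so the routes themselves stay portable.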
Some of the phrasing of this post is very strange. I don't know AWS and Azure that well, so I might be off here, but every time I look over into those ecosystems it's hard not to come to the conclusion that GCP is literally several years ahead in terms of what they are offering.
Even just sticking strictly to the K8s space, there is nothing even close to matching GKE standalone, let alone GKE autopilot that I am aware of. In the world of serverless the gap looks to be even bigger.
I couldn't imagine a situation where I would even consider something else in a greenfield project / company honestly.