zamubafoo's comments

Honest question: Given that the only widely agreed-upon example of anything approaching general intelligence is humans, and that humans are biological systems that evolved in physical reality, are there any arguments that substantially better efficiency is even possible without leveraging the nature of physical reality?

For example, analog computers can differentiate near-instantly by leveraging the nature of electromagnetism, and you can compute very basic analogs of complex equations just by connecting containers of water together in certain (very specific) configurations. Are we sure the optimizations needed to get us to AGI are possible without exploiting the physical nature of the world? This is without even touching the hot mess that is quantum mechanics and its role in chemistry, which in turn affects biology. I wouldn't put it past evolution to have stumbled upon some quantum-mechanical effect that allowed for the emergence of general intelligence.
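
To make that concrete (standard first-order circuit analysis; my illustration, not tied to any particular build): the voltage across the resistor of an RC high-pass network tracks the derivative of the input essentially instantly, with no stepwise computation at all:

    V_{\text{out}}(t) \approx RC \, \frac{dV_{\text{in}}(t)}{dt},
    \qquad \text{valid when } RC \ll T_{\text{signal}}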

I'm super interested in anything discussing this but have very limited exposure to the literature in this space.


The advantage of artificial intelligence doesn't even need to be energy efficiency. We are pretty good at generating energy; if we had human-level AI, even one that used an order of magnitude more energy than a human does, it would likely still be cheaper than a human.
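
Rough numbers to back that up (all figures here are my own assumptions, not established estimates of AI running costs): the human brain draws about 20 W, so an order of magnitude more is ~200 W, which is cheap at typical electricity prices.

    # Back-of-the-envelope cost of a hypothetical human-level AI drawing
    # ten times the human brain's ~20 W. All figures are assumptions.
    BRAIN_WATTS = 20
    AI_WATTS = 10 * BRAIN_WATTS
    PRICE_PER_KWH = 0.15  # assumed USD electricity price

    kwh_per_day = AI_WATTS * 24 / 1000
    print(f"{kwh_per_day:.1f} kWh/day -> ${kwh_per_day * PRICE_PER_KWH:.2f}/day")
    # 4.8 kWh/day -> $0.72/day, far below the cost of employing a human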


Inference is already wasteful (compared to humans), but training is absurd. There's strong reason to believe we can do better, even before we've figured out how.


That would mean with current resources AI can get so much more intelligent than humans, right? Aren't you scared?


That's a potential outcome of any increase in training efficiency.

Which we should expect, given prior experience with other AI breakthroughs: first we learn to do a thing at all, and then we learn to do it efficiently.

E.g., Deep Blue in 1997 was IBM showing off a supercomputer more than it was any kind of reasonably efficient algorithm, but the efficient algorithms came over the next 20-30 years.


Shame they'll never do it for Warcraft 3 with the remaster still around.


I've been using https://github.com/DNSCrypt/doh-server to serve my DNS server over DoH for at least two years. I've only had two issues with it, and both were due to lack of maintenance on my part (i.e., not updating the binary in one case, and not re-configuring it after I changed the upstream DNS configuration in the other).
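
For anyone curious what's happening on the wire, here's a minimal RFC 8484 client-side sketch (my illustration, not my actual setup; the resolver URL is a placeholder, and it assumes the dnspython and requests packages):

    import dns.message  # pip install dnspython
    import requests

    DOH_URL = "https://dns.example.com/dns-query"  # placeholder endpoint

    # Build a raw DNS query, POST it as application/dns-message, parse the reply.
    query = dns.message.make_query("news.ycombinator.com", "A")
    resp = requests.post(
        DOH_URL,
        data=query.to_wire(),
        headers={"Content-Type": "application/dns-message"},
        timeout=5,
    )
    print(dns.message.from_wire(resp.content).answer)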


In production environments that won't give you root access, you won't be exec'ing inside of a pod if you aren't an operator or sysadmin.


In my particular case, I am an operator and sysadmin, but I don't give myself root privileges without going through some serious hoops, which I only jump through if I truly need to. If I want root, I have to actually change the Kubernetes manifest YAML to allow elevation to root privileges. That's not something that can be done without getting others involved for code reviews and whatnot.
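
For illustration, the kind of change I mean looks like this when sketched with the official Kubernetes Python client (the pod is hypothetical; the security-context fields are the real ones, to the best of my knowledge):

    from kubernetes import client

    # The change that triggers code review: a container spec that elevates to root.
    container = client.V1Container(
        name="debug-shell",  # hypothetical container
        image="busybox",
        security_context=client.V1SecurityContext(
            run_as_user=0,                    # run as root
            allow_privilege_escalation=True,  # normally locked down to False
        ),
    )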

However, even in the case of general developers, it isn't universally true. Some companies do restrict exec abilities, but we don't. Many startups are the same, because developers are expected to troubleshoot and debug production issues too. If you don't allow shells in pods, you are really tying the hands of your devs.

To be clear, I am not disagreeing with you. You are correct in many cases. But there are a number of exceptions in my experience.


No, you are wrong. I would. The pod would be mine though.


I've thought a lot about these problems, and you eventually hit the need for stronger-than-natural magnets. Without electricity that's a hard challenge, but without magnets, creating electricity at even a simple bench scale is a lot harder.

I ended up concluding that you'd need a chemical battery to bootstrap electricity, then use that electricity to drive an electromagnet to create stronger magnets, and iterate from there.
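
For a sense of scale (my own rough figures, assuming copper-zinc Daniell cells at roughly 1.1 V apiece): reaching a useful voltage for an electromagnet is just a matter of stacking cells in series.

    import math

    # Rough sizing of a bootstrap battery built from Daniell cells.
    CELL_VOLTS = 1.1     # approximate voltage of one copper-zinc cell
    TARGET_VOLTS = 12.0  # arbitrary target for a bench electromagnet

    print(math.ceil(TARGET_VOLTS / CELL_VOLTS), "cells in series")  # 11 cells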

Your next stumbling block from there would be optics, as everything else can be made with horrible tolerances. Even lathes and similar machinery can be built to pretty good tolerances without optics. But when you start needing timekeeping, or miniaturized components for improved efficiency, it becomes a blocking issue.

You also need to discover photo-reactive compounds to do lithography, but that's a lot easier since it's just silver nitrate, and you'd already have the ingredients from working towards the initial bootstrap battery.


Would you need to rediscover the periodic table of elements and atomic theory in your version of things? There's a lot of scientific learning we take for granted that is actually important when building a new civilization from scratch.


How far can you get? There is a lot I know how to do but won't have time to create before I die.


I made something like this because I was tired of the asymmetric nature of data collection on the Internet. It's still not where I would like it to be, but it's been really nice to be able to treat my browsing history as any old log that I can query. Tools like Dogsheep are nice, but they tend to rely on the platform allowing data to be exported. This bypasses those limits by just doing it on the client.

This lets me create dashboards to see usage for certain topics. For example, I have a "Dev Browser" which tracks the latest sites I've visited that are related to development topics [1]. I similarly have a few for all the online reading I do: one for blogs, one for fanfiction, and one for webfiction in general.
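
A sketch of the kind of query behind one of those panels (schema and table names here are hypothetical stand-ins, not my actual setup):

    import psycopg2

    # Hypothetical "Dev Browser" panel: latest visits to development-related sites.
    conn = psycopg2.connect("dbname=browsing")
    with conn, conn.cursor() as cur:
        cur.execute(
            """
            SELECT url, title, visited_at
            FROM page_visits
            WHERE url ~ 'github\\.com|stackoverflow\\.com|docs\\.'
            ORDER BY visited_at DESC
            LIMIT 20
            """
        )
        for url, title, visited_at in cur.fetchall():
            print(visited_at, title, url)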

I've talked about my first iteration before on here [2].

My second iteration ended up as a userscript which sends data on the sites I visit to a Vector instance (no affiliation; [3]). Vector is in there because for certain sites (i.e., those behind draconian Cloudflare configurations), I want to save a local copy of the page. Vector can pop that field off, save it to a local MinIO instance, and at the same time push the rest of the record to something like Grafana Loki and Postgres, all while being very fast.

I've started looking into a third iteration using mitmproxy. It helps a lot with saving local copies, since that happens outside of the browser, so I don't feel the hitch when a page is inordinately heavy for whatever reason. It's also very nice that it would work with all browsers just by setting a proxy, which means I could set it up for my phone, either as a normal proxy or as a WireGuard "transparent" proxy. I'd only need to set up certificates for it to work.
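
A minimal sketch of what that addon could look like (my illustration of the approach rather than finished code; the log path is a placeholder):

    # addon.py -- run with: mitmdump -s addon.py
    import json
    import time

    from mitmproxy import http

    LOG_PATH = "/tmp/browsing.jsonl"  # placeholder destination

    class HistoryLogger:
        def response(self, flow: http.HTTPFlow) -> None:
            # Record only successful HTML page loads as JSON lines.
            ctype = flow.response.headers.get("content-type", "")
            if flow.response.status_code == 200 and "text/html" in ctype:
                record = {
                    "ts": time.time(),
                    "url": flow.request.pretty_url,
                    "size": len(flow.response.content or b""),
                }
                with open(LOG_PATH, "a") as f:
                    f.write(json.dumps(record) + "\n")

    addons = [HistoryLogger()]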

---

[1] https://raw.githubusercontent.com/zamu-flowerpot/zamu-flower...

[2] https://news.ycombinator.com/item?id=31429221

[3] http://vector.dev


Or it could all just happen on the client side before it even hits the Internet. I would love it if Firefox allowed users to use Postgres instead of SQLite for their places.sqlite database.
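
In the meantime it's at least directly queryable; a quick sketch (the profile path is a placeholder, and you should copy the file first since Firefox keeps it locked while running):

    import sqlite3

    DB = "/path/to/profile/places.sqlite"  # placeholder profile path

    # moz_places stores last_visit_date in microseconds since the epoch.
    conn = sqlite3.connect(DB)
    rows = conn.execute(
        """
        SELECT url, title, datetime(last_visit_date / 1000000, 'unixepoch')
        FROM moz_places
        WHERE last_visit_date IS NOT NULL
        ORDER BY last_visit_date DESC
        LIMIT 10
        """
    ).fetchall()
    for url, title, visited in rows:
        print(visited, title, url)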


No, but I also believe it's not so expensive that someone couldn't cover it if it were their hobby. This is doubly true for something like HN, which isn't trying to scale to infinity.


The right pattern is to put them directly in a queue to talk to a person, but have a system (AI or otherwise) in the queue to gather the minimal information. Like having the person explain the problem (and having something transcribe it), then having the system transfer them to the appropriate team after parsing their problem.

Or, for really common cases (i.e., turn it off and on again, you're affected by an outage, etc.), redirect them to a prerecorded message and then let them know that they are still in the queue and can wait for a person. Nine times out of ten it'll solve everything, while also reducing the friction of simple things that can be answered right away.
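
In Python-flavored pseudocode, the pattern I mean is roughly this (every helper here is a hypothetical stand-in for a real telephony/IVR stack):

    CANNED = {"outage": "outage.wav", "power_cycle": "restart.wav"}

    def transcribe(audio):  # stand-in for a speech-to-text service
        return "my internet is down"

    def classify(text):     # stand-in for AI-or-otherwise routing logic
        return ("network_support", "outage")

    def handle_call(call, queue):
        queue.append(call)               # caller holds a queue spot from the start
        team, case = classify(transcribe(call["description"]))
        if case in CANNED:
            call["play"] = CANNED[case]  # canned answer, but they keep their spot
        call["route_to"] = team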


Yes, though you do have to enable that. I tend to have it buffer the video; that way I can even seek backwards in web streams.

