I'm a little confused as to why projects like this support macOS since at a minimum it's a TOS violation.

Doing stuff like this, and integrating it into the main project puts the whole thing at risk.

The only real reason to use MacOS is its tight integration with Mac hardware.

Weird flex...


I don't see any legitimate existential risk for the devs or the project.

ToS are only relevant for those who are a party to it - this is the user's responsibility.

Historically, while Apple is protective of their IP, they have not acted like Nintendo in court with regard to emulators, Hackintosh, and such.

Weird concern unless you can point to actual threats or precedent.

> Weird flex

On the contrary, it's a legitimately useful feature that has popular demand.


It's arguably violating Apple's IP rights; they have taken down similar projects.

https://www.xda-developers.com/i-tried-running-macos-inside-...

The article mentions that a Docker version of this got a DMCA takedown.

Apple does not license MacOS for use on hardware they don't sell. It can be argued this feature does not have any legitimate functionality.

At the same time, if you decide to violate the license terms of OSX, it should be done in a separate fork.

GitHub will just delete the whole project if Apple ever catches wind of it and complains. The DMCA isn't exactly a court proceeding, usually the content host determines the juice isn't worth the squeeze.

That sucks for everyone who decides to use it for legitimate purposes.

To be blunt, Apple gatekeeps their software and build tools behind expensive hardware. If you disagree, use different software.

Edit: Given I'm 90% sure they aren't running Arm OSX anyway, this is going to be irrelevant in about 2 years.


> Apple does not license MacOS for use on hardware they don't sell. It can be argued this feature does not have any legitimate functionality.

Anymore. They don’t license it anymore. And the answer is money. So much money it is part of one of SV’s favorite success stories.

They don’t appear to give a shit about hobby use. If you are running macOS commercially on non-Apple hardware, they will figuratively murder you.

And you’re allowed to run macOS VMs on top of macOS hosts, so the functionality is sound in that context.


> I'm a little confused as to why projects like this support macOS since at a minimum it's a TOS violation.

As others have pointed out, emulating macOS is only a ToS violation if done on non-Apple hardware, and this tool runs on macOS. There is a legitimate use case for running a macOS VM on macOS.

Sure, you could use some Apple-provided emulation tool instead of QEMU, but that's a matter of choice, not a ToS violation.

Disclaimer: I'm not a lawyer. This is not legal advice.


It’s for running MacOS on Apple hardware. Apple has their own container tech now; QEMU just makes it easy to run everything else, including MacOS.

But an emulator should be able to run it on any other hardware where the emulator suite runs, imho.

Otherwise it's not an emulator but some kind of pass-through mechanism.


weird reaction

It's just a cool piece of merch to me.

Artists make no money off streaming. This is a real artifact I get to own, keep sealed, and maybe get signed.

I did have the unfortunate experience of buying a D12 Devil's Night vinyl, only to find the cover image looked like some intern copied it off Google Images.


Nothing wrong with a self interested donation.

He donates, the engine remains a high quality tool, he doesn't have to write the whole stack himself.

There's a FlatRedBall fork that can even build C# to the web. Hopefully these solutions can be shared with Godot so their C# web export works.


I was more thinking these data centers will become homeless shelters.

Or, we can expect better from software. Maybe someone can fork Firefox and make it run better and hard-cap how much a browser window can use.

The pattern of lazy, almost nonexistent optimization, combined with blaming consumers for having weak hardware, needs to stop.

On my 16GB RAM Lunar Lake budget laptop, CachyOS (Arch-based) runs so much smoother than Windows.

This is very unscientific, but according to htop, with Chrome playing YouTube music, 2 browser games, and VS Code running GitHub Copilot to review a small project, I was only using 6 GB of RAM.

For the most part I suspect I could do normal consumer stuff (filing paperwork and watching cat videos) on an 8GB laptop just fine. Assuming I'm using Linux.

All this Windows 11 bloat makes computers slower than they should be. A part of me hopes this pushes Microsoft to at least create a low-RAM mode that just runs the OS and display manager, then lets me use my computer as I see fit instead of constantly doing a million other weird things.

We don't *need* more RAM. We need better software.


> hopes this pushes Microsoft to at least create a low-RAM mode

Windows OS and Surface (CoPilot AI-optimized) hardware have been combined in the "Windows + Devices" division.

> We don't *need* more RAM

RAM and SSDs both use memory wafers and are equally affected by wafer hoarding, strategic supply reductions and market price manipulation.

Nvidia is re-inventing Optane for AI storage with higher IOPS, and paid $20B for Groq LPUs using SRAM for high memory bandwidth.

The architectural road ahead has tiers of memory, storage and high-speed networking, which could benefit AI & many other workloads. How will industry use the "peace dividend" of the AI wars? https://www.forbes.com/sites/robtoews/2020/08/30/the-peace-d...

  The rapid growth of the mobile market in the late 2000s and early 2010s led to a burst of technological progress..  core technologies like GPS, cameras, microprocessors, batteries, sensors and memory became dramatically cheaper, smaller and better-performing.. This wave of innovation has had tremendous second-order impacts on the economy. Over the past decade, these technologies have spilled over from the smartphone market to transform industries from satellites to wearables, from drones to electric vehicles.

> RAM and SSDs both use NAND flash and are equally affected by wafer hoarding, strategic supply reductions and market price manipulation.

Why on earth do you think RAM uses NAND flash?


Sorry, still editing long comment, s/NAND flash/memory wafers/.

> Maybe someone can fork Firefox and make it run better and hard-cap how much a browser window can use

Browsing the web requires more and more RAM each year, but I don't think browsers are the main reason - sites use more and more JS code. With a hard cap, many sites will stop working. Software bloat is a natural tendency, the path of least resistance. Trimming weight requires a significant effort and, in the case of the web, a coordinated effort. I don't believe it could happen unless Google (with a browser at >60% market share) forces it, but Google's own sites are among the worst offenders in terms of hardware requirements.


> I was only using 6 GB of RAM.

Insane that this is seen as "better software". I could do basically the same things in 2000 with 512MB. I assume this is because everything runs through Chrome with dozens more layers of abstraction.


More like 128MB.

512MB in 2000 was like HEDT level (though I'm not sure that acronym existed back then)


512MB wasn't that odd for multimedia PCs from 2002, barely a couple of years later. By 2002, 256MB of RAM was the standard, almost the new low-end PC.

64MB = W98SE OK; XP will swap a lot under high load; nixlikes really fast with fvwm/wmaker and the like. KDE3 needs 128MB to run well, so expect it to slow down a bit. No issues with old XFCE releases. Mozilla will crawl; other browsers will run fine.

128MB = W98SE runs really well; XP will run fine, SP2-3 will lag. Nixlikes will fly with wmaker/icewm/fvwm/blackbox and the like. Good enough for Mozilla.

192MB = Really decent for a full KDE3 desktop or for Windows XP at real-life speeds.

256MB = Like having 8GB today for Windows 10, GNOME 3 or Plasma 6. Yes, you can run them with 2GB and zram, but realistically, with the modern bloated tools, 8GB for a 1080p desktop is mandatory, even with uBlock Origin in the browser. Ditto back in the day: with 256MB, XP and KDE3 flew, and they ran much faster than even Win98 with 192MB of RAM.


Win10 can work with 8GB DDR4 RAM.

Win11, on the other hand, meh..

Win10 will stop getting updates, but M$ is mistaken if it thinks it can force customers to switch to the more expensive, buggy, poorly performing Win11.

That's why I switched to Linux on my old PC (a cute little Sony Vaio), though it worked well with Win10. Especially after I upgraded it with a 1TB SATA SSD (even an old SATA 1.0 socket works with newer SATA SSDs, since the SATA interface is backward compatible; it felt awesome to see a new SSD work perfectly in a 15-year-old laptop), some additional RAM (24GB (8+16) - 16GB repurposed from another PC), and a new battery (from Amazon - it was simply plug and play: eject the old battery from its slot and plug in the new one).

I find it refreshing to see how easy it was to upgrade old PCs. I think manufacturers are deliberately making it harder to repair devices, especially mobile phones. That's why the EU and India were forced to mandate the Right to Repair.


You're right. I thought I was misremembering....

Let's not let perfect be the enemy of good.

I was being lazy; but optimized, I guess I could get down to 4GB of RAM.


> Or, we can expect better from software. Maybe someone can fork Firefox and make it run better and hard-cap how much a browser window can use.

You can already do this. For example, I use `systemd-run` to run browsers with CPU quotas applied. Firefox gets 400% CPU (i.e. up to 4 cores), and no more.

Example command: systemd-run --user --scope -p CPUQuota=400% firefox
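The same mechanism can put a hard cap on memory, closer to what the parent comment asked for; a sketch, assuming a cgroup-v2 system where user scopes can set memory limits (the 2G value is just an example):

  systemd-run --user --scope -p MemoryMax=2G firefox

With MemoryMax set, the kernel's cgroup OOM killer steps in if the browser exceeds the cap.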


You can impose CPU restrictions in Windows 10 or 11 too...

You can limit CPU usage in Windows by adjusting the "Maximum processor state" in the power options to a lower percentage, such as 80% (note this is a system-wide power setting, not per-program). Additionally, you can set a program's CPU affinity in Task Manager. Please note this only affects process scheduling.

You can also use a free tool like Process Lasso or BES to limit CPU for a Windows application. You can use free tools like HWiNFO or Sysinternals (ProcMon, Sysmon, ProcDump) to monitor CPU usage, especially to investigate CPU spikes caused by rogue (malware or poorly performing) apps.
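If you'd rather script the affinity route than click through Task Manager, a minimal PowerShell sketch (the process name and core mask are just example values):

  # Pin every firefox process to the first four logical cores (mask 0xF = cores 0-3).
  Get-Process firefox | ForEach-Object { $_.ProcessorAffinity = 0xF }

ProcessorAffinity is a bitmask, so each set bit enables one logical core.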


CPU affinity? I haven't been able to change priority in Task Manager since Windows 8, I think. CPU affinity only seems to let you choose which cores get assigned... not really good management.

Process Lasso worked for me a few years back when I needed to restrict CPU cores for an old program.

Yeah I'm sure that will happen, just like prices will go back down when the stupid tariffs are gone.

That's not happening though, hence why we need more RAM.

Eh? As I see it, we've got options.

Option A: We do a better job at optimizing software so that good performance requires less RAM than might otherwise be required

Option B: We wish that things were different, such that additional RAM were a viable option like it has been at many times in the past.

Option C: We use our time-benders to hop to a different timeline where this is all sorted more favorably (hopefully one where the Ballchinians are friendly)

---

To evaluate these in no particular order:

Option B doesn't sound very fruitful. I mean: It can be fun to wish, but magical thinking doesn't usually get very far.

Option C sounds fun, but my time-bender got roached after the last jump and the version of Costco we have here doesn't sell them. (Maybe someone else has a working one, but they seem to be pretty rare here.)

That leaves option A: Optimize the software once, and duplicate that optimized software to whomever it is useful using that "Internet" thing that the cool kids were talking about back in the 1980s.


There is plenty of well-optimised software out there already; hopefully a RAM shortage can encourage people to seek it out. Would be nice if there were some well-curated lists of apps. Sort of like suckless, but perhaps a little less extreme. A long-standing problem in the software industry is developers having insanely overspecced machines and fat internet pipes, leading to performance issues going unnoticed by the people who should be fixing them. The claim that they need that power to run their code editor and compiler is really only a need for code editors and compilers that suck less. I've always run a 10-year-old machine (I'm cheap) and had the expectation that my debug builds run acceptably fast!

Stride has a fraction of the features of Unity.

Godot is the only real open-source competitor, and their C# support is spotty. If I can't build to the web it's useless for game jams, as no one should be downloading and running random binaries.

A real sandbox solution with actual GPU support is needed.


Writing C# in Godot is a bad choice. Use GDScript and directly write C++ as a module. Skip GDExtension completely. Godot's build system is easy enough to use. Just add new classes to the engine using C++ if you don't want to use GDScript. The GDScript workflow is honestly great. Using C# is like the worst of all worlds.

GDScript is not very maintainable as the code base grows. It lacks proper refactoring tools (e.g. the ones from JetBrains Rider), static type checking, a flexible object system, and many 3rd-party libraries which might be needed.

My main point is: if GDScript isn't good enough, go straight to C++ directly in the engine.

I won't even get into how big the projects I've successfully written in GDScript are.


I don't want to do manual memory management and pointer handling

I don't want to have any sort of undefined behavior

I want to have a quick code reload button in the editor

I want to still be able to rely on the engine's official documentation with examples, like it is with GDScript and C#


You likely won't need to do manual memory management nor think about undefined behavior. If you're writing basic C++ to handle the simulation in a game, it's going to be pretty orthodox, and these problems likely won't manifest.

The purpose of recommending C++ here is:

If GDScript is too slow, reach directly for C++.

I'm specifically recommending GDScript over C# for ease of use and C++ over C# for performance.


I don't like C++.

It's very difficult for me; I generally stick to high-level stuff: C#, JavaScript, Python, Dart, etc.


If you can code in C#, how is C++ difficult? Are pointers and the STL that difficult?

Not denigrating, genuine question.


One word.

Headers.

That's just the start. The C++ build system and package managers are the stuff of nightmares. Modern languages are significantly easier to use.

Don't get me wrong, if you offer a job with a 200k base salary and give me 6 months to learn C++ I'll do it. But I won't enjoy it, and I definitely won't do it as a hobby.


If you use an existing template (and are willing to use SCons), GDExtension doesn't really have the standard build problems of rigging everything up with CMake etc., in my experience. The template is set up to handle the "set up the build" problem for you. Still have the header problem though, cannot deny that one.

You're asking if going from a high level language to a low level language on steroids is difficult?

Pointers, manual memory management, and the design by committee standard library are not fun.


I write mostly backend stuff for a living, a big chunk of it in Node/TS but also C# with modern .NET. I also have to dabble with Unity and Unreal, both for work and a hobby project. I technically learned C++ in uni, but really, I hate every single second I have to spend doing Unreal Engine work. I genuinely despise how obsolete and hard to write C++ is compared to modern languages. It didn't bother me in university, because that was all I knew at the time, but it's hard to justify the existence of header files today. Add macros everywhere, really bad compilation errors, and really hard-to-read syntax with a lot of unnecessary bloat, and you get something that is just not likable. I'm sure you can get used to it given enough time spent in that ecosystem, but I can tell you as someone writing 4 different languages on a day-to-day basis, C++ is difficult, and it's not because of pointers.

Yeah that's why GDScript is great.

You only dabble in the C++ for the sliver of the project that needs it. 90% of game development is animating stuff and user interface development. GDScript is great for that.


Java was designed to make fun of C++, then C# was designed to make fun of Java, so you're missing two layers of fun.

I can tolerate Java. I've worked a Java dev role recently.

I think it's overly verbose and probably has a lot of unneeded legacy stuff, but in terms of making money so I can afford donuts, it's not bad.

My personal favorite language is probably Dart, but it's basically useless outside of Flutter and I don't have faith in Google to keep Flutter going.

I don't like low-level programming. My dream language is literally an LLM that processes a markdown document where I sorta just describe what I want to happen, with options to call defined functions in a normal programming language.

If I had money (VC money) I'd be working on this.


Problem is, GDScript still doesn't have at least a JIT, and the whole GDExtension boilerplate seems more complicated than it needs to be.

"not having a JIT" is not a problem, it's you speculating that a JIT will improve performance, the real problem is "GDScript has poor performance", which in this context (me saying C# in godot sucks) is you speculating that C#'s performance is better than GDScripts. Do you have any data to back that claim up? Like real world data from a real product? Or are you just speculating with vibes?

If performance is a concern, skip C# and go straight to C++. Now you're ripping at max speed with the smallest binary! That's my whole point. GDScript + C++ is my point. Ditch C#; it's not worth the squeeze.


Some people like nice tooling with performance.

Interpreted code is never faster than a dynamic compiler; otherwise, what about doing games in Python?

As mentioned in my comment, the GDExtension experience kind of sucks.


Lots of excellent games are written in Python, Lua, JavaScript and GDScript. These abstract criticisms are just that, abstract.

I'm offering real practical advice from experience of having worked on real projects.

I'll make it real clear:

GDScript & c++ > C#


Are there technical reasons to prefer GDScript over C#?

GDScript is undoubtedly better integrated in the engine, but I would have expected C# to compare more favorably in larger projects than the game-jam-sized projects I have made.



I don't see how this article could possibly support the argument that C# is slower than GDScript

It compares several C# implementations of raycasts, never directly compares with GDScript, blames the C# performance on GDScript compatibility, and has a struck-out section advocating dropping GDScript to improve C# performance!

Meanwhile, Godot's official documentation[1] actually does explicitly compare C# and GDScript, unlike the article, which just blames GDScript for C#'s numbers, claiming that C# wins in raw compute while having higher overhead calling into the engine.

[1]: https://docs.godotengine.org/en/stable/about/faq.html#doc-fa...


My post could have been a bit longer. It seems to have been misunderstood.

I use GDScript because it’s currently the best supported language in Godot. Most of the ecosystem is GDScript. C# feels a bit bolted-on. (See: binding overhead) If the situation were reversed, I’d be using C#. That’s one technical reason to prefer GDScript. But you’re free to choose C# for any number of reasons, I’m just trying to answer the question.


At least in my case, I got curious about the strength of /u/dustbunny's denouncement of Godot+C#.

I would have have put it as a matter of preference/right tool with GDScripts tighter engine integration contrasted with C#'s stronger tooling and available ecosystem.

But with how it was phrased, it didn't sound like expressing a preference for GDScript+C++ over C# or C#++, it sounded like C# had some fatal flaw. And that of course makes me curious. Was it a slightly awkward phrasing, or does C# Godot have some serious footgun I'm unaware of?


Makes sense! I think dustbunny said it best: C# is “not worth the squeeze” specifically in Godot, and specifically if you’re going for performance. But maybe that’ll change soon, who knows. The engine is still improving at a good clip.

I think WebAssembly could become that sandboxed solution. .NET Blazor WASM is moving away from Mono to CoreCLR (just like Unity, with an early preview in late 2026). WASM now has really good SIMD support, native GC, and large memory support, and with WebGPU we could finally see some WASM games running at native speeds.

Wait, what, WASM has native GC now?

It's still not JS-level/JS-compatible GC (yet?) and it is still quite low level (more about corralling buffers of bytes than objects, a bit closer to OS-level page management than JS or C# level GC), as it is intended to be lower level than most languages need so that different languages can build different things with it. It is also a tiny stepping stone to better memory sharing with JS APIs (and the eventual goal of WASM "direct DOM"), but still not quite finished on that front as more steps remain.

Godot 4 C# web export is coming soon: https://github.com/godotengine/godot/pull/106125

We'll see when it actually ships.

I've seen this issue before; they're making progress, but there's no firm release date.

Plus you then have to do extensive testing to see what works in web builds and what doesn't. I REALLY enjoy vibe coding in Godot, but it's still behind Unity in a lot of ways.


Naive question, but why not use GDScript? I haven't had any issues with it.

I'll vibe code stuff in GDScript, and I guess I more or less understand some of it.

But C# is what I've used for over a decade. C# has vastly better IDE support. It's a more complete language.

Plus a lot of C# assets/libraries don't really have GDScript counterparts.


I'll add that C# has better performance than GDScript. It doesn't make a difference for most of the things you code in a game, but it comes in handy when needed.

I'd caveat this slightly as "it depends"

For mathy stuff, 100% C# is going to be better. But if you need to round-trip to the engine a lot, getting stuff in and out of the .NET heap can actually hurt performance. You also have to be _really_ careful, because there are a lot of cases where you generate accidental garbage (the biggest one is strings that get implicitly converted to StringNames every time you call a function; you can avoid this by pre-generating them as static readonly fields, but I've run into a fair few people who never ran dotMemory or the like to see the issues).


Yes, it took me 2 years to see how much garbage string-to-StringName conversion generates, and what a fool I was calling something like Input.IsActionPressed("move_right") every frame (sadly, it's the example given in the input documentation).

Yup. I remember running dotMemory on a whim and being confused by all the StringNames until I noticed what was in them. They really should put it in the docs to just make a cached StringName somewhere. I use a global static class for anything I want in multiple files. But I also tend to just use statics instead of autoloads if I'm doing everything in C#.
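To make that concrete, a minimal sketch of the cached-StringName pattern described above (the class and action names are just examples):

  using Godot;

  // Cache StringNames once instead of rebuilding them from C# strings on
  // every call (each implicit string -> StringName conversion allocates).
  public static class Actions
  {
      public static readonly StringName MoveRight = new("move_right");
      public static readonly StringName Jump = new("jump");
  }

  public partial class Player : CharacterBody2D
  {
      public override void _Process(double delta)
      {
          // No per-frame string -> StringName garbage here.
          if (Input.IsActionPressed(Actions.MoveRight))
          {
              // ... move the player ...
          }
      }
  }

The readonly fields are allocated once at startup, so Input.IsActionPressed no longer converts a fresh string every frame.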

What's the end game?

AI slop ads for dating apps full of AI chatbots, YouTube watched by AI bots.

I was a bit surprised Spain has the most subscribers to AI slop. Kinda weird considering its population size compared to the US.


The end game is eliminating the last remaining human element in the engagement optimization pipeline, so that the corporations can control 100% of it.

Platforms like YouTube and TikTok already have almost full control of how the majority of users spend their time on their platforms. They open the app, they immediately get a feed of content algorithmically selected to keep them on the app for as long as possible. They don't need to search, they don't need to think about what they want to watch, they just consume. Fully automated consumption with 0 human effort involved.

Well, almost. There's one last thing remaining: you still need humans to produce the content that you then put on people's feeds. Or rather, needed. Now that the actual production can also be automated, those platforms no longer need to put effort into finding existing human-created content that will keep people watching - they can just generate new, algorithmically perfect content. This is their endgame.


Other advantages to generating content: (1) fewer copyright issues, (2) no creators to pay, just GPU bills scaling with the use of the platform, and (3) a much smaller critical mass.

Given these advantages I expect the current "social media" to be replaced with a new one, rather than them pivoting. The next big thing after tiktok might be something that only has generated content, where a last final bit of "social" is taken out of "social media".


Generating stuff is very cheap compared to building and training the model. When you have your model done, you're incentivized to use it as much as possible. Maybe even considering the sunk costs.

Why can't it be the actual TikTok? By simply winning the competition with humans, i.e. the vast majority of human creators see their pay drop too low to bother continuing.

The end game is that computer generation beats human content for a subset of the population who becomes accessible to advertisers and propagandists through model alignment, and unresponsive to word-of-mouth.

> What's the end game?

What makes you think that there is an "end game"?

Someone figured out how to make computers be able to create content that is costly to distinguish from human-made content. Someone else is using it to pump out AI slop in the hopes that it will make them a quick buck. The platform becomes unusable for anyone that values their own sanity. No "end game" to be found.

AI will be the worst thing that happened to society in a very long time.


IDK about the worst thing.

To flip it around, AI can help a doctor find a cure; the other day I used AI to help translate and assist a non-English speaker in finding the right bus.

AI is just a tool. It's up to us to determine how it's used.

I know I'm not using any service or product that's so lazy as to use AI slop to advertise. Especially if it's clearly AI with jumbled text.


There is no “end game” it’s just hustlers out for a buck for themselves — YouTube, the slop makers, the ad companies, the bots scraping videos, all of em.

Same as it ever was. When this cash cow proves worthless or runs out it’ll be another thing.


YouTube always had a slop problem. Previously it took time, people, money and effort to create a slop channel. That’s why it was mostly run by content farms based out of Eastern Europe - for example, 5-Minute Crafts, based out of Cyprus. With AI, creating slop and content farming is easier than ever. Now it can be run at scale by anyone.

As such the only purpose is to make money.

YouTube has started banning some AI channels, and maybe with time will put serious restrictions in place. Or maybe not, at least not for big channels, because 5-Minute Crafts continues to exist despite producing the most bizarre, outrageous and downright dangerous content. YouTube needs that sweet advertisement money.


It's channels from Spain, so presumably it appeals to Spanish-speaking countries.

The 'slop' is generally at either extreme of video length: either shorts or multi-hour videos.

Shorts get paid by the view; people put on long videos to fall asleep to, and YouTube Premium does a rev share based on the premium user's watch time.

This is why you have 10-hour playlists and white noise videos.


I have noticed people on the subway watching them. I sneaked a peek at a few of their phones, and it was legit AI slop with clear signs (to me at least) that it was AI-generated. The end user (viewer?) seemed hooked, but they are mostly shorts (10-20 sec videos) and you can see their fingers swiping to the next one.

The other day my mother asked me if I had watched some random AI slop (Putin getting in a physical fight with Trump), and when I asked her why she watches this stuff, her answer was that it comes up in her feed. She said it was funny.

I don't know what to make of any of this.


People do not want reality; they want comfort.

I'm observing this with my dad. He watched some genuine history documentaries on YouTube, and now the feed keeps showing him AI slop history videos. You know the type: an hour long, overdramatic AI voice with pronunciation issues, barely relevant stock footage, loads of factual mistakes and nonsensical sentences, epic Hollywood music on a loop...

He knows it's crap, keeps making fun of the horrible AI voice, shaking his head about all the vapid bullshit it's saying, and yet, he keeps watching them.

I'm also at a loss.


Getting screwed over is a part of life.

The problem imo is when you actively lie to me. I won't go into specifics, but I was lied to, said F 'em, went the legal route, and got smacked down.

It's not even worth a name and shame. Just sip a shot of whiskey and try to move on. This is why I like contract jobs. Ain't no equity. It's much more honest.


If you need a job for things like food and housing, a startup is cool.

I fully expect to be lied to repeatedly though, about my own pay, our prospects, etc. I had to learn the hard way that these lies are de facto legal, because employees won't realistically be able to sue.

But hey, the base pay is probably enough.


In that case working at a startup would be a thing someone would only do as a last resort, and the talent pool would consequently be extremely low quality. Sounds damaging to the scene to me.

Have you seen the tech market?

A lot of good engineers are out of work. They'll gladly take what they can


We're going to be in our 70s still writing code because LLMs will dumb down the next generation to the point where they won't be able to get software to work.

Which luckily coincides with our social security and retirement systems collapsing.


Excellent prediction. Seems like it always happens.

In a couple years I'll be in my 70's and starting to write code again for this very reason.

Not LLMs though, I've got my hands full getting regular software to perform :\


For fun?

Or do you actually need the money?

In my 20s I wanted to retire by 40. Now in my 30s I've accepted that's impossible.

I like programming and working on projects; I hate filing TPS reports all day and never-ending meetings.


>For fun?

Good question, but God, no.

Just to get more out of the electronics where others can't match what I had decades ago. Things have come a long way but icing on the cake is still needed for a more complete solution, and by now it's more clear than ever what to do.

Actually the first year after "retiring" from my long-term employer was spent on music servers as a hobbyist. Then right back to industrial chemical work since. It's been nice not to have any bosses or deadlines though.

>Or do you actually need the money?

Not really, actually waiting until 70 to collect Social Security so I will get the maximum available to me, and haven't even started drawing from my main retirement fund. I plan to start my second company funded entirely by the Social Security though.

>In my 20s I wanted to retire by 40. Now in my 30s I've accepted that's impossible.

This is one area where I am very very far from the mainstream. I grew up in a "retirement community" known as South Florida. Where most people have always been over 65. Nothing like the 50 states from Orlando on up. Already been there and done that when I was young and things were way more unspoiled. When I was still a teenager (Nixon Recession) we were some of the first in the USA where it was plain to see that natives like me would not be able to afford to live in our own hometown. Even though student life was about as easy as the majority of happy retirees. I knew I already had it good, and expected to always continue to run a business of some kind when I got to be a senior citizen, and never stop. There were really so many more examples of diverse old-timers than any other place I am aware of.

>I like programming and working on projects; I hate filing TPS reports all day and never-ending meetings.

I actually do like programming too, or I wouldn't have done it at all. I started early and have done some pioneering work, but never was in a software company. There were just not many people who could do the programming everywhere it was needed as computerization proliferated in petrochemicals. Now there's all kinds of commercial software, and all I have to do is "just" tie up the loose ends if I want to. I mainly did much more complete things on my own, and the way I wanted to. Still only when needed, and not every year. In my business I earned money by using my own code, not by selling it.

I know what you mean about never ending BS, big corporate industrial bureaucracy was challenging enough to survive around as a contractor, I don't think I could tolerate "lack of progress" reports or frequent pointless meetings for code on top of that, especially when I'm trying to keep my nose to the grindstone and really get something worthwhile accomplished :)


I actually think I'm trying to get to where you're at.

I like programming. I want to start a company and hire smart people.

But I don't want that to be my main means of support.

>Just to get more out of the electronics where others can't match what I had decades ago.

I'm forced to assume you have a particular niche here.

I hope to be able to write code as long as I'm here, but I want it to be a hobby when I'm old.

Hopefully the hobby includes collaborations with others. A lot of people have vanity wine shops and book stores which lose money; I want a vanity game studio (maybe music production software too).


Yup, just like my dad built his own house, and I have to call a plumber/electrician.

I can do SOME things, but for more advanced, I need to call a professional.

Coincidentally, the plumber/electrician always complains about the work done by the person before him/her. Kinda like I do when I need to fix someone else's code.


I mean seriously is this the prediction folks are going with? Ok so we can build something like our SOTA coding agents today, breathing life into these things that 3 years ago were laughable science fiction, and your prediction is it will be worse from here on out? Do you realize coding is a verifiable domain which means we don’t technically even need any human data to improve these models? Like in your movie of 2050 everyone’s throwing their hands up “oh no we made them dumber because people don’t need to take 8 years of school and industry experience to build a good UI and industry best practice backend infrastructure”. I guess we can all predict what we want but my god

That's an INCREDIBLY good point about synthetic training data. During model training, AI agents could pretty much start their own coding projects, based on AI-generated wish-lists of features, and verify progress on their own. This leads to limitless training data in the area of coding.

Coding might be cooked.


> breathing life into these things that 3 years ago were laughable science fiction

LLMs were not fiction three years ago. Bidirectional text encoders are over a decade old.


Coding agents are what I’m talking about; they are also an old idea. Everything is an old idea; what is new, and a major step change, is their realized capability in December 2025.

You are the first person I've ever heard call Dec 2025 a "major step change" moment in AI. And I've been following this space since BERT.

No, I’m saying AS OF Dec 2025 - 2025 itself being a step change, in that coding agent adoption took off as a result of model quality and agent interfaces being good enough.

Understood, but I still think you're exaggerating. Tool use is a 2024 thing, and progress on model quality this year has been downright vomit-inducing (looking at you, OpenAI...)

What would you have expected model quality to be this year? It’s greatly exceeded my expectations. I’m genuinely confused by this perspective… considering where we were a very, very short time ago.
