
I often end up giving the opposite unpopular opinion: RPis are overkill for a lot of DIY IoT uses. Want something that opens your curtains or flashes some RGB lights in your hallway or whatever? Pick up an Arduino / ESP32 with built-in WiFi, often an Ethernet port, and tons of GPIO. It consumes milliwatts of power, boots in under a second, and is cheap enough to be disposable.


This combined with the main article shows exactly why the Raspberry Pi is as popular as it is. When I'm starting a new project I can either worry about Arduinos or ESP32s or other microcontrollers or various NUCs or getting a board fabricated from scratch... or I can grab an RPi from the drawer and know that it will just work, regardless of whether I need to toggle some RGB lights, browse the web, or run a Kubernetes cluster.

There is nothing else out there that hits the sweet spot between price, power consumption, processing speed, extensibility, software compatibility, out-of-box experience, and lots more.


I guess that's true for a certain wealth target. If you consider the Pi as the sweet spot for price, have at it.

But https://pine64.com/product/pinecone-bl602-evaluation-board/ costs $4. It fits a different sweet spot for price, power consumption, etc. A drawer-full of 20 Pis could run me $2000. A drawer-full of Pinenuts would run me... well perhaps $2000 because I could fit 1000 of them in a drawer.

If I want to browse the web, PineTab probably beats Pi. And that's only considering a single vendor.

Not knocking Pi. If it works for you, that's fine with me.


That’s apples to oranges; a more comparable board would be the Pi Pico W, which is like $6, has better documentation, and is more likely to be available when you need one.

I have a bunch of Pine devices, but if you think Raspberry Pis are difficult to come by, finding a Pine device in stock over the last few years has been a challenge, at least for me personally. Things have started to get much better though, it seems.


Not only is Pine stock hard to find, but I've basically never heard of a well-built Pine64 product, and that's coming from someone with three of them. They're all buggy, flimsy doorstops at this point. Whereas my biggest complaints about Pis are that (1) I broke a plastic case for one of mine while it was in a moving box, oops, and (2) SD cards are fragile and a pain in the butt.


The smaller-than-a-fingernail "user-unfriendly SD card" is replaceable with USB memory as the boot device.


A 4GB Raspberry Pi 4 currently costs... $169 USD on Amazon (https://www.amazon.com/Raspberry-Model-2019-Quad-Bluetooth/d...).

I can find more powerful laptops for cheaper. I can get a Dell R720 which has drastically more computing power (32x RAM, much more powerful CPU) for twice the price.

RPI used to be economical. If it still cost $50 USD, I'd grab them in a heartbeat to do these types of projects, but as it is, on price, they are almost Apple levels of overpriced (if not more).


The RPi starts at $35. The prices you’re seeing on Amazon are scalpers taking advantage of the chip shortage. When Adafruit, for example, has them in stock, they’re at MSRP.


If you can't order a dozen any day of the week for the list price, they aren't really available at the list price.

As far as I can tell, the only Pi that you can reliably buy in quantity 1 at list price is the Pico.


I mean, isn't the truth somewhere in between? You can probably get one a lot cheaper than $150 or whatever it's selling for on Amazon if you don't need it today. At the same time, you're right that it means something that you can't just go out to a store and buy one at anything remotely close to MSRP.


Pi Zero 2 costs $15. Pico series starts from $4. There are plenty of cheap options if you don't need the extra power.


It is the software compatibility. Things working out of the box, without hassle. Almost nothing beats that. Almost everything else is secondary.


I find the lifetime effort of an AVR/ESP8266/ESP32 to be lower than a Pi.

I’ve had AVRs (of mine) running in the house for 9+ years without being touched (and ESPs for over 5).

Things do just work out of the box on them (often more easily than installing Raspbian, getting it onto wifi, setting up ssh, looking up the commands to set a GPIO, figuring out cron, rc.d, etc.)

It’s way faster IME to just use the Arduino digitalWrite() functionality in setup() and loop(). (I’m a long-time C/C++ programmer, which helps a bit.) The exploitable footprint is way lower, so you pretty much never need to do a security patch. If the power fails, you’ll never* end up with a bad volume; it just boots back from flash and resumes working.


There's a lot of stuff you can do to Linux on the Pi to make it have some of the nice qualities you're describing.

I've set up my Buildroot project to copy "authorized_keys" and "wpa_supplicant.conf" from the fat32 formatted boot partition to their normal locations. So I flash an SD card, drag and drop the files onto the SD card, plug it in the Pi and SSH right in.

With regards to filesystem corruption on power failure, you can mount the root filesystem as read-only. If you need to write to files you could mount a tmpfs volatile filesystem.
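A sketch of what that might look like in /etc/fstab (the device names, mount points, and sizes here are illustrative, not from the original comment):

```
# Root filesystem mounted read-only so power loss can't corrupt it
/dev/mmcblk0p2   /         ext4    ro,noatime              0  1
# Writable scratch space lives in RAM and is lost on reboot
tmpfs            /tmp      tmpfs   nosuid,nodev,size=32m   0  0
tmpfs            /var/log  tmpfs   nosuid,nodev,size=16m   0  0
```

Anything that genuinely must persist (e.g. sensor logs) then needs to be written somewhere else, such as a network target.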


You make a lot of sense, except the part about price. While you're right that MSRP represents great value for the advantages, the actual cost is way out of whack for what you get with a Pi, simply because they've been in short supply and thus mediated by 'scalpers' for 3+ years now.

I can still see how for some people it is still worth it, because, say an HP thin client decidedly isn't a good substitute if you want to, say, fit it in an outlet box and run off 5v USB power. For anything where you're not using the GPIO/"hats"/whatever though, unless size is a big concern, I would use a small older computer over a Pi.

If I am doing a project with a 'hardware' component tomorrow though, I agree with you, I'd (grudgingly) overpay for a Pi rather than those other things, because, of all platforms with interface GPIO pins that you can use to do cool stuff, the Pi is the one most likely to have "support" out there -- meaning either someone else has already made a tool to do some of the things I want, or someone else has run into the problems I'll run into and prompted a discussion about how to fix it.


I think it depends on the type of project. I found the out of box experience to be super easy with Arduino. You install the IDE, plugin the board via USB and can start writing code right away.

As a full stack web developer, I am always finding myself getting bogged down in activities to support the work of development (managing dependencies, deployments, builds, etc). I don't enjoy that part of being a developer.

The experience of being able to write a few lines of C code and have stuff happen right away is very pleasant and a breath of fresh air. Unless it's necessary, I would rather not complicate things by having to deal with an OS and everything that entails.

You can get an ESP32 (Seeed Studio Xiao) Arduino-compatible board for $5 from Digikey. Incredibly cheap for hobby-scale projects!


Shhhh stop spilling our secret to the software people... The semiconductor shortage is bad enough already...

That said though, programming embedded devices like Arduinos and ESP32s requires a completely different style of thinking than high-level languages or web stuff. Things like MicroPython reduce the friction just enough to make it easy to get on board.


The thing that tripped me up the most was concurrent tasks. In languages like Go or JavaScript it’s pretty easy to figure out how to decrement a timer, or wait until time x to perform a task, without the waiting period being blocking. On an Arduino you have a few options, but none of them are likely to be familiar to high-level software devs.

It isn’t rocket science, but there are loads of little details like that which will make you pause and then write loads of bugs and awful software before you finally figure it out. Meanwhile, accomplishing the same thing with a high level language might be trivial.

It can be discouraging but I’ve come to love it. You learn a lot, and having a physical board doing a tangible thing with actuators and sensors can be really gratifying.


RTOSes are developed to solve this exact problem of concurrency. Try Zephyr, ThreadX (from MS) or FreeRTOS (from Amazon) or whichever OS your microcontroller vendor supports. It will be eye opening, and you get to learn to write safe code in C with synchronization primitives just like our ancestors.


Amazon did not create FreeRTOS; they simply took over maintenance a few years back. I didn’t know that happened until I fact-checked your comment, but it is enough to make me never want to use it ever again. And that makes me very sad.


They improved the licensing to MIT. The pre-acquisition license was "GPL" but it had a noncompliant restriction prohibiting comparative benchmarks. There have also been useful improvements to the codebase. Amazon is a net positive in this case.


FreeRTOS is now MIT licensed, and the primary developer is actually getting paid to work on FreeRTOS.

This can only improve FreeRTOS and the embedded ecosystem in general.

And, if Amazon becomes a problem, people can fork and bail.


An ESP32 running the Arduino framework comes with FreeRTOS out of the box.


>just like our ancestors.

why am I laughing at how much this stings? we're not that old, damnit!


I saw a tee shirt the other day that said "It sucks to be the same age as old people". We are that old. :-)


Ha, I'd never seen FreeRTOS – thanks! It looks awesome.


Many microcontrollers have timers and counters, or even state machines (the PIOs on the RP2040 are a nice example), for this reason. They let you handle those cases outside the main computing unit. The MSP430s are also full of these magic things. They are harder to get introduced to than just writing Python, but you don't realize how powerful they are until you step away from how you were handling things on a multi-GHz computer with a ton of RAM.


And these peripherals can run at hundreds of MHz doing real work every cycle. I could probably do very low-latency audio processing with an interrupt firing at the sample rate. The delay is a few samples instead of at least a couple of ms like on a PC with an audio interface.


You probably could, but I’d recommend getting a chip with a dedicated I2S peripheral and using FreeRTOS with a high-priority dedicated audio task whose sole job is processing audio. You really don’t want to miss an audio sample. It’s very audible, and depending on what you’re doing it makes audio processing essentially a hard real-time constraint.


FreeRTOS has a massive effect on throughput. If your audio processing is predictable, and your interrupts are too, there is no reason to take on that overhead.


Yes, working on an Arm M7 at 480MHz I get 1 to 3 samples of latency on most things I'm doing (mostly synthesis). And that's on a single core, even.


I recently used Rust (RTIC) for that and it was a relatively pleasant experience. It had "tasks" (which cleverly used unused interrupt handlers for context switching, so it was nice and efficient) that could be triggered by various stimuli, not unlike threads, and they had priorities. As long as you busy-waited only in the main, lowest-priority one, it "just worked".


Not having high-level languages available is a big part of the fun of programming microcontrollers.


I completely agree. It brings you back to more fundamental aspects of problem solving that sometimes you miss with higher level programming. There are a lot of things I can do on a modern web server with a language like Go that are pretty cool, but not particularly interesting because it's kind of trivial. There are so many resources available to the program, things to fall back on for resilience, plenty of common problems are ironed out into popular libraries, etc.

Trying to automate something truly reliably and consistently with a microcontroller, on the other hand, can be simultaneously soul-crushing and exciting — there are so many edge cases and challenging problems.

Sometimes I'll spend hours trying to figure out how to interface with a single sensor, and while it isn't important or impressive in the scheme of things, I really enjoy it.


> Sometimes I'll spend hours trying to figure out how to interface with a single sensor, and while it isn't important or impressive in the scheme of things, I really enjoy it.

I found a fake sensor like that once... I was wondering why my code, written per the datasheet, didn't work on a cheapo breakout board I bought off AliExpress.

Then I read the ID register, and it turned out they'd used an older chip that had some of the stuff set up differently...


Same; I bought a couple of waterproof temperature probes under the name of a reputable manufacturer and wound up getting what turned out to be notorious knock-offs. They were way harder to set up, and the actual sensors only measured within several degrees C of each other, haha. I'm a lot more careful about where I order components from now.


You can always run Espruino on your ESP32 -- a Node.js-like environment for microcontrollers. It works well and has lots of drivers for different components.


While we're on the general topic, let me share this Gist: https://gist.github.com/phkahler/1ddddb79fc57072c4269fdd6716...

fade a GPIO LED on/off cyclically in 6 lines of code. It should read "every millisecond or faster" but OK.


Using Rust and esp-idf you can simply call the standard library threading facilities if you want to do concurrency, and it's all handled in the background by FreeRTOS for you.


Realistically you will need to use an RTOS for threads and scheduling on a micro.


This is so true! I just programmed an Arduino for a diy attic fan controller. Was it hard? Not really. Was it a bigger pain in the butt than programming for home assistant? Oh gosh yes. And quite limited in comparison. It really made me appreciate zWave plus python plus home assistant plus appdaemon. High level coding - use it if you can!


I used ESPHome for something similar. It uses a YAML file to generate C++ code for you, and it works like a charm (in 99% of cases). Very easy integration with Home Assistant, and it has support for the usual home-automation peripherals.

https://esphome.io
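For a flavor of what that YAML looks like, here is a minimal sketch of an ESPHome config exposing a single relay as a switch; the device name, board, and pin are placeholders to adapt to your own setup:

```yaml
esphome:
  name: attic-fan        # placeholder device name

esp32:
  board: esp32dev        # placeholder board

wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password

# Enables the native Home Assistant integration
api:

switch:
  - platform: gpio
    pin: GPIO5           # placeholder pin for the relay
    name: "Attic Fan"
```

From this, ESPHome generates and flashes the C++ firmware, and the switch shows up in Home Assistant automatically.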


Honestly, working with all the restrictions of an embedded system like this is something that all programmers should experience at some point. Being forced to actually consider how you're using resources teaches you a lot about how to write efficient code. Or at least it gives you a better understanding of what the machine is actually doing with your code.


Not all embedded systems are that limited. Check the specs of some of the latest flagship phones.


You chose to ignore the "like this" your parent mentioned ;)

And I agree. All programmers should at some point work on a super slow, super limited AVR or something like that. Something where you have to make hard decisions either because you don't have enough RAM, not enough CPU power, not enough storage, not enough I/O pins etc.

Even non flagship phones are more powerful than the boxes running Windows Embedded 20 years ago.


I once ran a project on an ATtiny. That bad boy has 1KiB of flash and 128 bytes of RAM at an unthinkable 4/8MHz.

I can't begin to tell you how incredibly annoying it was, but it taught me a lot. I know a lot of people who would be completely dumbfounded by 128B of memory. That's only thirty-two int32s!


Anyone comfortable with using high level languages on an Amstrad PC1512 will do just fine.

Many still don't grasp how powerful the ESP32 actually is.


To get on-motherboard, one might even say.


And most importantly, it doesn't carry an OS that provides next to no benefit for many of those tasks while suffering from inconsistent timing.

Something like an ESP32 is much more reliable at controlling hardware. It will never miss a triggered limit switch because memory ran out for some reason and it started swapping.


Ah, that kind of timing. I have made several kinds of clocks with RPis, one that ran fine for years driving a little servo to strike an hourly chime like a grandfather clock, which was all very easy thanks to having an OS with NTP and cron.

But I mostly agree with the article, I have a Gigabyte Brix fanless NUC-alike as my real home server, and a couple of Pis doing little things (and switched to 'overlay file system' so running only from memory and not writing to those frail SD cards).
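The hourly trigger for a chime like that really is just a one-line crontab entry (the script path here is hypothetical):

```
# m  h  dom mon dow  command — run at minute 0 of every hour
0    *  *   *   *    /home/pi/strike-chime.sh
```

NTP keeps the system clock honest, and cron handles the scheduling, which is exactly the kind of plumbing a bare microcontroller makes you build yourself.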


> very easy thanks to having an OS with NTP and cron.

An ESP32 can still have both of those things in some capacity.

https://randomnerdtutorials.com/esp32-ntp-client-date-time-a...

https://github.com/DavidMora/esp_cron


The point the GP made was that timing issues weren't a real problem on the rpi thanks to the tools mentioned.


There are different kinds of timing issues.

There's knowing the time, which you can do with something like NTP. That the RPI can manage just fine.

And there's acting with precise timing, eg, if you need to control a mechanism and reliably react on a deadline of a few ms. A RPI doesn't perform well there, which is why 3D printers use microcontrollers instead.


Bare-metal C++ for the Pi: https://github.com/rsta2/circle

Access to most of the hardware and real-time deterministic behavior. It’s a really great project and lets you twiddle those GPIO pins at ridiculous speeds with perfect timing (less than a millisecond).

A Pi comes with a whole bunch of great hardware baked in, so if you have one lying around and want to do some microcontroller stuff, I think it’s a great choice.


It's still an A-profile MPU, not an R- or M-profile MCU, and while it will be fast, its behaviour will be less deterministic than we might like. If you disable the caches and MMU you'll get better consistency. But wouldn't we expect ~microsecond accuracy from a properly configured MCU? ~millisecond accuracy is not a particularly high bar.


You can read pins (well, one) with sub-microsecond latency using the Fast Interrupt Request, though I have not tried this myself. I think a Pi would be more than capable of matching most microcontrollers just due to its very fast clock speed. Add multiple cores with the Pi 4 and you get a crazy amount of compute between each pulse as well.

There are a bunch of clocks that run plenty fast to enable high resolution timing as well.


The high clock speed and multiple cores are great. It's definitely a beefy system. But this is completely orthogonal to timing accuracy and consistency. Speed does not make it more consistent. Tiny low power MCUs have much more accurate and consistent timing.

Low latency can be a good thing, but it's also not related to consistency, particularly when you start looking at what the worst-case scenario can be.


So I am the opposite of an expert here, but I don’t follow. If I have control over the interrupts (which I do) and I have high precision timers (which I have), why can I not drive a gpio pin high for X microseconds accurately? What’s going to stuff it up?


As I mentioned in the previous reply, the CPU caches and the MMU to begin with. You're probably running your application from SDRAM and an SD card. The caches and page tables result in nondeterminism, because the timing depends upon existing cache state, and how long it takes to do a lookup in the page tables. And as soon as you have multiple cores, the cache coherency requirements can cause further subtle effects. This is why MCUs run from SRAM and internal FLASH with XIP, and use an MPU. It can give you cycle-accurate determinism.

The A-profile cores are for throughput and speed, not for accurate or consistent timing. However, you can disable both the cache and the MMU, if you want to, which will get you much closer to the behaviour of a typical M-profile core, modulo the use of SRAM and XIP. If you're running bare metal with your own interrupt handlers, you should get good results, except for the above caveats, but I don't think you'll be able to get results as accurate and consistent as you would be able to achieve with an MCU. But I would love to be proven wrong.

While most of my bare metal experience has been with ST and Nordic parts, I've recently started playing around with a Zynq 7000 FPGA which contains two A9 A-profile cores and GIC. It's a bit more specialised than the MPU since you need to define the AXI buses and peripherals in the FPGA fabric yourself, but it has the same type of interrupt controller and MMU. It will be interesting to profile it and see how comparable it is to the RPi in practice.


This is something that could only really be proven by actually testing and I don’t have a fast enough scope to really prove things.

Having said that, I think some of the concerns have fairly simple mitigations. Because of the high clock speed, I can’t see that disabling the cache and MMU is required. The maximum “stall times” from either of these components should still fall well below what would be needed. It’s bounded non-determinism. That’s completely different from running things under Linux.

Secondly, having multiple cores allows for offloading non-deterministic operations. The primary core can be used for real-time, while still allowing non-deterministic operations on others. The only thing to consider is maximum possible time for synchronization (for which there are some helpful tools).

As I said, I’m far from an expert. It was close to 20 years ago when I last did embedded development for a job, and I was a junior back then anyway. Still, I’d be interested to know if you think I’m way off beam.


I think you're pretty much correct. Whether these details matter is entirely application-specific, but you can go the extra mile if your application requirements demand it.

There are certainly multi-core MPUs and MCUs with a mixture of cores. The i.MX series from NXP have multi-core A7s with an M4 core for realtime use. Some of the ST H7 MCUs have dual M7 and M4 cores for partitioning tasks. There are plenty of others as well, these are the ones I've used in the past and present.


A few ms? In my experience that seems well within the capabilities of Linux. I guess last time I measured wasn't on a Raspberry Pi. I'm kinda tempted to take a shot at profiling this and writing up a blog post since it seems like a useful topic, although it will probably be a few months until I can get around to it.


I think milliseconds overestimates by a few orders of magnitude, but non-real-time OSs really suffer for the intermediate IO stuff you expect in an embedded project (e.g. SPI, I2C, etc)

A long time ago, I was playing with Project Nerves on an Orange Pi running some flavor of Debian. I was doing some I2C transactions (at 400 kHz, each bit is single-digit microseconds), and I ultimately had to add a re-attempt loop because the transactions would fail so often. I found a cutoff of 5 attempts was sufficient to keep going. I don't recall the exact failure rate, but basically, whenever a transaction failed, I'd have to reattempt 2-3 times before it eventually succeeded.

Meanwhile, on a bog-standard Arduino with an ATMega328P, I send the I2C traffic once, and unless the circuit is physically damaged, the transaction will succeed.


No, the consistency of the timing is terrible on Linux.

Seriously, stick a scope or logic analyser on e.g. an I2C line and look at the timing consistency. Even on specialised kernels for realtime use, you can have variable timing delays between each transaction on the bus. And this is all in-kernel stuff that's inconsistent--it looks like it's getting pre-empted during a single I2C_RDWR transaction between receipt of one response and sending of the next message. The actual transmission timing under control of the hardware peripheral is really tight, but the inter-transmission delays are all over the place. Compare it with an MCU where the timing is consistent and accurate, and it's night and day.


The parent comment says

> control a mechanism and reliably react on a deadline of a few ms

I actually did measure this with an oscilloscope on embedded Linux (not a raspberry pi). A PPS signal was fed into Linux, and in response to the interrupt Linux sent a tune command to a radio. Tuning the radio itself had some unknown latency.

End-to-end, including the unknown latency of tuning the radio, I never observed a latency that would even round to 1 ms. That's unpatched and untuned Linux, no PREEMPT_RT. I didn't dig any further because it met our definition of "reliable" and was well, well within our timing budget.

I'll be the first to admit it wasn't some kind of rigorous test, just a casual characterization. I would not suggest anyone use Linux for a pacemaker, airplane flight controller, etc.

This is making me itch to buy an oscilloscope and run some more thorough tests. I'd like to see how PREEMPT_RT, loading, etc changes things.


My profiling was on an NXP i.MX8 MPU, which is an A-profile quad-core SoC very similar to an RPi's. I think it was with a PREEMPT_RT kernel, but I can't guarantee that. I was fairly shocked at the lack of consistency in I2C timing when doing fairly trivial tasks (e.g. a readout of an EEPROM in a single I2C_RDWR request). You wouldn't see this when doing the equivalent on an M-profile MCU with a bare-metal application or an RTOS.

What is acceptable does of course depend upon the requirements of your application, and for many applications Linux is perfectly acceptable. However, for stricter requirements Linux can be a completely inappropriate choice, as can A-profile cores. They are not designed or intended for this type of use.

Profiling this stuff is a really interesting challenge, particularly statistical analysis of all of the collected data to compare different systems or scenarios. I've seen some really interesting behaviours on Linux when it comes to the worst-case timings, and they can occasionally be shockingly bad.


I was referring to that yes, even if Linux performs well in the ideal case, it's not necessarily reliable, and the possible problems are hard to compensate for.

Eg, your process can randomly get stuck because something in the background is checking for updates and IO is being much slower than usual, or the system ran out of RAM and everything got bogged down by swap.

On a microcontroller you just don't have anything else running, so those risks don't exist. Eg, a 3D printer controls a MOSFET to enable/disable the heaters. The system can overheat and actually catch on fire if something makes the software get bogged down badly enough. On a Linux system there's a whole bunch of stuff that can go wrong, most of which is completely outside the software you actually wanted to run.


I guess I feel like things are a bit tangled up here.

Sure, a single purpose MCU controlling a heater MOSFET has a lot fewer failure modes than a Linux device doing the same.

I don't dispute there are a lot fewer ways it's even possible for that system to misbehave.

The original comment was recommending ESP32s over Raspberry Pis for DIY projects like opening your curtains or flashing LEDs. The ESP IDF runs on FreeRTOS, so we're already moving away from the bulletproof single task MCU. People will almost certainly be adding some custom rolled HTTP webserver on top. They might be leaking memory all over the place, there are probably all kinds of interrupts they have no idea about firing off in the background. I wouldn't trust an ESP32 curtain-bot not to strangle me any more than I'd trust a Raspberry Pi based one.

Your example about running out of RAM seems just as relevant to MCUs. You can leak memory and crash an MCU. You can overload an MCU with tasks and degrade performance. You can use cgroups or ulimit to help prevent a bad process from bringing Linux down.

I agree that Linux is not going to be as reliable as going baremetal, and I'm not recommending you use it as a motor controller. But even the most reliable MCU can fail. An MCU can get hit by cosmic rays or ESD. People might spill water on the 3d printer or physically damage it. It's not even a binary "works right or dies" thing. I've voltage glitched MCUs to get them to skip instructions and get into an unanticipated state.

In any case, the best path to safety is to imagine that the computer might be taken over by Skynet and do everything in its power to kill you. Or worse, ruin your print. If safety is the goal it's probably best to achieve through requiring the computer system to take some positive action to keep the heater on. Or even better, a feedback safety mechanism like a thermal fuse.


Being within the capabilities of something and guaranteeing that it will never exceed them are two different things. At least in the past, real-time guarantees for Linux came as part of an optional patch set for the kernel, since guaranteeing that an algorithm would complete within a set time frame, or that things like priority-inversion issues would be handled correctly, came with a performance cost.


I think we are talking about things like interrupt latency, not NTP synchronization.

MCU interrupt latency can be extremely deterministic. I ran some measurements for work and found Linux to be adequate for many uses, but it is a valid concern. There are some Linux kernel patches like PREEMPT_RT that attempt to bound Linux latencies, but generally MCUs are a lot better suited if latency is critical. In part because they just have less software running on them to interfere with timing.


That's not the kind of timing the original point was talking about AFAICT. Real time response is the issue with regular Linux, not vaguely accurate wall time.


I can't really disagree with what you're saying about Pis often being overkill, but I've been using Raspberry Pi Zero Ws for more projects where I might have used ESP32s, and I've been very happy with the choice. Basically any project I have that isn't battery powered or timing critical, I'd prefer to use a Pi.

Zero Ws have a $10 MSRP (of course, huge shortage at the moment). I think they're pretty cost competitive for DIY IoT.

Buildroot makes it really easy to create a custom Linux OS with your software preinstalled, any kind of custom kernel tweaks you want, and an impressive amount of software packages available. If you strip unneeded functionality from your kernel you can boot a lot faster too.

Here's a list of some stuff I like about a Pi Zero W vs an ESP32

* Ease of programming. Flash an SD card and swap it out, without having to hook the device up to a programmer.

* Extremely solid TCP/IP stack.

* Multitasking with real process isolation.

* Program organization (related to above). I find the OS abstraction very useful for enforcing cleaner designs.

* Access to Linux software packages. I can easily add nginx, apache, or lighttpd to my rootfs. It doesn't involve mangling any of my other software packages

* Interactive access. I can debug the applications by sshing into the Pi and looking at logs. I can scp new files onto the Pi.


"huge shortage at the moment"

AFAIK this "at the moment" period has now extended all the way from the time they were introduced to the present day. One long moment for sure.


My recollection is that shit really hit the fan after the pandemic? It looks like the Zero W was released in 2017. I remember that up until the last couple of years, you could at least get one per order from Adafruit/Sparkfun at MSRP. I think there may have even been a time when you could get, like, 10 per order.


Strongly agree here. Local SD storage IO aside, a Pi 4B 8GB is basically on par with a high end desktop I had in the mid-2000s.

That’s an insane amount of compute on something that can draw sub-2W over PoE, or 1.3W on WiFi.

Though for all the homelabbers out there: you probably have a NAS. Use Log2RAM, present some iSCSI volumes to the Pi, and you’d be staggered by the very real work a Pi can do when not saddled with the SD card, without having to directly attach storage.
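On the Pi side, the iSCSI piece is just open-iscsi; a rough sketch (the portal address and IQN below are hypothetical, substitute your NAS's details):

```shell
sudo apt install open-iscsi
# Ask the NAS what targets it exports:
sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.50
# Log in to one; the LUN then shows up as a normal block device:
sudo iscsiadm -m node -T iqn.2004-04.com.example:nas.pivol --login
# Format and mount it like any local disk (e.g. it appeared as /dev/sda):
sudo mkfs.ext4 /dev/sda
sudo mount /dev/sda /mnt/pivol
```

With logs in RAM via Log2RAM and bulk IO on the NAS, the SD card barely gets touched.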

- -

I’d argue that for “IoT” stuff even Arduinos are overkill in terms of computing power. Granted, I realize that's largely down to BOM optimization and to leaving headroom (more power than strictly needed) so the platform stays forgiving for beginners.

- -

Granted, going back to the original article: yes, 1L form factors are nuts. Mac Minis are insane (but not cheap), and 35W Zen 3 Ryzen 7 PRO machines offer similarly insane power for less if you need x86. But much older former-office 1Ls are everywhere, offering hobbyists ludicrous performance for pennies on the dollar, with (as the article mentions) sub-10W idle.

- -

Honestly, it’s just a great time to be a tinkerer. We’re drowning in ubiquitous, cheap compute.


Distro/stack recommendations for a Pi? Intrigued by your use of Log2RAM and iSCSI volumes!


Personally I've standardized on "DietPi" which is functionally a super-stripped down Debian.


I don't see it mentioned in the thread, so I'll link to ESPHome. It's an incredible platform when you just want to read some sensors or put some relays or switches onto the network, and it integrates with Home Assistant. One of the most enjoyable firmware installation procedures I've ever encountered.

https://esphome.io

Most of the use cases I've seen have them report to Home Assistant or something similar, but I think some people use them directly without a host.
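For a flavor of it, a minimal config for a single GPIO relay looks something like this (device name and pin are made up, and the schema shifts between ESPHome versions; esphome.io has the authoritative reference):

```yaml
esphome:
  name: hallway-relay    # hypothetical device name

esp32:
  board: esp32dev

wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password

api:    # native Home Assistant integration
ota:    # over-the-air updates after the first flash

switch:
  - platform: gpio
    pin: GPIO5           # whichever pin drives your relay
    name: "Hallway Relay"
```

That YAML is the entire "firmware": ESPHome compiles it into an image, and after the first flash all updates go over Wi-Fi.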


Well, it is right-sized to run a hub controlling all those little devices.

But yeah, especially now with various ESP32 firmwares like ESPHome, you can essentially just write a YAML file specifying where to listen and which bit to flip, and get a simple switch/controller with zero actual coding.

There is even custom firmware to make off-the-shelf IoT devices work with "open" standards.


I've been playing with home automation and I agree. This guy is creating a mini AWS at his place, while I am using a small Pi to run my Lua server[1] that takes less than 2MB of RAM and can run even on a microcontroller... if anything, the Pi is totally overkill already, but it's nice to be on GNU/Linux and have access to so many utilities, even a browser.

[1] https://realtimelogic.com/ba/doc/


> Want to have something that opens your curtains or flashes some RGB lights in your hallway or whatever? Pick yourself up an Arduino / ESP32

The reason I'd go for an ESP32 for this use case isn't because RPis are overkill, but rather because it's much easier to write, compile, flash, and run code on bare metal on an ESP32 that has access to Bluetooth and Wi-Fi but isn't vulnerable to file system corruption. You can do this on an RPi, it's just much harder.


ESP32-C3 is cool if you want to play with RISC-V for cheap.


Even in the Raspberry ecosystem you can easily pick up a Pico, which offers all of that.


I prefer the Pico to the ESP32. I'm very impressed with the quality and organization of the Pico SDK and documentation.

Although I have heard the Pico is not very competitive in terms of power optimization. I haven't done enough battery-powered projects to run into those issues, though.


Even easier, in a way: a Parallax Propeller, programmed in SPIN (their proprietary structured BASIC-like language), or C++, etc.

Exotic, but a simpler mental model for most people: 8 independent cores and a simple, non-interrupt-driven programming model for basic timing and I/O, with any of the 32 pins configurable as either input or output.

And it's super easy to interface with both 3V and 5V logic, available in DIP, and in various easy-to-use board formats.


Yes. I followed a tutorial for building a doorbell that uses an ESP32 to send a notification to our phones when the button is pushed. I haven’t touched it since installation and I’m always a bit amazed it still works. An RPi would’ve been overkill.



