Yes, the embedded space pays terribly, and the employers don't seem great on the whole. When I was at Google I got to work on embedded stuff and really liked it; but I was getting a Google salary. When I left Google I pursued IoT and embedded jobs a bit, and while I was not expecting Google-level compensation at all, I was astounded at what was going on there, pay-wise. General back-end software engineering jobs pay better.
The problem is I really like writing C++ (and Rust, etc.)! So I'm cultivating other "systems programming" type career paths; DB internals have always fascinated me, so I'm trying that direction. I'm working full-time in Rust now, but it's hard to find Rust work that isn't crypto-tainted.
Other people have pointed out that lower pay in embedded has to do with the influence of EE salaries, which are sadly lower than they rightfully should be.
I did a few months of contracting at a major voting machine company. They make a significant portion of all US voting machines. They had 4 developer teams: Firmware (C++, where I was), UI (web tech on a voting machine), poll book (Java), and a web/support team. Before I was hired as part of a massive influx of contractors, each team was something like 3~5 people, except UI, which was a new team created during the contractor hiring spree.
After the work was done, they shed nearly all the contractors and about half of their previous full-time employees. They just quadrupled their staff to make a voting machine, then fired them all.
They hired me into an "Embedded Software" role on their Firmware team. It was a total shitshow: we didn't have unit tests or CI. The new hires insisted on them, and I spent a bunch of time maintaining a Jenkins setup for the team, which really helped.
The pay wasn't great, a little less than defense contracting, which was a little less than insurance companies and slow finance companies.
If that is what most embedded development is like, then I see why it brings the average down.
Well the bug reports were like: "I clicked around and the UI froze/crashed"… no info on how to reproduce, no logs, nothing. Just that bit of information.
When was that? I am so glad that for the past 5~6 years every contract I have worked on has had unit tests, and for the past 10~12 every place has at least accepted their value.
The last time I actually had to argue for unit tests was in defense contracting, and not for the team I was working on. Some idiot at a lunch-and-learn community thing tried to claim there was no short-term gain from them, even though we had defined short term as a matter of months. He could not believe that unit tests can help the developer writing them and help the team the very next sprint.
I hope he learned better or got forced out of tech.
I have worked on codebases where full coverage was obtained using service level tests in a proper pipeline. If you couldn't add a service level test that got your PR coverage, then you were referred to YAGNI and told to stop doing stuff that wasn't part of your task. I was OK with that; it worked well, and the tests were both easier and faster to write. If the services had been too large, maybe it would have fallen apart?
I have also worked on codebases where there were only tests at the level of collections of services. Those took too long to run to be useful. I want to push and see if I broke anything, not wait hours and hours. If a full system level test could complete in a few minutes I think I would be fine with that too. The key is checking your coverage to know you are testing stuff, and to write tests that matter. Coverage can be an antimetric.
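To make the "coverage can be an antimetric" point concrete, here's a toy illustration (the function and tests are made up): both tests below execute exactly the same lines, so a coverage gate is equally happy with either, but only one of them would actually catch a regression.

    # Toy example: identical line coverage, very different value.
    def apply_discount(price, percent):
        return round(price * (1 - percent / 100), 2)

    def test_coverage_only():
        # Executes the function, inflates the coverage number, asserts nothing.
        apply_discount(100, 10)

    def test_actually_checks_behaviour():
        # Same coverage, but this one fails if the math regresses.
        assert apply_discount(100, 10) == 90.0
        assert apply_discount(19.99, 0) == 19.99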
> I have worked on codebases where full coverage was obtained using service level tests in a proper pipeline.
Sounds ideal to me. Add testing where it is cheap enough to justify, and maybe just a little more than you think you really need because they pay off so often.
If your mocks and fixtures get too big you might not be testing the code in question meaningfully.
Coverage and test quality/quantity need to scale with the associated risk. Perhaps "Flight planner for Air Force Weather" gets more testing than "Video game User Interface" and that's ok. Even in gaming, the engine team should have more tests than the game teams.
Yeah, but in real life scenarios, the difference in actual numbers, as opposed to percentages, matters.
Let's imagine that the split for all software shops is 80/20, with 80% being crappy, and 20% being decent. If there are 10 embedded software shops out there, it means there are only 2 decent embedded shops out there that an engineer can work at. Meanwhile, if there are 1000 non-embedded software shops, it means that there are 200 decent shops an engineer can work at.
This creates a wild disparity, even if the ratio of crappy to decent is exactly the same for all software shops in general.
The 20% decent shops are retaining their engineers and only growing at a sustainable rate. Available new jobs are filled with a referral since every employee is constantly bragging to their friends. So they post few / no new jobs online.
The 80% crappy shops are shedding employees (turnover) and also poorly managed so they fire everyone and rehire later. Only the worst employees decide to remain during such a purge. So most new posted jobs (more than 80%) are for such companies.
Then the 80% crappy companies talk about their issues finding staff and you get articles complaining how hard it is to find XYZ employees (interns, C++, even supermarket staff). But the real problem is the company in question, not the industry as a whole.
In real life, engineers aren't just interchangeable cogs who can seek work in any organization. There is also a smaller number of people who can/want to do systems-level/embedded programming.
Yes, I agree with you. Which is why I explained that even though the overall ratio of crappy/decent shops might be the same for all software work areas, embedded devs are the ones who draw the short straw.
Just another project manager trying to hire enough people to make the project happen on time. I am in another one of those situations right now. Nothing to do with anything sensitive, just a team of 9 mothers trying to make a baby in 1 month.
The code is quite secure, but the process and company are... typical processes and company people. Paper ballots and physical boxes are more secure if good practices are followed.
At one point I was tasked with shuffling the data layout on disk in real time to mitigate de-anonymization attacks. Security was a real concern.
Crypto everywhere. The voted ballots were encrypted with keys generated and delivered immediately before the election. No networking by default. The end product had all the right things.
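To give a flavour of what "crypto everywhere" means, here is a purely illustrative sketch of the general idea (made up for this comment, not the actual system): generate a fresh election key pair shortly before the election, and encrypt every cast ballot to the public half, e.g. with the Python cryptography package.

    # Illustrative only: fresh key pair just before the election,
    # each cast ballot encrypted to the public key.
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    ballot = b"precinct=12;race=governor;choice=3"      # made-up payload
    encrypted_ballot = public_key.encrypt(ballot, oaep)  # what the machine stores

    # Only the holder of the private key can recover the vote after polls close.
    assert private_key.decrypt(encrypted_ballot, oaep) == ballot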
That said, no one had clearances, third-party auditors were morons, and pay wasn't great. So if I were an attacker I would just try to bribe people to make the changes I want. You can't bribe a ballot box company into election tampering, because they just make boxes.
With all that effort they are still needless voting machines: each one counts only a few thousand votes, and not all of them produce a physical paper trail. Because they have software and logic in them, they need a constant chain of custody to make sure that the code we wrote is what is actually run.
Just use a box and paper; it is safer in all the ways digital things suck. A precinct counting votes only needs to tally a few thousand ballots, so it might take a team of people an hour or two, less time than it takes to fix a potential technical problem.
And paper can more easily have bipartisan oversight and can have physical security measures that are impractical on a computer.
All that said, I have no reason to believe our elections have been tampered with on a national level, or that anyone other than a local Republican may have used our machines to steal elections, and even then there is no firm or even circumstantial evidence, just baseless suspicions and conspiracy-theory-level anomalies.
I am from Brazil. If you saw the news, the current president, who just lost the election, has been insisting for years that elections here are untrustworthy.
The reason is simple: electronic voting machines with no logging, no paper trail, nothing. And ordinary people don't have permission to do penetration tests or read the entire source. All of it is proprietary and secretive, with basically no public testing.
For years the now-president, back when he was still a congressman, tried to pass a law requiring the voting machines to print the vote and deposit it in a box. That way people could count the printed votes instead of just trusting the machine, but the government kept inventing reasons not to allow this, and even when a law passed, the judiciary struck it down.
So today people are protesting; seemingly almost half of the country voted for him, the difference was tiny, and they are protesting. The winner insists the election was fair, but how do you prove it when the machines are proprietary and secret? How do you prove it when they keep no log of votes and instead just print the totals? In a country full of corruption, where the mafia literally threw a party to celebrate a specific person becoming chief election judge, how do you trust that nobody bribed the manufacturer or the programmers?
Most American voting machines print a ballot and let the voter review it, but not all. There have been some jurisdictions that have given up on that for reasons that seem bad and vague to me.
Mandating that voting machines be open source seems like a good idea to me. Here in the US we have third-party auditing companies. The various US states and the federal government all have different testing/auditing labs that they have certified and trust. Each voting machine company then has to convince those labs that its product is fit to sell to the governments that trust them. The final build that the lab signs off on gets a cryptographic signature, and the poll workers are supposed to check that it matches what they are given to run on their machines just before they set up their machines for voting.
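As a rough illustration of what that last check amounts to (a made-up sketch, not any vendor's actual tooling; real systems verify a proper signature rather than a bare digest): the lab publishes a fingerprint of the certified build, and the setup tool refuses to proceed unless the image it was handed hashes to the same value.

    # Made-up sketch: compare the firmware image against the fingerprint
    # published for the lab-certified build before installing it.
    import hashlib
    import sys

    CERTIFIED_SHA256 = "..."  # fingerprint published by the certification lab

    def fingerprint(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    if fingerprint(sys.argv[1]) != CERTIFIED_SHA256:
        sys.exit("image does not match the certified build - do not install")
    print("image matches certified build")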
Does Brazil have anything similar with auditors or inspectors? Or at least some crypto connecting the vendor to the polling locations?
This is really interesting. Here in Australia we still use paper ballots for the lower house of parliament. I volunteered as a "scrutineer" for one of the parties, which let me go into the warehouse where the ballots were being counted and watch. As a scrutineer, you physically look over the shoulder of the person counting votes and double-check their work. You can't touch anything, but if you disagree with the vote, you can flag it. The voting slip gets physically sent to a committee somewhere for final judgement.
I highly recommend the experience if you’re Australian - it was very cool seeing democracy in action. I personally have a lot more faith in our system of voting after seeing it in action first hand.
That said, the senate votes are all typed into a computer by the election officials. It’s just too hard to do preferential voting by hand with ~200 candidates on the ballot.
>EE salaries are sadly lower than they rightfully should be.
Profit margins of an EE will almost always be lower than profit margins of a software engineer. A team of software engineers can quickly scale to selling to millions of users (and collect nearly 100% of the resulting revenue as pure profit), whereas a team of EE's cannot a) scale their customer base as quickly, since scaling up manufacturing takes time and b) realize a profit anywhere close to 100% of revenue, since much of their revenue goes towards manufacturing and distribution costs.
In other words, the marginal cost of selling one unit of a physical product is always nonzero, whereas the marginal cost of selling one unit of software is often (very close to) zero. That differential goes towards higher salaries for the software engineer.
There is a shorter-term effect too: for at least a generation there have been too many new grads able to design hardware I2C devices, which also means too many new grads able to write I2C driver software as a backup career, resulting in low pay across the board in both fields.
Just because a student likes the field, and can pass the ever more difficult filter classes along the way, doesn't mean there's a job waiting after graduation in that field. For some reason students keep signing up for an EE education even though the odds of getting an EE job after graduation are very low. The odds of getting any job, even a high-paying one, are good, because the majority of the graduating class goes into software development, mostly embedded. But most kids who can, like, bias a class-C amplifier transistor will never have a job doing EE stuff; there are just too many EE grads for too few EE jobs.
As another example of that effect, see also K-12 education, where for at least one generation the bottom half of the graduating class was never employed in the field, at least in my state. Enrollment in K-12 education programs has absolutely cratered in recent years, and now most grads have a reasonable chance of getting a job in their field.
I understand this, but I think the biggest driver for software salaries is the sheer number of companies interested in hiring software engineers. Plenty of hardware companies are very profitable but do not raise their salaries because there is no market pressure to do so: the more limited job market means EEs/embedded engineers do not switch companies nearly as frequently, and switching companies is generally the best way to get a substantial salary increase.
Which hardware companies have SaaS margins? I think 10% margin is very good for a hardware company. A software company would aim for multiple times that.
I'm really hoping the salaries for EE type roles start to match software as the grey beards start to retire and talent becomes scarce. We've got a legion of grads going into CS, but EE classes are a fraction of that. Despite that, software roles are often more than double the salary. Any role I go into as an EE/Embedded Systems engineer, I'm more often than not the youngest by 20-30 years. I wonder how the industry in the West is going to survive it, beyond hiring contractors from India/South Asia.
Yeah, same. I'm an EE camping out in software because of the pay. It's also just easier work. I would much rather be intellectually challenged coding firmware or doing embedded work. I didn't go to school to build web widgets. It's just that EE pays so badly you can't pay the bills. I was getting offered numbers that wouldn't have covered renting my own studio apartment. For EE work. It's insulting.
...which is ridiculous because of what it takes to become an EE VS what it takes to become a "web developer". Basically anyone who can handle basic logic can be a web developer if they just put in a bit of effort. Degree or not!
To become an EE you need a 4-year degree and a whole heck of a lot of knowledge about things that are a real pain in the ass for laypeople like calculating inductance, capacitance, and impedance (<shudder>).
You don't need much knowledge to make a circuit board, no. But when your boss wants to add a USB 3.0 hub to your product it suddenly becomes a, "wow, we really need an EE" job (because the spec has so many requirements and you're not going to get your product certified unless you can demonstrate that you followed it).
> Basically anyone who can handle basic logic can be a web developer if they just put in a bit of effort. Degree or not!
A "modern" web dev needs to know a whole bunch of crap nowadays. Not saying it's insanely hard but its not that easy. But sure, getting a job as a junior should be way easier than EE.
> You don't need much knowledge to make a circuit board
Not quite.
For most modern high speed designs PCB's are very far from being simple. Signal and power integrity are critical. It doesn't help that these can be "voodoo" fields where, a bit like RF, years of experience as well as the theoretical foundation are really important.
That said, I think I know where you are coming from. A ton of low-performance embedded designs these days can be done by people with very little EE education. Anyone can learn anything online. There are plenty of resources. This is a good thing, of course.
As someone who's not an EE (with no degree in anything at all) and has made many circuit boards... No, they're not that complicated. Not really.
I've even designed an analog hall effect keyboard PCB with integrated IR sensor, dual power regulators (to handle 95 ultra bright RGB LEDs), invented-by-me analog hall effect rotary encoders (incremental and absolute), and more. It wasn't rocket science.
> I've even designed an analog hall effect keyboard PCB with integrated IR sensor, dual power regulators (to handle 95 ultra bright RGB LEDs), invented-by-me analog hall effect rotary encoders (incremental and absolute), and more. It wasn't rocket science.
Sorry to burst your bubble...
Glad you learned enough to do it and had fun with it.
Yet, such PCB's are trivial to design. Heck, one could auto-route something like that and get a working board for prototyping. In fact, I have done exactly that many times over the last four decades for keyboard/control-panel boards. And auto-routers suck. The fact that one can actually use one for a PCB is a good indicator of how trivial that design might be.
One of the big differences between hobby PCB's and professional EE-driven PCB's is in manufacturing and reliability.
It's one thing to make one or a few of something, anything. Quite another to make hundreds, thousands, tens of thousands, millions. As an example, I am pretty sure you did not run your design through safety, environmental, vibration, susceptibility and emissions testing.
For an example of complex design one can look at such things as almost any dynamic RAM implementation, from SDR to DDRn. Timing, signal integrity and power integrity are a big deal and can make a massive difference in performance and reliability.
Another example is just-about any PCB used in automotive designs. They have to survive brutal power, thermal, vibration and RF environments for decades. This is not trivial.
Other fields with critical needs are medical, aerospace (which includes civilian flight) and industrial.
Consumer electronics is actually quite critical at the limit because you are dealing with very large numbers of units being manufactured. In other words, while a design for something like an industrial CNC machine might only require a few hundred or a few thousand boards per year, in consumer electronics one can easily be in a situation where we are running 50K to 200K boards per month. Bad designs can literally sink a company.
I understand though. From the frame of reference of a hobbyist or enthusiast everything can look simple. That's pretty much because they just don't have enough knowledge or information. This means they only have access to the most superficial of constraints, which makes PCB's seem easy, maybe even trivial.
As my wife likes to say: A google search is not a substitute for my medical degree.
No, analog keyboard PCBs are not trivial at all. You have to keep a lot of things in mind when routing your analog VS digital tracks. Especially if you've got per-key RGB LEDs right next to your hall effect sensors (can be a lot of noise if you don't do it right).
Not only that but you also have to figure out how to get loads of analog sensors into a microcontroller that may only have 4 analog pins (e.g. RP2040). In a way that can be scanned fast enough for 1ms response times (again, without generating a ton of noise).
It's not as simple as an electromechanical keyboard PCB, which is quite trivial.
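For anyone curious, the usual answer to the "lots of sensors into 4 ADC pins" problem mentioned above is an analog multiplexer in front of a single ADC pin; the noise and settling-time trade-offs live in how fast you walk its select lines. A minimal sketch of that approach (MicroPython on an RP2040; pin numbers and timings are made up):

    # Minimal sketch: scan 16 hall sensors through a 16:1 analog mux
    # into one RP2040 ADC pin. Pin numbers/timings are illustrative.
    from machine import ADC, Pin
    import time

    adc = ADC(26)                                      # mux output -> GP26 / ADC0
    select = [Pin(n, Pin.OUT) for n in (2, 3, 4, 5)]   # mux address lines S0..S3

    def read_channel(ch):
        for bit, pin in enumerate(select):             # drive the mux address
            pin.value((ch >> bit) & 1)
        time.sleep_us(5)                               # let the mux output settle
        return adc.read_u16()                          # 0..65535

    while True:
        levels = [read_channel(ch) for ch in range(16)]
        # ...convert raw levels to key depth / pressed state here...
        time.sleep_ms(1)                               # aim for ~1 ms scan cadence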
> For an example of complex design one can look at such things as almost any dynamic RAM implementation, from SDR to DDRn. Timing, signal integrity and power integrity are a big deal and can make a massive difference in performance and reliability.
...except 99% of all PCBs aren't that complicated. You don't need to know the specifics of RF in order to design a board that controls some LEDs.
> No, analog keyboard PCBs are not trivial at all. You have to keep a lot of things in mind when routing your analog VS digital tracks. Especially if you've got per-key RGB LEDs right next to your hall effect sensors (can be a lot of noise if you don't do it right).
Sorry. This isn't meant as an insult at all. Yes, this stuff is trivial. I know it might not seem that way to you because you are not an EE. I get it. That does not make it complex. For you, maybe. Not for me or any capable EE.
Yes, having designed plenty of challenging analog products I can definitely say that analog has its own set of challenges. Designing keyboards with hall effect switches isn't in that category.
In fact, I could easily make the argument that high speed digital is actually analog design.
> You don't need to know the specifics of RF in order to design a board that controls some LEDs.
I would like to see your boards pass FCC, CE, TUV and UL certification.
Look, there's nothing wrong with being a hobbyist and having a great time designing stuff. Bravo for having learned enough to have done what you shared. That is definitely something to admire. Just understand that your experience does not give you the ability to fully grasp professional EE reality.
I don't really see why you would create a keyboard in this way.
> ...except 99% of all PCBs aren't that complicated. You don't need to know the specifics of RF in order to design a board that controls some LEDs.
There is a difference between creating something that works, which is easy enough to do, and creating something that is competitive on the consumer market, i.e. that BARELY works. This is the difference and why you would pay an EE to do this job.
Honestly all of that sounds like it maps pretty well to programming.
I sometimes run little 30-minute programming workshops where I teach people enough of the basics that they can walk away with something they've made. Give a novice 3 months to go through a bootcamp and they can become a half-useful programmer.
But the “other half” of their knowledge will take a lifetime to learn. In just the last 2 weeks my job has involved: crypto algorithms, security threat modelling, distributed systems design, network protocols, binary serialisation, Async vs sync design choices, algorithmic optimization and CRDTs.
It’s easy enough to be a “good enough” programmer with a few months of study. But it takes a lifetime of work if you want to be an all terrain developer.
> Honestly all of that sounds like it maps pretty well to programming.
Yes, definitely. And, BTW, this also means that lots of useful work can be done without necessarily having golden credentials.
Here's where I see a huge difference between hardware and software at scale (I have been doing so for 40 years): Hardware, again, at scale, represents a serious financial and technical commitment at the point of release. Software gives you the ability to release a minimum-viable-product that mostly works and issues fixes or updates as often as needed.
If we imagine a world where v1.0 of a piece of software must work 100% correctly and have a useful service life of, say, ten or twenty years, we come close to the kind of commitment real electronics design requires. You have to get it right or the company is out of business. Not so with most software products, be it embedded, desktop, industrial or web.
If I go back to the late 80's, I remember releasing a small electronic product that gave us tons of problems. The design went through extensive testing --or so I thought-- and yet, contact with actual users managed to reveal problems. I had to accelerate the next-generation design, put it through more extensive testing and release it. We had to replace hundreds of the first generation units for free because we felt it did not represent what we wanted to deliver. This is where knowledge and experience can be invaluable.
I design the majority of the electronics for my company and pretty much all the firmware as well.
Wages are not bad for the area I'm in, which is fairly rural, but could be a lot better for the work involved. Moving to a big city would probably help, but I like the quieter lifestyle.
I've not done any web development full time for close to 20 years; I first started out writing JSP code, and have dabbled with a few personal website designs since then. I'm sure if I went back to web development it might pay more, but I don't think it would have the same level of job satisfaction for me. I try to keep up to date on some of the technologies used, but it seems overwhelming from the outside.
Part of it is resistance to change, but I do find the work for the most part enjoyable, so it's a risk to change jobs as well.
The demand for EE roles is far less than the demand for Software roles.
For a simple thought experiment, imagine if you could get a good developer for $20 an hour. Every single company on the planet, from a mom and pop shop to big corporations could turn a profit off their work.
Now imagine you could get an electrical engineer for the same price. What percent of businesses could profit from electrical engineering? 2%?
My point wasn't about demand though. I'm well aware it lags behind SW companies by a staggering margin. A small team of SEs with enough money to buy some laptops between them can create multiple millions of dollars' worth of value in a few years. It would take a team of EEs 5x the time and 25x the initial investment to create the same. Of course there are going to be hundreds of SE companies for every EE one.
My comment was regarding supply. EE is an art that blossomed in the 80s and 90s in terms of practicing engineers, and has shrunk per capita since. This is largely driven by kids getting drawn into SWE over EE as people look at salaries and modern day billionaires, and figure it to be a no-brainer. Today EEs are a small fraction of the total engineering disciplines, despite being essential for the communication, power generation, distribution, consumer electronics, aerospace, automotive, and of course, the computer hardware industry on which the software one is built; amongst many other growing sectors like robotics, medical, and IoT.
If a legion of EEs is set to retire in the next 5-10 years, and all the would-be EEs are now designing web apps, surely at some point the supply/demand scales start to tip one way? Many of the above industries are abstracting everything to software platforms as time goes on, but no amount of money can make a SW dev design a power-train for a car, an antenna for a 5G device, or program an FPGA for silicon verification.
Bear in mind, though, that a lot of those EEs going into software are doing so not because they love software, but because they can't find EE jobs. Sure, many are no doubt doing it for the money, but if they really wanted to be programmers, they'd have majored in CS.
The context OP set up was "when grey beards retire."
The idea being that demand is low because the senior EEs stay put.
Mom and pop shops could use Excel and did successfully for years. Big banks even ran on gigabyte sized Excel sheets before the 2010s hype bubble (Source: direct experience working in fintech 2010-2015)
Anyone in tech believing the last 10-15 years was about anything but the US government juicing its economy to stay relevant, titillate, and ingratiate itself on now 30-40 something college grads is fooling themselves. All those students are now bought in to keeping the dollar alive.
Software has gotten so overthought and bloated, a "too many cooks in the kitchen" situation. Templating a git repo with appropriate dep files given mathematical constraints is not rocket science. The past needed to imagine software as out of this world to gain mindshare. Correct and stable electrical state is what really matters.
We are entering a new era of tearing down the cloud monolith for open ML libs that put machines to work, not people.
Behavioral economics has been running the US since before Reagan.
Alternatively, web is generally more valuable. You don’t buy a new washing machine because the current firmware sucks, but you will shop somewhere else if Newegg’s website is terrible. That relationship is generally true where people rarely test embedded software until after a purchase, but people tend to jump ship more frequently online.
The net result is that a lot of critical infrastructure and devices suck as much as possible while still getting the job done.
I’m building a house at the moment and I have been insisting that I am able to actually test all the built in appliances with power to see if the software is garbage.
I have found that most of the high end brands have a completely horrible user experience. Miele is the worst I’ve tried, and I found that as you go up the price range even inside that brand the experience gets worse.
The top end Miele induction cooktop takes over 5 seconds to boot up before you can even turn a hob on. The interface has a second of latency on presses. It took me probably 20 seconds to work out how to turn a hob on. I happened to be with my mother at the time and I asked her to try to work out how to turn a hob on and she had failed after 1 minute of trying and gave up and asked me.
It looks nice though.
The thing I find the most infuriating about it is that my attitude towards this stuff is just not understood by designers at all. They complain at my choices because the Miele appliances which they specified are “better quality”. And yet I feel like they can’t have actually tried to use them because as far as I can tell the quality is total garbage.
The mere idea of waiting for a kitchen appliance to "boot up" makes me angry. How did we normalize this madness? Telephones, TVs, car engine instruments, HVAC thermostats: why can't any of these be instant-on like in the 80s? "Apply power and it starts working" is a basic design principle.
Meh. Bootup time is irrelevant if the thing is always on. Many "dumb" microwaves won't let you use them until you set the clock after a power loss which creates an artificial "boot up time" of 5-120 seconds (depending on how complicated the procedure is; I remember microwaves that had absolutely obtuse clock-setting procedures).
Slightly off topic, but imagine an induction cooker with the original iPod control wheel as its power control.
We opted for a gas hob when we installed our kitchen. Mostly because I like the controllability when cooking. Obviously it's a nightmare for health and the environment but man it makes cooking easier.
Touch controls on induction cooktops/hobs are almost ubiquitous, and they have extremely poor usability in my experience. Liquids cause problems, and you need to be very careful not to move a pan or any utensils over the controls, or brush against them while concentrating on cooking. Apart from the other awful usability issues with the UI or icons.
I did a survey of all the cooktops/hobs I could find in my city, looking for something that would suit my elderly mum, and I didn't find a single unit that was usable. Fortunately a salesperson knew of a recently developed "cheap" model from a no-name brand which had individual knobs, so I ordered that. It arrived a month ago, I got it installed, and it has worked very well for my mum.
Usability is not something that most people know to look for when making purchases, so most whiteware ends up with a hideous UI. People will buy shit, then complain, but it doesn’t change their future purchasing habits (e.g. looking for features, especially useless features!)
I bought a middling brand microwave with knobs that has reasonable usability, despite providing all features. The iPhone is another possible counterexample, although I fucking hate many of their usability decisions (remove all multi-tasking shit from my iPad - I only ever initiate it by mistake and I always struggle to revert my mistake - fucking floating windows and split windows and fucking ... at top of the screen).
The ability to clean the cooker is the only advantage of touch controls. I don't know how well the original iPod touch wheel would hold up in that environment but from a usability point of view it was excellent.
how is it a nightmare?
if you aren't getting that energy from natural gas, you'd mostly get it from a CO2 producing power plant, with efficiency losses going from heat (steam) -> electric -> heat (cooktop)
Even gas cooktops without a pilot light are surprisingly inefficient, with under 40% of the energy ending up in your pan. (Which is why the air several feet above the pan is so hot.) On top of this you end up venting air your HVAC system just used a lot of energy to make pleasant outside, and/or breathing noxious fumes from incomplete combustion: carbon monoxide, NOx, formaldehyde, etc.
Induction stoves powered by natural gas power plants are more efficient than directly cooking with natural gas plus you can use clean solar/wind/nuclear/hydropower or oddballs like geothermal.
It’s even worse if you don’t size the burner to the pan. My wife always uses the largest burner with an 8 inch pan, probably 70% of the heat goes around and over it. Really made me want to switch to induction but I noticed the same thing that most induction cooktops have stupid, unreliable touch controls.
I think efficiency of a hob is pretty low on the priority list right? Certainly when framed in cost terms (gas being cheaper than electric). The total amounts are too small relative to hot water / home heating to make much difference. Especially if you go out of your way to find an induction cooker with a decent interface (there is at least one out there with knobs).
For most things which would need to be cooked on a hob for a long time we use an Instapot electric pressure cooker anyway (out of preference rather than efficiency concern).
It depends on what you're paying for fuel. Propane is shockingly expensive at $3/gallon right now plus delivery fees, but let's use $3 for 91,452 BTU, which works out to 11.2c/kWh before you consider efficiency.
At an optimistic 40% efficiency for a gas stovetop vs 90% for an induction cooktop, the breakeven is 25c/kWh, which is well above average US electricity prices. Worse, that 40% assumes properly sized cookware in contact with the burner, no pilot light, and ignores the cost of venting air outside.
As to total costs, at full blast a propane burner only costs around $1/hour, but some people do a lot of cooking.
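For anyone who wants to check that arithmetic, spelled out (rounding aside):

    # Rough arithmetic behind the numbers above.
    BTU_PER_KWH = 3412.14
    propane_kwh = 91_452 / BTU_PER_KWH                   # ~26.8 kWh of heat per gallon
    cents_per_kwh = 300 / propane_kwh                    # $3.00/gal -> ~11.2 c/kWh raw heat

    gas_eff, induction_eff = 0.40, 0.90
    per_useful_kwh = cents_per_kwh / gas_eff             # ~28 c per kWh that reaches the pan
    breakeven_electric = per_useful_kwh * induction_eff  # ~25 c/kWh

    print(round(cents_per_kwh, 1), round(breakeven_electric, 1))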
Same goes for car MMIs. Tesla is almost fine when it comes to the latency (still far behind an iPad e.g.) but other manufacturers are just atrocious in this respect
The industry will do just fine. In all my years assisting in the hiring process (I'm software, but due to my EE background I was often asked to help with interviewing EEs), I've never noticed a shortage of EE applicants. OTOH, we had a lot of trouble finding enough software people to hire.
The reality is that EE jobs are a small fraction of the software ones and supply is keeping up with demand, so there's no upward salary pressure.
> Yes, the embedded space pays terribly, and the employers don't seem great on the whole.
In Europe, C++ pay is in general ridiculously bad. I got some job ads this morning: senior job in real-time trading in C++ in Paris, multithreading and Linux knowledge, English first: 55-75k. Embedded senior C++ FPGA engineer in Paris: 45-65k. No bonus in either position. Thanks but no thanks.
Those job ads are both better than my current position. £40k for cross-platform C++ desktop app with both multi-core and distributed parallelism. PhD required. GPGPU experience preferred (notice that it's not CUDA experience because some users have AMD cards). Now, with two consecutive promotions, I could bump my salary up to £50k. Of course, to qualify for the second of those promotions, I need to receive personal commendations from three different professional organizations across at least two different countries.
This is true, trying to switch from FPGA's/RTL Design to something higher up the stack over the next few months for this reason. My employer does seem to have great difficulty hiring anyone with these skillsets but funnily enough, the salaries never seem to improve.
I wonder how much is just EEs looking at SWE resumes and going "why would I pay that much for this?! writing code isn't that hard" I definitely get that vibe from some of the local hw-eng companies.
And they may not be wrong, but.. sorry, that's supply and demand. If I have to go write stupid NodeJS stuff to get paid decently, I guess I'll have to go do that.
I worked at a place once where one of the EEs who wrote firmware told me that algorithms and data structures were pointless because in the end it's just bits in a linear address space in RAM.
The industry has basically screwed itself. It's pretty typical for companies to consider embedded/firmware as EE work that is done in the gaps of the hardware schedule. EEs generally make bad programmers which shouldn't be a surprise as their background is usually not in software development; I similarly shouldn't be hired to do EE work. Because of this the code bases tend to be abysmal in quality.
The salary for these positions tends to be tied to EE salaries which for some reason are quite low. So it's hard to attract good talent willing to deal with the extremely poor code quality and all of the other extra challenges this field has on top of normal software challenges.
Since few software developers are attracted to this niche there's not a lot in terms of libraries or frameworks either, at least not in comparison to most other software ecosystems. I've had a start-up idea for a while now to really close that gap and make embedded development far more sane in terms of feature development and such, but I worry nobody would even bother to use it.
I've been in the embedded space for years now and I've been considering bailing because the problems just aren't worth the pay.
> one of the EEs who wrote firmware told me that algorithms and data structures were pointless because in the end it's just bits in a linear address space in RAM.
This is, of course, wrong. However, I think I understand where this EE was coming from.
At the end of the day, once all is said and done, there's a minimal set of instructions necessary for a CPU to perform any task. One could add to that two more variables: minimum time and minimum resources (which is generally understood to be memory).
So, at least three optimization vectors: instructions, time and resources.
Today's bloated software, where everything is layers upon layers of object-oriented code, truly is pointless from the perspective of a CPU solving a problem along a stated combination of the three vectors listed above.
The way I think of this is: OO exists to make the programmer's life easier, not because it is necessary.
I believe this statement to be 100% correct. OO isn't a requirement for solving any computational problem at all.
Of course, this cannot be extended to algorithms. That part of the EE's claim is likely indefensible.
How about data structures?
Some, I'd say. Again, if the data structure exists only to make it easier for the programmer, one could argue it being unnecessary or, at the very least, perhaps not optimal from the perspective of the three optimization vectors.
It's nothing groundbreaking, although my idea alone wouldn't really help in the safety critical space.
If web development were like embedded development every single company would be building their own web server, browser, and protocol the two communicate over. It would take a phenomenal amount of time and the actual end product, the website, would be rushed out the door at the very tail end of this massive development effort. As the complexity of the website grows, the worse it gets. All of the features being sold to customers take a backseat to the foundational work that costs the company money either through initial development or ongoing maintenance. Plus there's very little in the way of transferable skills since everything tends to be bespoke from the ground up which poses a problem when hiring.
In this analogy that base layer is really just hardware support. This is starting to change with projects like mbed, zephyr, etc. There's still a lot to be desired here and these realistically only work in a subset of the embedded space.
My idea comes in after this. Keeping with the analogy, consider it Ruby on Rails or NodeJS for the embedded world. Certainly not appropriate for all things, but a lot of what I have worked on professionally would benefit from this.
> one of the EEs who wrote firmware told me that algorithms and data structures were pointless because in the end it's just bits in a linear address space in RAM.
At a previous job, the project lead (mechanical) assigned the embedded team (2 people) to write the firmware for 3 boards (multi-element heater control, motor controller and move orchestrator with a custom BLDC setup, multi-sensor temperature probes) in 2 weeks over Christmas, because the junior EE said "I can control a motor with Arduino in 30 minutes." My only guess as to why such a disconnect from reality was possible is that the EE had an MIT degree while I'm self-taught, and that we had always delivered our firmware on time and without bugs.
I mean, it's the same phenomenon I've seen even in webdev where a PM or UX person who has produced a whole series of mocks then hands it off to the "programmers" and demands a short schedule because... well... they did all the hard stuff, right? You're just making it "go."
People naturally see their own hard work and skills as primary. I know enough about HW Eng and EE to know that it's actually really hard. That said, it doesn't have the same kind of emergent complexity problems that software has. Not to say that HW eng doesn't have such problems, but they're a different kind.
If you see the product as "the board", then the stuff that runs on the board, that can end up just seeming ancillary.
Oh, no, this was super common. When the Arduino (and, soon afterwards, the Pi) were launched, for several years about 20% of my time was spent explaining to higher-ups why there's a very wide gap to cross between a junior's "I can control a motor with Arduino in 30 minutes" and "We can manufacture this, make a profit, and safely ship it to customers".
Don't get me wrong, the Arduino is one of the best things that ever happened to engineering education. Back in college I had to save money for months to buy an entry-level development kit. But it made the non-technical part of my job exponentially harder.
Ha. Try telling a customer that even though he's prototyped his machine with three arduinos (he used three because he couldn't figure out how to do multitasking with just a single one...) in a couple of weeks, it will be a $100k project to spin up a custom circuit board and firmware to do the same thing. And no, we can't reuse the code he already wrote.
Physical design and logic design talent is actually _super_ in demand right now, but you have to have real silicon experience, which FPGA work can help you get.
Google/Apple/Nvidia/Qualcomm/Broadcom and gang are having problems retaining talent right now.
I have an EE background but worked in webdev for many years. I got pretty bored with webdev, and had the opportunity to get into embedded Rust development, so I did. It's been really awesome; I've learnt so much, both in embedded and in hardware engineering.
But now I think I'll head back to web development for my next job - I think web is better as an employee or as a contractor. It seems to me there is more freedom in webdev; often it's possible to work from home or abroad... Embedded, on the other hand, is encumbered with equipment, oscilloscopes, devboards, protocol analyzers, you name it, and often requires onsite hours.
And then there is the pay and job availability... I recall interviewing for a role that involved designing a full-blown operating system for use in the auto-industry. The role was paying 40-50K euro a year in Germany, which is insanely low. React developers earn substantially more, but are required to know substantially less.
The only reason (I can imagine) someone would choose embedded is probably because it's very rewarding and mentally stimulating. It's awesome creating physical devices. It's awesome interfacing with the real world. It's awesome deep-diving into bootloaders and memory allocation and exercising a fundamental understanding of computing.
Fully agree. Rust statically linking its stdlib made its binaries too large for many embedded boards though, which is one reason I could not switch to it.
Embedded is hard to get remote positions in because of the hardware involved, which sucks. On the positive side, the job can be more secure sometimes, but then the low pay truly ruins everything; overall it remains a negative.
I mostly do backend and devops at work, and C++ is quite present, not as the main language, but for writing libraries to be plugged into Java, .NET and Node frameworks.
> You may also look into Kernel Programming for a lucrative systems programming career.
This is the road I have taken since I started working professionally, but I have yet to find a lucrative job. I know that I am paid more than microcontroller devs, but less than web devs. The market for kernel developers is not that big either.
I've been in both web and embedded for the last 20 years, and to me web dev "done right" is just as complicated as embedded, if not more, and very similar. In both cases you have a distributed system (every action you take, system-wise, is asynchronous and very uncertain). Debugging is a pain in both cases, because you have only limited access to the system under test (especially in the field), and things like minification / optimizing compilers make it hard to track down bugs.
Embedded has the advantage that you can usually trust your peripherals more (they’re not a user that randomly presses CTRL-R), there is less framework and thirdparty stuff in your way, and the timing constraints are usually better understood. Webdev also suffers from a ton of UX and UI (animations, wizards, complicated workflows, error handling that needs to be error handled that needs to be error handled), which often results in very complex state machines.
In both cases, observability is key, especially for debugging purposes. I use the same patterns in both cases: a lot of state machines and event driven design, because I get “debugging” for free (I just need to log state + events and I can reproduce any scenario).
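A minimal sketch of what I mean by getting "debugging for free" (all names made up): the state machine only ever changes state through one function, that function logs (state, event) pairs, and any field scenario can then be replayed offline by feeding the same event list back in.

    # Made-up sketch of the "log state + events, replay later" pattern.
    TRANSITIONS = {
        ("idle", "start_pressed"): "heating",
        ("heating", "temp_reached"): "holding",
        ("heating", "stop_pressed"): "idle",
        ("holding", "stop_pressed"): "idle",
    }

    class Machine:
        def __init__(self, log):
            self.state = "idle"
            self.log = log                         # anything with append(): list, file, UART...

        def handle(self, event):
            self.log.append((self.state, event))   # one log line buys full replay
            self.state = TRANSITIONS.get((self.state, event), self.state)

    def replay(events):
        m = Machine(log=[])
        for e in events:
            m.handle(e)
        return m.state, m.log

    # Reproduce a bug report by replaying the exact event sequence from the field log.
    print(replay(["start_pressed", "temp_reached", "stop_pressed"]))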
The big advantage of web, and one that I always have to adjust to when I come back from a period of time in embedded, is that you can YOLO a lot. YOLO to prod, you can always easily revert. YOLO the UI, because you can trust the user to refresh their page or workaround it (your hardware peripheral won’t). YOLO everything because you’ll never really brick stuff. YOLO timing because you usually don’t have hard or even squishy-hard realtime requirements. YOLO behaviour because you can have realtime feedback on how your system is doing, and pushing a new version is only minutes away.
But “web dev” done right, and I really like having something fast, robust, repeatable and observable, is quite the challenge too.
I realize I mostly focused on the frontend side here, but you can easily see how backend dev is highly complex too (but that often falls under system programming too).
Lots of frameworks; in fact most of the runtime environment is not under your control at all (cloud services, for example). Complicated deployment and distributed patterns, often requiring many services to collaborate for a single piece of functionality (DB, monitoring, cache, load balancing, the backend itself, and storage, in just the simpler cases!). And none of this is something you can just plug your debugger into and hack away at. Very similar to embedded in how I approach it.
Deployment is similar too, in that you will often have a builder system that creates artifacts that then get deployed asynchronously, resulting in heterogeneous environments at least for a while, with a need for proper API boundary design.
Seeing the parallels between both worlds allowed me to use CICD, blue/green, feature flags, data pipelines to the cloud, UI patterns from the then nascent javascript framework explosion back in the late aughts, when that stuff was almost unheard of in embedded environments. I scripted my jtag environment using rhino (javascript on the server, back before node came out) to collect and hot reload pieces of code, while being controlled in the browser. I made a firmware app store for midi controllers I was building.
Embedded UIs also highly benefit from knowing patterns from web frontend, because they are highly event based too, and really benefit from attention to detail (say, animations, error handling, quick responsiveness). At any point the user interacts with the device, through a button, a touchscreen, a sensor, UI feedback should be immediate and obvious (even if it’s just a LED turning on). Good web applications are absolutely amazing in how they achieve that (through CSS, through JS, with nice layout / graphical design patterns).
It’s good to know this, I think I take for granted the experience I have in web dev. It’s just intimidating to be at the bottom of a large climb in a new discipline.
I did Linux kernel work for a decade at my old company. Left due to low pay.
Also worried about my employability. Not much call for C programmers in 2022. You’ll always fear losing your job.
I love low level though, I do embedded projects for fun! I can probably sling back-end Python for 1.5x the salary. I wish embedded paid better, but it doesn't, and therefore I won't help alleviate this "shortage".
If you are ever looking for C opportunities, my team would probably like to be aware of you when the hiring freeze is over. We work on next-generation volatile and non-volatile storage projects including an open-source storage engine.
Not many. I did it; jobs are scarce. Most of the time you add support for a new CPU version to the kernel or add a few device drivers. The industry does not need a lot of those engineers, and in my experience they are not compensated that well either. These days many kernel programmers work for big companies.
I remember Apple having a lot of related listings so I'd assume companies that are somehow involved in OS development (Microsoft, Google, maybe RedHat/IBM and Intel).
A significant portion of kernel code is written by FAANG, for example. There are other companies that also pay reasonably well. You can check some statistics on contributions to the Linux kernel here: https://lwn.net/Articles/909625/
Defense industry has a few such jobs, working a lot with RTOS's, network devices, sometimes even embedded for signal processing/control systems, etc... The big defense contractors probably pay better than working directly for the govt depending on where you live.
Isn't this a sign of a problem, where important domains with hard problems pay little, while some dubious applications are throwing money at CSS plumbers?
There's a strike happening here in Ontario schools by janitors, education assistants and early childhood educators, because they want more than a 2% raise on their $40-$50k/year jobs ($30k USD, and look at inflation #s...). The government is going to use a special "shouldn't be used" clause in the Canadian Charter of Rights and Freedoms to force a contract on them, ban the strike, and forbid collective bargaining despite it being a charter right. These are people who clean poop, shape young minds, keep critical systems running, and so on.
All of this to say: difficulty and importance of a job seems to have almost nothing to do with either the pay one gets, or the respect one gets.
No, it's always been the case. Just because something is difficult, doesn't mean it pays well. Otherwise, teachers and mathematicians would all be millionaires.
I feel almost exactly the same way as you. I've flitted around the research/applied research boundary for ML for the last decade+, so I write plenty of Python. I enjoy the way Python gets out of my way so I can focus on interesting research problems. But the actual act of writing code in C++ is so much more fun, once you get good enough at it that the footguns don't trip you up.
The embedded AI space is a pretty good place to make money writing C++. I was in autonomous vehicles for a bit. It didn't really interrupt my post-Google compensation trajectory, and I got to write performance- and safety-critical C++.
My local bus/transit agency was hiring an embedded programmer a couple of years ago and while I thought it would be fun to do embedded stuff and get to work on busses/trains (!) the pay was like half my web dev salary. (Granted there is a pension but it's not that good)
If the government did its job and we had sound money, and taxation were explicit instead of this wacky adjustable-and-unpredictable-devaluation that is inflation, there would be no need for cryptocurrency.
The point of money is to be spent, not to hold it. You can't have an asset that's both good to hold over the short and long term. (I forget where this is stated.)
That's because the point of an economic system is to trick other people into making food for you, and holding money instead of trading it obviously isn't going to lead to that.
> The point of money is to be spent, not to hold it.
Why? Why prioritize spending now rather than later? If I can't defer consumption, I will always need to work, and I can't retire. That would be financial oppression.
> You can't have an asset that's both good to hold over the short and long term.
I am abnormally curious why this is the case.
> That's because the point of an economic system is to trick other people into making food for you
I'd rather they make food for me when I'm old, instead of when I'm young and I can make it for myself. How is this an argument against saving?
> holding money instead of trading it obviously isn't going to lead to that.
While it's true that if everyone saved in the short term, we'd see persistent recessions, it's bound to end, as people start to want to spend their earned money.
In "Die with Zero", an argument is made to allocate and spend everything you've made, because this life is all you've got to do so. I agree with this book.
Even in extreme deflation, people buy things they need. For example, technology prices have been in exponential free fall for decades, yet today the world's largest companies have a lot to do with selling computers, phones, and/or software.
The only reason for government currency inflation is balancing the (wasteful) budget, after the government spends beyond its means. This allows soft-defaults (government paying bond coupons in a diminishing currency) instead of hard-defaults (government failing to pay bond coupons). But both kinds of defaults should be seen as bad, by investors.
To get an idea of the scale of the misallocation, compare the tax revenue to GDP with government spending to GDP. The US government pays for 44% of the yearly domestic product, while only taxing 9.9%. This amounts to a LARGE benefit to those printing money and spending it before price inflation hits.
> Why? Why prioritize spending now rather than later? If I can't defer consumption, I will always need to work, and I can't retire. That would be financial oppression.
I should've said spent or invested. You can save by turning money into I bonds or stocks for retirement, and that works because it funds something productive (stocks/corp bonds) or the government would like you to defer consumption due to inflation (I bonds).
But remember money (vaguely) represents stored up labor. In nature you can't retire because you can't save up labor; saving money isn't just like a squirrel storing nuts for later, it's also like if the squirrel could put off gathering them at all.
Long term investments (stocks) are better for retirement because they're riskier.
> I'd rather they make food for me when I'm old, instead of when I'm young and I can make it for myself. How is this an argument against saving?
By "other people" I meant farmers, so you're probably not doing that work yourself. There will probably be farms because other people are continually buying enough from them to keep them producing, but if nobody buys something for long enough it won't get cheaper, the market will cease to exist because nobody will produce it anymore. Saving money/retiring in this way is kind of parasitic.
> Even in extreme deflation, people buy things they need.
There was a Great Depression where people stopped being able to do that, you know. Deflation really upsets people. Deflation in Germany also got the Nazis elected.
It's not good to think about "the government spending beyond its means" as if it was a household. The government's the one that invented the money in the first place. A fixed money supply doesn't make sense on a planet with an increasing population that all want to use your money because of how awesome the US financial empire is.
And not only did the US fail to get inflation despite best efforts from ~1980-2020, other countries are seeing inflation now without extra deficits.
You are making a lot of interesting arguments. Thanks!
> In nature you can't retire because you can't save up labor
That is true. What I can do, I guess, is ensure that I will have what I want in the future. If I don't know what I want, then I want to buy a small piece of everything (index funds).
> are better for retirement because they're riskier.
From the very article you linked: "Having no earnings and paying no coupons, rents or dividends, but instead representing stake in an entirely new monetary system of questionable potential, cryptocurrencies are undoubtedly the highest risk investment known to man."
Of course, here it seems Wikipedia is a bit opinionated, and gambling would be an even higher risk investment. But at that point I'm sure the risk-return relationship would break down.
The Kelly criterion is the optimal way to size how much risk to take over time. If there's even the slightest chance that losing a bet/investment will leave you with zero wealth, then you may not place all your wealth on that bet.
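For a concrete version of that: for a simple bet paying b-to-1 with win probability p, the Kelly fraction is f* = p - (1 - p)/b, and it only ever says "stake everything" when losing is literally impossible.

    # Kelly fraction for a simple bet paying b-to-1 with win probability p.
    def kelly_fraction(p, b):
        return p - (1 - p) / b

    print(kelly_fraction(0.55, 1.0))   # 55% coin flip at even odds -> stake 10% of bankroll
    print(kelly_fraction(1.00, 1.0))   # only a guaranteed win justifies betting it all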
> Saving money/retiring in this way is kind of parasitic.
As some people save, others spend. As I mentioned with "Die with Zero", I will spend all my money eventually. If people do not synchronize their spending with the rest of the economy, the effects will average out, and one individual does not matter. Unfortunately, people tend to buy high and sell low, going on trends. And I've noticed both national and cryptocurrencies go through this - albeit with the interest rate mechanism, national currencies don't drop 80-90% from time to time.
> A fixed money supply doesn't make sense on a planet with an increasing population that all want to use your money because of how awesome the US financial empire is.
As the population growth slows, or capital reaches diminishing returns as some other finite resource is depleted, it is only responsible to think of the economy as a household, and the money as a reflection of real, existing goods and services, rather than future ones, because future ones might not exist, and debt will become less "productive".
I distinguish between "productivity" of a debt and its yield. Taking on debt means signing up to pay future interest. But the resources you receive in exchange might make it worth paying interest, or might not. This is what I call "productivity" for lack of a better vocabulary. And interest rates or yields are orthogonal to this.
> The government's the one that invented the money in the first place.
The government merely partly captured the monetary velocity multiplier effect caused by fractional reserve.
Fractional reserve was invented by private banks, which create most of the money supply. In spite of their enormous power, and the enormous profits in fees and interest as a result of money creation, banks still go bankrupt by abusing their power, requiring bail-outs (with public money) or bail-ins (with depositors' money).
One such bail-out was immortalized in Bitcoin's first block ("The Times 03/Jan/2009 Chancellor on brink of second bailout for banks").
> And not only did the US fail to get inflation despite best efforts from ~1980-2020
In 1980-2020, the CPI went from 82.4 to 258.8, or a ~3.14-fold increase, or a 3.14^(1/40) ~= 2.9% compounded average growth rate. That is not failure to get inflation, it is overinflating by 45% compared to the 2% objective.
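Spelling out that arithmetic:

    # CPI 82.4 (1980) -> 258.8 (2020): implied average inflation vs the 2% target.
    actual = (258.8 / 82.4) ** (1 / 40) - 1      # ~0.029, i.e. ~2.9% per year compounded
    print(round(actual * 100, 1))                # 2.9
    print(round((actual / 0.02 - 1) * 100))      # ~45 (% above the 2% target rate)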
What we are seeing now (>10% inflation) is the result of irresponsible pandemic government budgets being mopped-up by the central banks.
By the way, PPP cost $170,000 to $257,000 per retained job-year. I bet employees on payroll during the pandemic were not paid that much.