Maybe Apple has embraced one of these "Agile" methodologies where you basically eliminate your QA group on the promise that developer-created unit and integration tests can cover the quality gap.
QA and dev approach software with different mindsets, and I've noticed in Agile projects where QA is mostly or entirely missing, there is a skill gap.
Edit:
People - I'm not trying to be flippant. I really have seen a decrease in quality in the transition to Agile. We use Scrum or Kanban where I work, and while the overall approach to estimation and the smaller scheduling increments are better, in many projects where I work there is no _separate_ QA group. A few projects do retain one (though re-branded as "Performance, Stability, Reliability" - PSR), and it does help, but generally PSR doesn't cover everything our old QA groups used to cover.
In any case, I have no insight into what is happening inside Apple. I'm really just suggesting that, given the prevalence of Agile methodologies these days, perhaps a non-ideal transition to Agile is a contributing factor to their quality issues.
I agree with you: the lack of QA skills on a team is a dangerous thing. But I disagree with the notion that agile methodologies are the problem; they're more of an excuse from the same people who would use any other excuse to cut corners[0]. A QA column (or set of columns) fits perfectly with Kanban.
Years ago I started seeing companies combining QA and product teams[1]. It required product people with a bit of technical background, but, when done right, it worked incredibly well. Nowadays not only have I stopped seeing that trend, but most product teams I see are completely ignorant of technical matters. QA teams seem to be more or less missing, or seriously understaffed.
---
[0] Not to speak of the need to be "agile" without changing a single thing in the process. I can't count the number of times I've seen teams using what is, essentially, a waterfall process (including long development cycles with no contact with the customer at all) describe themselves as agile.
[1] So the same team that defines the new features also creates the tests, both automated and manual, and checks the results.
I am sure we have both spent time on Agile projects. Let me ask you: when stories end up in the QA column, do they just test the story, or everything the story could possibly touch? Be honest.
Because that right there is the problem when you are dealing with systems with lots of interconnected components. There just isn't enough time for the "randomly try to break anything" phase.
Yes, they usually only test the feature, and you're right: when you're dealing with plenty of interconnected components, it's more difficult to do complete testing and there's a higher chance of something slipping by unnoticed.
However, on every build the unit, integration, and acceptance tests should run and should alert if something is not right[0]. Automated testing can't cover all the use cases, but there's no reason it couldn't catch most of the issues we've seen with OSX lately (a good example of a system with plenty of interconnected components). On top of that, the good QA people I know always run some random manual tests not related to the story itself on every feature release, just to make sure everything is still as expected.
The time is there if it, along with the expectations, is managed correctly - only in smaller chunks instead of a single big block of time at the end.
Even so, there's always the chance of a completely unforeseen side effect (a test can prove that there's an error, but can't prove that there are no errors); all of this is to try to minimize that chance. In general, I think these issues are more related to specific systems (lots of interconnected components) or environments (lack of proper testing) than to specific methodologies (back in the pre-agile days, programs still had bugs).
[0] Like, for example, an edge case of a method being fixed. That can have knock-on effects if other parts of the system were relying on that method failing.
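The knock-on effect in that footnote can be sketched in a few lines (all names here are hypothetical, invented purely for illustration): a caller relies on a method raising for an edge case, and once that edge case is "fixed" to return a value instead, a unit test on the method alone still passes while an integration-style check of the whole path reveals the regression.

```python
# Hypothetical sketch of footnote [0]: a caller that relied on a
# method failing for an edge case breaks once that edge case is "fixed".

def parse_port(value: str) -> int:
    """Originally raised ValueError for the empty string; after the
    'fix', empty input now gets a default port instead."""
    if value == "":
        return 8080  # the bug fix: no longer raises
    return int(value)

def load_config(raw_port: str) -> dict:
    """Downstream code that relied on the old failure to fall back
    to a 'disabled' configuration."""
    try:
        return {"enabled": True, "port": parse_port(raw_port)}
    except ValueError:
        return {"enabled": False, "port": None}

# A unit test on parse_port alone passes after the fix...
assert parse_port("") == 8080

# ...but only an integration-style test of the whole path catches the
# knock-on effect: services that used to stay disabled now start up.
config = load_config("")
print(config)  # {'enabled': True, 'port': 8080} - previously disabled
```

This is the kind of regression that per-story testing of just the fixed method misses, and that a broader automated suite (or a QA pass over adjacent features) is there to catch.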
This seems like a legitimate point. OS development is typically too large to run through a single monolithic team.
It is possible that the team in this area is applying a methodology with different QA best practices. A poorly applied Agile methodology could be at fault here.
Although it's impossible to know without more information.
I think it has more to do with the complete lack of testing infrastructure and tooling that Apple has produced over the years, despite the explosion of APIs and developers on their platforms.
Even iOS, which rumors suggest is more of a focus than macOS, still has a long way to go to catch up to other platforms. If they required these tools internally, there's no way they would be this far behind.
Can you elaborate on what's missing? I do a lot of graphics work, and their macOS debugging tools for OpenGL were terrible, but the ones for Metal (which came out first on iOS) are pretty cool! I also use Instruments a lot and some of the other more esoteric tools supplied with Xcode. Not to mention the static analyzer, Address Sanitizer, and lately the Undefined Behavior Sanitizer. But I'd love to have even more tools! What else is out there that they don't supply?
I am talking specifically about unit testing, UI testing, and integration testing, and then supporting those tools in some kind of CI system.
And to be clear, they provide these things in the form of XCTest, XCUITest, and Xcode bots, but they aren't good enough. Tons of resources have been spent by the community and major corporations trying to fill in the gaps and offer useful alternatives, but the closed nature of Xcode and its build environment makes this difficult.
> Maybe Apple has embraced one of these "Agile" methodologies where you basically eliminate your QA group in the promise that developer created unit and integration tests can cover the quality gap.
WTF are you talking about? There is no Agile methodology about getting rid of QA. Sounds like you have a problem with Agile and are looking for some reason to drag it in. You might as well blame the keyboards they use.
Not agile methodologies. The “omg we must do a release with splashy features once a year” mindset.
They released exactly one version (Mountain Lion IIRC Edit: Snow Leopard) that was just bug fixes and stability improvements. Everything else was "oh look, another shiny toy", where the toys are increasingly inane, iOS-like, and unfinished (hell, they can't even fix Photos).