FreeCAD has improved immensely over the past 2-3 years in terms of stability and features. A decade ago it was not uncommon for me to experience crashes, whether from it randomly losing its GLX context or from the constraint solver segfaulting. Now it rarely crashes for me at all, though I still run into a lot of constraint solver errors that are a pain to deal with.
However, despite the recent improvements, I still cannot recommend it for new users compared to commercial solutions, for the sole reason of the Topological Naming Problem: https://wiki.freecad.org/Topological_naming_problem
This has probably been the #1 problem I've had with FreeCAD since I started using it. And though I've learned how to design parts to avoid it in most situations, it's a huge hurdle for newcomers to understand and work around. Luckily there's a fork that fixes a significant number of the issues: https://github.com/realthunder/FreeCAD_assembly3 and https://github.com/realthunder/FreeCAD
I've also heard of Ondsel, which is supposedly a much more user friendly version of FreeCAD that also includes some fixes to the issue: https://ondsel.com/
- Realthunder’s branch contains unique, forked changes that will cause file incompatibility with core FreeCAD if you use them unknowingly
- Core FreeCAD is ahead in many, many ways and improving quickly
- The Realthunder branch is likely a dead end
The TNP mitigation from the Realthunder branch is very close to being enabled in 0.22, and the feature freeze for 1.0 is weeks away. 1.0 is currently targeted for early August.
My feeling is that it would be much better to learn what the topological naming problem is, and how it can be worked around, and then use Ondsel 2024.2 or a 0.22 weekly release until the TNP mitigation is mainstream. (It’s likely to be in 0.22 very soon indeed)
My thinking is straightforward: there are and will be more tutorials and more support for this route, and learning about how to mitigate TNP is not wasted info: it will teach you useful skills for making generally robust designs, TNP or not.
Among others, Mango Jelly Solutions has a recent video about TNP, and Brodie Fairhall’s video on the topic is worth seeing.
As mentioned elsewhere [0], essentially all of Ondsel’s user-friendliness is actually core 0.22 (development release) FreeCAD plus different addon choices (like the tab bar).
Which is not to say that Ondsel 2024.2 is a bad way to experience those things, or that the Ondsel Lens (cloud collaboration suite) is not interesting, because it surely is.
It’s just to say that it is only much more user-friendly if you’re not already using the 0.22 dev releases (which are considered generally as stable as 0.21 and are in wide use).
(I upvoted you for the rest: I too am waiting for the TNP mitigations before I recommend it to less technically-focussed people)
Did you ignore this part that came a bit before that?
> I decided to map the file into the address space instead of reading all of it. By doing this, we can just pretend that the entire file is already in memory and let the poor OS deal with fitting a 40 GB blob into virtual memory.
Why take a vaguely rhetorical statement and then complain it contradicts a more concretely accurate statement before it?
> Did you ignore this part that came a bit before that?
No, just the opposite - I read that and that's exactly why I'm saying there was no need to read the whole thing into memory before starting to execute it.
> Why take a vaguely rhetorical statement and then complain it contradicts a more concretely accurate statement before it?
Because it's a contradiction in what they've written?
The article doesn't state the entire program has to be read into memory before it starts executing. Instead the article states that during execution, for the highest inputs, the entire program needs to pass through memory for execution to finish.
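The mapping approach described in the quote can be sketched in Python. This is a minimal illustration, not the article's code: a small temp file stands in for the 40 GB blob, and the point is that no up-front `read()` of the whole file is needed before work starts.

```python
import mmap
import os
import tempfile

# Create a small file to stand in for the large blob.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\x00" * 4096)
    path = f.name

with open(path, "rb") as f:
    # length=0 maps the entire file into the address space.
    # Nothing is read yet; the OS faults pages in lazily as
    # bytes are actually touched.
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mem:
        # "Execution" can begin immediately; only the pages we
        # dereference are pulled from disk.
        assert mem[0] == 0
        assert len(mem) == 4096

os.unlink(path)
```

With a genuinely huge file, pages that are never touched never enter memory, and pages that have been touched can be evicted again under memory pressure, which is why the whole blob only has to pass through memory if execution actually visits all of it.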
You can only fit 1-2 graphics cards in a “normal” ATX case (each card takes 2-3 “slots”). If you want 4 cards on one machine, you need a bigger/more expensive motherboard, case, PSU, etc. I haven’t personally seen anyone put 6 cards in a workstation.
In a water-cooled config the cards only take 1 slot. I’ve got 2 3090s and am buying another two shortly. I preemptively upgraded the power to 220 V, found a 2 kW PSU, and installed a dedicated mini split. I’m also undervolting the cards to keep power and heat down, because even 2000 W is not enough to run 4 cards and a server-grade CPU without tripping the breaker. When you start accumulating GPUs you also run into all kinds of thermal and power problems for the room, too.
I was fortunate enough to scoop up a bunch of Gigabyte RTX 3090 Turbos. Cheap used eight-slot SuperMicro (or whatever), a cabling kit, four 3090s, boot.
Sincere question: Is installing and running a mini split actually cheaper than racking them in a colo, or paying for time on one of the GPU cloud providers?
Regardless, I can understand the hobby value of running that kind of rig at home.
I personally haven’t done the calculation. I have rented colo space before, and they are usually quite stingy on power. The other issue is, there’s a certain element to having GPUs around 24/7/365 to play with that I feel is fundamentally different from running on a cloud provider. You’re not stressing out about every hour it’s running. I think in the long run (2yr+) it will be cheaper, and then you can swap in the latest and greatest GPU without any additional infrastructure cost.
You have to pass the context between GPUs for large models that don't fit in VRAM. Often ends up slower. Also, tooling around AMD GPUs is still poor in comparison.
The selling point is described quite well under "The Pitch" section of the README.
Unlike most other toolkits, this one doesn't actually handle its own rendering/contexts. It outputs the low-level vertex buffers and textures that you can pull into your own graphics pipeline. Which means you can integrate it into any sort of 3d application or backend that you want.
Or worse, a small player developing something and then finding out after the fact that a junk patent prevents them from even selling the thing to begin with.
See, this to me is the stupidest part of the whole system. It's essentially saying that the 'first mover's advantage' of an idea is something that can be protected for the life of a patent.
You can be tripped up by patents you didn't even know existed, in unrelated fields, owned by people who didn't even work with the patent. So stupid.
A significant portion of the examples in that link follow the form:
This person is so holy that they drink water.
The very fact that it was a sign of unusual holiness to drink water seems to undermine the point. It's like saying "in the 2020s people mostly didn't use cell phones. Look at this passage about a group called the Amish, see what I mean?".
> The Information reported some of the gruesome incidents that have occurred at the Gigafactory in Austin, Texas, where one out of every 21 workers were reportedly hurt in 2022. The data is derived from the required injury reports Tesla submits to OSHA.
Can anyone familiar with OSHA/factory safety laws explain to me how a facility that reports 1/21 workers being injured in a single year isn't immediately shut down and fully investigated?
Is this a manipulation of reported data? Or does OSHA really have no teeth regarding this situation?
Because if you invert it, it's "the average employee suffers some injury every 21 years." That injury might be a papercut or a twisted ankle or a missing arm.
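A back-of-the-envelope check of that inversion, and a conversion to the per-100-workers form OSHA rates are often quoted in. The 1-in-21 figure is from the article; treating the rate as uniform across workers is my simplification.

```python
# One reported injury per 21 workers per year.
injured_fraction = 1 / 21

# Inverted: on average, one recordable injury per worker
# every 21 years (assuming a uniform rate).
years_between_injuries = 1 / injured_fraction

# The same rate expressed per 100 workers per year.
per_100_workers = injured_fraction * 100

print(round(years_between_injuries, 1))  # 21.0
print(round(per_100_workers, 2))         # 4.76
```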
OSHA tracks this and is a much more reliable source than a random article.
It actually highlights probably the major problem. The article you linked specifies that a _single_ incident, accident or complaint can result in _multiple_ violations linked to it.
9 accidents, 4 incidents, 7 complaints. For 20 cases, 48 violations were issued. All in California, which may or may not have a subjective impact on how these cases are handled.
I tend to agree that this stuff looks like small potatoes and they're getting extra scrutiny based on what I saw back when I worked at a (non-Tesla) factory.
How does this compare to other car manufacturers in the US? What about China and Mexico? Strange that The Verge and the Information don't provide those statistics...
Yes this is one of those situations where responsible employers report more minor injuries, and then it is used against them by people who don't know better.
Guess what... the meatpacking plant in Iowa with 98% undocumented immigrants is not reporting all their injuries because they are threatening their workers with deportation.
For those who think this is an exaggeration or a joke, it's actually not. I've definitely worked for large companies doing electrical work where it's standard to report every single thing that requires a Band-Aid. Filling out the paperwork is so obnoxious that I'd literally claim I had electrical tape wrapped around my fingers as a preventative measure. Some companies just encourage it more. A strong argument could be made that I was in the wrong, because a staph infection in one of those cuts could have led to me taking time off and trying to explain why I didn't report the small cut.
I can understand why though. If someone comes away with just a cut it's still nice to know why because it could have been from something much more serious where the person just got lucky. It's not as if they actually care about papercuts. They care if it was a near miss or not.
There are multiple reported injury rates that measure different things, just as there are multiple reported unemployment rates.
This particular metric would indeed include a paper cut. A better metric is the number of injuries that required time off from work or reassignment of duties. That’s something that Tesla reports, but the article doesn’t include it.
I’m not the person you asked but my 2c (since I agree with their point on Vulkan)
Very few video games are made with Vulkan. DirectX is the primary API.
Android is the only place where Vulkan really has an interesting market share.
For a beginner, it has an incredibly steep learning curve vs DirectX as well. So given the low usage and high friction to pick it up, you have a really poor ROI.
DirectX and Metal are much more conducive to getting results quickly and efficiently.
> For a beginner, it has an incredibly steep learning curve vs DirectX as well.
That's only because you refer to DirectX as a whole, which includes plenty of older APIs. If you want to start with those you can just as well start with OpenGL. If you want to jump straight into D3D 12, then that's not much different from Vulkan.
The topic was why not Vulkan for a beginner, and D3D12 is a lot less work than Vulkan to get the same results.
And I’d still recommend D3D11 over OpenGL for a beginner unless they really need multi platform. There are better resources, and less setup work up front.
Honestly though, if I was recommending any graphics api to start, it would be Metal. It has the best mix of ease of use to modern low overhead api.
>The topic was why not Vulkan for a beginner, and D3D12 is a lot less work than Vulkan to get the same results.
It really isn't. It's OpenGL vs D3D11 all over again. They have a similar amount of complexity; comparing them for the sake of learning isn't too productive.
With that said: Vulkan has a few but very excellent tutorials, while D3D12 is a lot more "read the docs or consult your local graphics guru". I'd say for self-learners Vulkan is simpler to pick up just because of resources.
>Honestly though, if I was recommending any graphics api to start, it would be Metal.
Yeah, I've heard Metal is nice. Shame it isn't really an option for me.
I'm not the parent commenter, but I'd like to explain their logic, which has at least a modicum of reason to it.
About 99% of desktop video games (by far the largest clients of graphics APIs) target Windows, and therefore target either Direct3D 11 or Direct3D 12. This includes free-to-use game engines including CryEngine, Unity, Unreal, and Ren'Py. Almost all the famous, proprietary, high-performance game engines (id Tech, Frostbite, Slipspace, REDEngine, Source) target D3D exclusively. Vulkan is clearly a second-class citizen on Windows. Some engines target OpenGL, and they tend to be used in (hurriedly dashed-out) console ports, but in almost all cases they exhibit worse performance than their D3D competitors.
Vulkan is completely absent from MacOS and iOS, where Apple has pushed its own API, Metal. OpenGL on MacOS is deprecated and is stuck on 4.1, missing all the advancements in 4.6, which include mesh shader support.
Many Android games are likely still running GLES. Vulkan is pretty hard to get started with, because things that are implicitly handled by the OpenGL global state machine now have to be explicitly handled by the developer, and chances are the developers of the millions of throw-away microtransaction-laden game apps on Android aren't writing their own rendering engines in Vulkan.
Therefore, despite all the positives of Vulkan—open-source specification, cross-platform support, SPIR-V shader target allowing shaders to be written in any language (HLSL, GLSL, other esoteric languages that compile to SPIR-V), an extension mechanism allowing fast iteration and updates—it has a fairly uphill battle.
EDIT: I was incorrect, id Tech supports Vulkan exclusively. But it is a minority in a sea of D3D-first engines.
While I was incorrect about id Tech (and have edited my comment), I never made the point that any of the other engines didn't target Vulkan.
Where else is my comment untrue? Many engines and rendering back-ends have only recently completed a Vulkan-based implementation. I am confident in my assessment that the large majority of existing implementations are still running OpenGL and/or Direct3D, if on Windows.
I am just doing game dev on the side, but I think nowadays the graphics abstractions are fairly similar in how they work (the modern abstractions, i.e. Metal, D3D12, Vulkan). Of course ideally you choose the graphics abstraction that is "native" to the platform, but Vulkan seems to be supported very well on Windows (many AAA games use it and it works great; many games even run better with the Vulkan abstraction than with their D3D12 counterpart). I use Vulkan so my graphics can run on Windows and Linux (which is why I chose Vulkan instead of D3D12).
You are correct that idTech targets Vulkan (and they have some great GDC talks to boot)
They are however very much the minority.
I am skeptical of your claim about Vulkan abstraction layers running better than DX12. If there is a performance difference, it's likely elsewhere in the stack and only tangentially related.
I haven't done this stuff for quite a while, so my memory might be foggy, but the main advantage of Vulkan was that you control all the CPU locking rather than the API doing it. This allows you to do things like prepare commands on one thread and submit them on another, etc.
But that would be negated if you're using an abstraction layer.
EDIT: Here's actually a better read of the topological naming issue, what's being done about it, and why it's difficult to fix: https://ondsel.com/blog/freecad-topological-naming/