Sorry, I wasn't clear - from what I know, on the consoles only a few Unreal games, on one specific console, made it to 60fps. I could be wrong and have old information, and yes, this mostly applies to older Unreal engines. From what I've heard (over and over from people working with it), it was the script language and Kismet (maybe I'm confusing terms here) that were the limiting factor.
Now I'm a generalist who ended up as a tools engineer, so I might be talking complete nonsense, but whenever I've talked to anyone from my previous studio or others, that was the overall sentiment - Unreal can't give us 60fps unless Epic gets directly involved and a custom version of the engine is built for the game.
(Obviously you can always do 60fps, but it may not be up to the visual quality of other 60fps games.)
Looking at UE4 and Blueprints, it seems they've understood their weak point and worked aggressively on fixing it. So it could be that UE4 delivers 60fps just fine, matching the visual quality of other products at the same fps. (Aaah, now someone will add - what about resolution? Heh, I dunno - I think upscaling is just fine, but then you have all these crazy journalists digging it up... frame rate is much more important though.)
> But with the advent of 8-16+ GB of ram in consumer computers can we really say 2-3GB for windows rendering is heavy?
This attitude is why we have phones with 1GB of memory and 1GHz+ CPUs lagging when just trying to read email.
Yes, there is more memory available than before, but that doesn't mean we should just waste it on useless stuff.
For example, if we eat 2-3GB of memory on a system with 8GB, that's over 25% of it gone. Then add the applications that are actually being used (let me use my own process memory counts here...): 1.3GB for Firefox, 0.6GB for the Android emulator, 0.5GB for Visual Studio...
And there goes over half of the 8GB of RAM!
I am sure my current memory usage would be half of what it is now if developers actually gave a shit about memory usage... not just allocated too much memory because "there is so much memory it doesn't matter".
> Yes, there is more memory available than before, but that doesn't mean we should just waste it on useless stuff.
I disagree. I think the point is that it's not useless, by its own existence. It saves dev time (which allows more - and better! - programs to be built), it allows richer features and interactions, it lowers the barrier to entry for maintenance work, and most of all it prevents constantly re-inventing/re-implementing the wheel, which shows up for the user as fewer of the common types of bugs, annoyances, and accessibility issues.
I'd argue against it saving dev time. See my comment above: HTML+CSS is not the fastest way to design an app. If it is all you know, then yes, it is faster, but inherently it is not a /good/ system. It gets worse even faster if you add JS to the mix!
I think it's a great system. It's surely the fastest way for me personally, in spite of it not being "all I know". With Bootstrap or Foundation, you need very little code to make something look nice, there is great documentation and lots of online resources.
If I want to make a UI in C#, I can pull up WinForms and use one language. If I want nice separation of style from code, I can use XAML (which is purpose-designed for making layouts easy) and be using two purpose-built technologies.
There is a lot more mental overhead to HTML + CSS + JS (and then the DOM somewhere) than to just type in:
    Button accept = new Button { Text = "Accept!", Size = new Size(100, 45), Location = new Point(300, 50) };
    form.Controls.Add(accept);
If I want to style my button, I set some properties on the button.
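Something like this, say (continuing the snippet above; these are real WinForms property names, but the exact look is whatever you want):

    accept.BackColor = Color.SteelBlue;                        // background fill
    accept.ForeColor = Color.White;                            // text colour
    accept.FlatStyle = FlatStyle.Flat;                         // drop the 3D bevel
    accept.Font = new Font("Segoe UI", 10f, FontStyle.Bold);   // typeface, size, weight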
(And if I want to use XAML, I send a file off to a designer, who pops it into a purpose-built XAML editor and can play around with the styling and layout to their heart's content!)
The thing is, web tech was designed for a non-interactive page based model, and then interactivity was added.
Then someone wanted curved borders, and what had to be done to make curved borders was rather nasty. So, after a good long time, web standards caught up with fashion and the ability to do curved borders was added to CSS.
And damn you to the nether regions of hell if you want squiggly borders (back to nasty scaling PNGs, or a few thousand 1-pixel DIVs!). However, I'm sure if those do come into style and enough people complain, squiggly borders will also be added to the spec.
In comparison, a real UI framework lets you override the Paint function and do whatever you want.
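Those squiggly borders, for instance, are maybe a dozen lines of WinForms. A rough sketch (the class name and the wave shape are my own invention, just to show the idea):

    using System;
    using System.Drawing;
    using System.Drawing.Drawing2D;
    using System.Windows.Forms;

    // A panel whose top edge is drawn as a wavy line - the kind of
    // border CSS has no primitive for.
    class SquigglyPanel : Panel
    {
        protected override void OnPaint(PaintEventArgs e)
        {
            base.OnPaint(e);
            e.Graphics.SmoothingMode = SmoothingMode.AntiAlias;
            // Trace the top edge as a sine wave instead of a straight line.
            var points = new PointF[Math.Max(Width / 4, 2)];
            for (int i = 0; i < points.Length; i++)
                points[i] = new PointF(i * 4f, 10f + 6f * (float)Math.Sin(i * 0.5));
            using (var pen = new Pen(Color.DarkSlateBlue, 2f))
                e.Graphics.DrawLines(pen, points);
        }
    }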
On that note, about two quarters of freshman Java will have students able to create custom UI widgets that do whatever is desired.
For someone who already knows programming? Point them at OnPaint: "here, override it, draw whatever you want, have fun!" The set of concepts needed for modern UI development on the desktop is tiny. This is a problem space that has had three decades of effort put into making it simple and easy.
Again, want separation of concerns? About 2 hours of XAML will have someone making custom widgets; in fact, that is the entire way XAML works! (It was sort of a mind twist for me that the easiest way to make an image button in XAML was not to override an OnPaint method and draw an icon, but rather to place the XML nodes for an icon and a button together!)
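For the curious, here's roughly that element tree sketched in C# rather than XAML (a hypothetical WPF snippet of my own; "accept.png" is a made-up resource name):

    using System;
    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Media.Imaging;

    // An image button built by composing an Image and a TextBlock
    // inside the Button's Content - no paint override anywhere.
    var accept = new Button
    {
        Content = new StackPanel
        {
            Orientation = Orientation.Horizontal,
            Children =
            {
                new Image { Source = new BitmapImage(new Uri("accept.png", UriKind.Relative)), Width = 16 },
                new TextBlock { Text = "Accept", Margin = new Thickness(4, 0, 0, 0) }
            }
        }
    };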
But take someone who knows programming: well, first you have to introduce them to CSS, which is a bunch of fields you have to memorize (or rather just keep looking up), with a number of different types of units supported (none of which do exactly what you want, but I'll grant that is the case for all resolution-independent UIs). Then of course they get to learn the newest flavor of HTML (semantic everything, don't you dare use <bold>). Then you get to show them JavaScript (Packages? Namespaces? A language in which imports can silently override my variables, causing havoc and dismay? Wow, C's #define is considered harmful, but this is a whole new level.)
But hey, thankfully CSS3 makes 3-box layouts no longer an exercise in pain! So that is a good thing. (I've been using flexible layouts that auto-adjust the positioning and sizing of elements since the late 90s; it is nice that the web finally caught up by introducing yet another box model!)
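For reference, here's roughly what such a layout looks like in WinForms (a sketch of mine; the three controls are arbitrary stand-ins): three columns that rescale whenever the window does, no media queries involved:

    using System.Windows.Forms;

    var layout = new TableLayoutPanel { Dock = DockStyle.Fill, ColumnCount = 3, RowCount = 1 };
    layout.ColumnStyles.Add(new ColumnStyle(SizeType.Percent, 25f)); // left sidebar
    layout.ColumnStyles.Add(new ColumnStyle(SizeType.Percent, 50f)); // main content
    layout.ColumnStyles.Add(new ColumnStyle(SizeType.Percent, 25f)); // right sidebar
    layout.Controls.Add(new TreeView { Dock = DockStyle.Fill }, 0, 0);
    layout.Controls.Add(new RichTextBox { Dock = DockStyle.Fill }, 1, 0);
    layout.Controls.Add(new ListBox { Dock = DockStyle.Fill }, 2, 0);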
Of course the final lesson is "oh and by the way this all sucks so you really want to use a third party JavaScript UI framework. Also CSS is sort of painful so you may want to use a preprocessor. Yeah just shove your content in the middle of your client side generated DIVs. Well yes generating the exact same page on every visitors PC is sort of silly and a huge thermodynamic waste, but it is currently in style so stop complaining about it. Yes I know the comments section takes 10 seconds to load. What do you mean BBSs on a 9600 baud modem responded faster?"
(Edit: Of course the last bit about responsiveness isn't relevant to using a web stack for client side apps, it is just me complaining about the poor state of the web in general)
> If I want to style my button, I set some properties on the button.
You cannot be serious. A 1000+ line long setup() method that creates a ton of one-off elements, sets a million properties, and saves them all as instance variables is in no way superior to using some XML to structure your elements, and then a separate, extracted layer for styling. I've done the procedural shit before, in Java, UIKit, Cocoa, you name it. There's just no way you can actually think that is better. Presentational separation is a good thing. (btw you can write your UI in the same procedural way in JavaScript if you think it is superior/are a masochist).
The rest of your comment is very rambling and does not really have a point, so I will hold off on replying to it. I write polished, pixel-perfect, cross-platform interfaces, and HTML/CSS is the best tool I have found for the job. Nobody will ever say it is perfect, of course, but IMHO it beats the hell out of WinForms! You are right, it is a lot to learn for a first-year developer, but you are going to get a shitty UI out of a first-year developer anyway, so who cares?
> You cannot be serious. A 1000+ line long setup() method that creates a ton of one-off elements, sets a million properties, and saves them all as instance variables is in no way superior to using some XML to structure your elements, and then a separate, extracted layer for styling. I've done the procedural shit before, in Java, UIKit, Cocoa, you name it. There's just no way you can actually think that is better. Presentational separation is a good thing. (btw you can write your UI in the same procedural way in JavaScript if you think it is superior/are a masochist).
Depends what I am doing.
Shipping a large scale commercial app with thousands of UI elements? I sure as heck pull everything out.
But if I am making an app that has a dozen UI elements and maybe a couple of screens in total? It isn't worth my time to switch tabs between my code and an XML file.
> I write polished, pixel-perfect, cross-platform interfaces, and HTML/CSS is the best tool I have found for the job.
Pixel-perfect is trivial in any system. WinForms can do pixel-perfect, and has a damn hard time doing anything else!
A UI that scales across screen sizes? A UI that auto-adjusts itself as needed? You can do it in CSS (many websites do!), but it isn't fun.
And I do wonder how you found CSS, a layout language which until recently couldn't even do simple math, to be the best tool for creating UIs.
There have been decent CSS pre-processors out for a while; sure, those work. If you do parts of your layout in JavaScript at run time, hey, that can also work.
Isn't inefficient coding (which is really what we're discussing) needlessly forcing hardware specifications to become more and more powerful, rather than getting the best out of the hardware we have now through better coding?
I rather expect causality flows the other way: coding is getting less efficient because hardware is getting faster. Or as the old saying goes, "Intel giveth, and Microsoft taketh away."
Adding HTML to GUI coding is re-inventing the wheel. There are native toolkits for UI, and there are cross-platform UI libraries, for a reason. They work, and they are fast.
Most desktop applications have a terrible interface compared to modern websites. That wouldn't be the case if the UI libraries were half as good as some people here claim.
Most new desktop applications have a really crappy interface, mostly because they are trying to copy some new homepage layout: they're missing lots of information and buttons, and they fade text and buttons in and out instead of letting me click them directly. Not to mention taking several seconds to load a form that would have loaded instantly on my 8MHz Atari. Yes I'm old, get off my lawn!
Desktop apps from the late 1980s to around 2004 or so were very usable. There were clear UI standards in place for most platforms, and thus a high degree of consistency between applications, even from different vendors.
Even Java's AWT, which came at the tail end of this and pretty much only targeted UI functionality that was common between different UI systems, could create desktop apps that were and are much more usable than the inconsistent web-inspired desktop apps we deal with today.
Let me add myself to the chorus of people standing up for native desktop applications. Sure, there are good and bad examples of both desktop and web UIs. But what's so horrendous about the original Mac OS interface? What was so bad about GNOME 2?
It was a fear years ago when Quake was open-sourced [1]. In Valve's case, the programmers probably had to audit all the source code to reduce possible exploits.
Oh, the engineer's marketing motto: make a good product and customers will come. So appealing, and so false!
I'm not doubting your experience, but this is such a common misconception that I must point it out.
If usage of your product is not what puts food on the table, by all means go for the engineering marketing approach. It's genuine.
If you need customers to keep a roof over your head, though, learn marketing. It's not evil, it is necessary, and, strange as it may seem, it is as important to success as product quality.
I think he means that if your website contains information, Google can find it. Simple as that. The only thing you really need to know is that plain text is more easily searched than images, but that should be obvious to anyone.
Digital had bumper profits just before they went under - it's typical, as companies move upmarket, to see greater profits. But I'll accept that you get what I'm saying. :)
Another data point: id was killing it (and, to us, Carmack remains undisputed). But they completely lost the next generation to Unreal, because of vehicles. I don't think that'll happen here, just disclaimin': past performance is no guarantee of future success.
Though, to be fair, the Quake engine underlay Call of Duty, the most successful franchise (I believe), and it really showed in the framerate. Note: no vehicles. And yes, past tense.
He was talking about the time around Unreal Tournament 2004, and in that generation it was a big thing (kinda like waving grass was at one time).
Sorry, I don't recall which interview (and it would be hard to google unless there's a transcript). It might have been one of his keynotes, perhaps the one with Rage on an iPhone. I'm pretty sure it was a long one (at least 1.5 hours). It was one of the big popular videos on HN/r/programming (not an esoteric one).
There is no way in hell they could release it as a VR game and not as a regular FPS game. That would just be a really bad move from Valve.