
Funnily enough, in 2023 Asus released a good Android phone very close to iPhone mini size: the Asus Zenfone 10. https://www.asus.com/us/mobile-handhelds/phones/zenfone/zenf...


Not only is it not a very small phone (I can't even properly type this message one-handed), it's also not a good phone, and I regret purchasing it.

Zenfones up to the 10 had easy-to-unlock bootloaders, leading to long unofficial support by the community. With the 10, however, ASUS stopped providing the unlock tool, and they've been lying ever since, claiming they're still working on it.

My Zenfone is now on its final major Android update, the rather minor Android 15, and I've only got two years of security updates left before I need to look for a new phone. That's one thousand euros for barely four years of software support; it's such a disappointment.

That aside, the camera is lackluster. Its auto white balance is horrific, turning the same snowy scene into a sunset or a fluorescent-lit room depending on the phase of the moon, and its sampling is questionable, making images blurry in a surreal way. The optical stabilisation is seriously impressive, though. Overall I still preferred the images from the Pixel 4a, a smaller phone and my Zenfone's predecessor.

At least I get to just plug it into my stereo thanks to the 3.5mm jack though.


Agree on the camera, bafflingly bad.

Thumbs up on the headphone jack though. Can't fault it there.


I don't know why it was so frequently reported as a compact phone; the Zenfones are much larger than the iPhone mini. It's the same size as a standard iPhone or Galaxy S series phone.


https://www.phonearena.com/phones/size/Apple-iPhone-13-mini,...

I was convinced you were wrong but that's correct. The Mini is much smaller and the Zenfone is about the same size as the regular iPhone.


The Zenfone 10 is closer to an iPhone than an iPhone mini.

iPhone 16/Zenfone/13 Mini (in mm)

Height: 147.6/146.5/131.5 - the mini is 15mm shorter than the Zenfone which is only 1.1mm shorter than an iPhone.

Width: 71.6/68.1/64.2 - the mini is 3.9mm narrower than the Zenfone, which is 3.5mm narrower than an iPhone.

Depth: 7.8/9.4/7.7 - the Zenfone is significantly thicker than the iPhones.

Volume: 82.4/93.8/65.0 cubic cm - the Zenfone is physically larger than an iPhone 16 by a decent margin.

The Zenfone simply isn't close to an iPhone mini size. It's larger than an iPhone by volume and the depth does matter when holding it. If we're talking about front-edge to opposite front-edge, we're talking about 87.2mm for the iPhone vs 86.9mm for the Zenfone and 79.6 for the Mini. The Zenfone saves you 0.3mm in grip-distance over an iPhone, but a Mini saves you 7.6mm in grip-distance.

Heck, let's look at weight. A Zenfone is 172g, iPhone 170g, iPhone mini 141g. The Zenfone is the heaviest of the three.

One of the big limiting factors for Android phone manufacturers is the battery. iOS is a ton more efficient. The Zenfone is thicker to accommodate a 4300mAh battery compared to the iPhone 16's 3561mAh (21% larger battery). And the Zenfone's battery is kinda small by Android standards.

People often don't think about the challenges of making a small phone. The electronics don't shrink. If you need a certain number of square mm for those electronics, they take up a larger percentage of the interior on your mini. You don't need as large a battery because the screen it powers is smaller, but the savings aren't proportional to the size reduction - you're still drawing the same power for all the other electronics. So you have a smaller percentage of interior space for the battery while needing a larger battery relative to the interior space - or you sacrifice battery life as Apple did with the mini.

For example, the iPhone 13 mini is 84.4 sq cm and has a 2438mAh battery. The iPhone 13 is 104.9 sq cm with a 3240mAh battery. The iPhone 13 is 24% larger, but can accommodate a 33% larger battery - because the electronics take up basically the same space regardless of form factor.

So to make an Android mini, you'd be sacrificing a lot of battery life. The Zenfone is not a mini. Its grip-size is basically identical to an iPhone. In every way, it's much more an iPhone than a mini.
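
For anyone who wants to double-check, the size and battery arithmetic above reduces to a few lines (all figures taken from this comparison):

```python
# Verify the phone-size arithmetic above (dimensions in mm from the comment;
# "grip distance" = width + 2 * depth, front edge around the back to front edge).
phones = {
    "iPhone 16":      (147.6, 71.6, 7.8),
    "Zenfone 10":     (146.5, 68.1, 9.4),
    "iPhone 13 mini": (131.5, 64.2, 7.7),
}
for name, (h, w, d) in phones.items():
    volume_cc = h * w * d / 1000   # mm^3 -> cubic cm
    grip_mm = w + 2 * d
    print(f"{name}: {volume_cc:.1f} cc, grip {grip_mm:.1f} mm")

# iPhone 13 vs 13 mini: footprint grows ~24%, battery grows ~33%.
print(f"footprint +{(104.9 / 84.4 - 1) * 100:.0f}%, "
      f"battery +{(3240 / 2438 - 1) * 100:.0f}%")
```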


Of phones I owned 10+ years ago:

iPhone 6S: 138.3/67.1/7.1 = 65.9 cc; the mini is just barely smaller.

iPhone 4S: 115.2/58.6/9.3 = 62.8 cc; smaller than the mini.

Treo 650: 113/59/23 = 153 cc, which is about the same volume as a Galaxy Z Fold 3... the displacement was so uncomfortable that people often used hip holsters for it.


Commenting just to appreciate this analysis. I totally bought the marketing that this was a small phone, while it seems it just had a small screen.


But I missed the MicroSD slot. My requirements are simple: not too wide (<=70mm), a 3.5mm audio jack, and a MicroSD slot.

In the end only Sony products qualified, and I sacrificed performance for a shorter phone, so I bought the Xperia Ace III.

But I don't know when my carrier will shut down GSM-1800. If that happens I'll have to buy an Xperia 10 series instead.


Despite sibling comments, it's still a smaller phone compared to others from the same year. I have one and I'm extremely satisfied with it.


I don't know if it would come with the deal, but ByteDance's web crawler is known to generate the most requests per day among AI crawlers (src: https://blog.cloudflare.com/declaring-your-aindependence-blo... ). I guess one of Perplexity's challenges is building their own web index, and of course that starts with having a powerful crawler. A powerful crawler is also useful for capturing tokens to train models. If that technology comes with the deal, it makes perfect sense for Perplexity to acquire them.


Funnily enough the Cloudflare blog identifies Perplexity engaging in dodgy practices to avoid robots.txt denylists:

> Sadly, we’ve observed bot operators attempt to appear as though they are a real browser by using a spoofed user agent. We’ve monitored this activity over time, and we’re proud to say that our global machine learning model has always recognized this activity as a bot, even when operators lie about their user agent.

Clearly not working too well.


Lol, I had to report Facebook's crawler, using the documented Facebook crawler UA and coming from Facebook's ASN, as a bot to them because they had misclassified it. Don't expect too much from their global machine learning model. I wonder if this case also included people manually reporting it...


It is painful to read. In retrospect, the major mistake in the presentation is that it barely talks about end users and how the iPhone enabled a new world of use cases. It is only about business/corporate concerns, features, and specs. When analyzing the iPhone on those dimensions, all the reactive action items are doomed. They didn't have a chance to compete with that analysis.


This is a very bad article, probably written to attract traffic to their PM course. For me it's very simple: there are AI Product Managers, but they are not PMs who use AI tools; rather, they are PMs who manage AI products. And there is a difference between Software 1.0 and Software 2.0 (AI/ML-model-based) products. These products are managed not as engineering but as science: they are managed with experiments, they are non-deterministic, they have virtually infinite inputs and outputs, and they address not one problem but many at the same time. So if you ask me, of course there is such a thing as an AI Product Manager, just not what the author, and many other PMs, think.


I think the only proven use cases for ZUIs have been Maps, Calendars, and Photo apps (Apple Photos and Google Photos), the last two using zoom to switch between different time period views: Day->Month->Year for photos and Day->Week->Month for calendars.


There is maybe some room for design too (Figma), but there are many times I wish it DIDN'T do that and encouraged better frame/file organization instead.

Similarly, it's too often used for (mis)organization, like in Miro or Prezi or "mind map" apps. I feel like those try to shoehorn information into Sherlock-like "mind palaces" in a way that only makes sense for the creator but are inscrutable to everyone else and just makes information harder to find later on. They always lead to some sort of pixel-hunting where the presenter zooms out and in, out and in, out and in, wasting time on navigation and placefinding instead of information dissemination.

-------

On the other hand, it CAN be useful for some visualizations, like taxonomy: https://itol.embl.de/itol.cgi or https://www.onezoom.org/life.html

"Drill down for details" like in disk space analysis: https://www.youtube.com/watch?v=BKClylmlv3w&t=1s (or similar one in D3: https://observablehq.com/@d3/zoomable-sunburst)

Treemaps: https://observablehq.com/@d3/zoomable-treemap

I think the overall point is some types of hierarchical information naturally lend themselves to "drill down" type UIs more than others. When you have levels of detail you don't need to see at first glance, drilling/zooming is awesome. When you have a bunch of things of equal hierarchy, presenting them in a big flat pile isn't any better in the virtual world than in the real world... it's the digital equivalent of 10,000 sticky notes on a wall.
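
The "drill down" framing maps directly onto treemaps. As a minimal sketch (my own illustration, not code from any of the linked tools), the classic slice-and-dice layout recursively divides a rectangle among children in proportion to their sizes, alternating cut direction at each depth:

```python
# Slice-and-dice treemap layout: split a rectangle among children
# proportionally to their sizes, alternating horizontal/vertical cuts.
def slice_and_dice(node, x, y, w, h, depth=0, out=None):
    """node is a (name, size, children) tuple; returns leaf rectangles
    as (name, x, y, w, h)."""
    if out is None:
        out = []
    name, size, children = node
    if not children:
        out.append((name, x, y, w, h))
        return out
    total = sum(c[1] for c in children)
    offset = 0.0
    for child in children:
        frac = child[1] / total
        if depth % 2 == 0:   # even depth: children laid out side by side
            slice_and_dice(child, x + offset * w, y, w * frac, h, depth + 1, out)
        else:                # odd depth: children stacked top to bottom
            slice_and_dice(child, x, y + offset * h, w, h * frac, depth + 1, out)
        offset += frac
    return out

# A tiny hypothetical file tree laid out in the unit square.
tree = ("root", 10, [("a", 6, []), ("b", 4, [("b1", 3, []), ("b2", 1, [])])])
rects = slice_and_dice(tree, 0, 0, 1, 1)
```

The long skinny rectangles this tends to produce are exactly why squarified variants (like the D3 examples above) were developed.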


Tree-Maps: A Space-Filling Approach to the Visualization of Hierarchical Information Structures:

https://www.cs.umd.edu/~ben/papers/Johnson1991Tree.pdf

HCIL Archive: Treemap Home Page:

https://www.cs.umd.edu/projects/hcil/treemap/

How Ben Shneiderman’s Treemaps Found Place In The Museum Of Modern Art:

https://analyticsindiamag.com/how-ben-shneidermans-treemaps-...

>Ben Shneiderman was inspired by the 1960's 'Op Art' and the exhibits that he came across at the Museum of Modern Art in New York. Op Art or Optical art is a form of kinetic art related to geometric designs that create movement in the eyes.

Ben Shneiderman's Treemap Art:

https://treemapart.wordpress.com/

>This site features draft designs and full views of the Treemap Art project. View more about the exhibitions.

>By Ben Shneiderman

>Although I conceived treemaps for purely functional purposes (understanding the allocation of space on a hard drive), I was always aware that there were appealing aesthetic aspects to treemaps. Maybe my experiences with OP-ART movements of the 60s & 70s gave me the idea that a treemap might become a work of art. That idea was revived in 2013 by way of my contacts with Manuel Lima who produced a beautiful coffee-table book on the history of trees that has several chapters on treemaps and their variations.

>I believe that there are at least four aesthetic aspects of treemaps:

>1. layout design (slice-and-dice, squarified, ordered, strip, etc.),

>2. color palette (muted, bold, sequential, divergent, rainbow, etc.),

>3. aspect ratio of the entire image (square, golden ratio, wide, tall, etc.), and

>4. prominence of borders for each region, each hierarchy level, and the surrounding box

Ben Shneiderman: Every AlgoRiThm has ART in it: Treemap Art Project:

https://www.youtube.com/watch?v=4LW4m6BdQXI

>Ben Shneiderman, distinguished university professor, University of Maryland, College Park and National Academy of Engineering member, spoke at the October 16, 2014 DC Art Science Evening Rendezvous (DASER). Ben Shneiderman described the invention of treemaps and showed examples of its usage. He then turned to the aesthetics of treemaps, which led him to create the “Every AlgoRiThm has ART in it: Treemap Art Project” exhibit on view in the Keck Center first floor galleries (www.cpnas.org). He demonstrated how users of the free treemap application can generate their own artworks, without programming.

The Shape of PSIBER Space: PostScript Interactive Bug Eradication Routines — Don Hopkins — October 1989 (a paper I wrote when I worked with Ben Shneiderman at his University of Maryland Human Computer Interaction Lab):

https://donhopkins.medium.com/the-shape-of-psiber-space-octo...

>The Pseudo Scientific Visualizer

>Darkness fell in from every side, a sphere of singing black, pressure on the extended crystal nerves of the universe of data he had nearly become… And when he was nothing, compressed at the heart of all that dark, there came a point where the dark could be no more, and something tore. The Kuang program spurted from tarnished cloud, Case’s consciousness divided like beads of mercury, arcing above an endless beach the color of the dark silver clouds. His vision was spherical, as though a single retina lined the inner surface of a globe that contained all things, if all things could be counted.

>[Gibson, Neuromancer]

>The Pseudo Scientific Visualizer is the object browser for the other half of your brain, a fish-eye lens for the macroscopic examination of data. It can display arbitrarily large, arbitrarily deep structures, in a fixed amount of space. It shows form, texture, density, depth, fan out, and complexity.


Sounds like we need a treemap of treemaps!


I love the name "WOPR" and the reference to the WarGames film.


We had a system at an old job that someone had dubbed WOPR for no other reason than they liked the movie. A colleague and I wrote a similar system that filled in some gaps and decided that the best name for it was "Big Mac" to delightfully mix references. First thing I thought of.


I had the same question. I guess it's a common protocol. Any insights on why these flights fly with the gear down?


Not an expert, but I've seen the following commented on other test flight videos: on initial test flights, the landing gear is always kept down to minimize risk. If a sudden landing is needed, the gear is already down: no risk of equipment getting stuck, less mental load for the pilot performing an emergency landing, etc. Basically, when testing, you want to minimize the variables being tested. Once airworthiness is validated, then you can test the landing gear systems.


I would love to see Perplexity.ai in the benchmark. It has completely replaced Google/DDG for information questions for me. I still use DDG when I want to do a navigational query (e.g. finding the URL for a blog whose name I partially recall).


While Kagi was the product that brought me the most joy in 2022, perplexity.ai has been the one for 2023, even though I only recently started using it. It's just been a joy to be able to iteratively discuss most of my searches.

EDIT: here's a search for "tire" (I don't know anything about tires, so maybe there are much better links out there, but this is pretty much what I was expecting. Not an ad or SEO in sight.) https://www.perplexity.ai/search/tire-3iuI9T6BQUSvu2tAhgsRmA...


I am wondering if you can use AI chat exclusively for your search needs? If not, what does the perfect integration look like?


I've been really enjoying Perplexity as well. It's a much better Internet/search focused experience than ChatGPT, Bing, or Bard. For anyone interested, until the new year (~20 more hours?) there's a code for 2mo free Pro: https://twitter.com/perplexity_ai/status/1738255102191022359 (more file uploads, choose your model including GPT4)


Me too. I only heard about it this morning and it looks kinda perfect so far.


Very nice hack! I did a very similar project integrating a ChatGPT bot, but using a WhatsApp Business account instead of a fake Facebook contact. Unfortunately I got my account blocked when Meta discovered I'm not a business. I'll redo the project with the FB account; it seems much easier.

Great job!


No, the reason Macs are better on LLMs is memory bandwidth: 800GB/s on the M2 Ultra. I couldn't find a good source, but it seems the Ally's memory bandwidth is around 70GB/s.


A combination of high memory bandwidth and large memory capacity is necessary for good performance on LLMs. Plenty of consumer GPUs have great memory bandwidth but not enough capacity for the good LLMs. AMD's Phoenix has a memory bus too narrow to enable GPU-like bandwidth, and when paired with the faster memory it supports (LPDDR5 rather than DDR5) it won't offer much more memory capacity than consumer GPUs.


> won't offer much more memory

A mini PC with that chip, 1 TB of storage and 64GB of ram (both replaceable) costs like 800€ and fits behind your monitor. Getting that much memory in a consumer GPU is definitely quite a bit more expensive. Also, for comparison an M2 Ultra with that amount of storage and ram is 4800€.

So I am not doubting that a 6 times as expensive computer is probably "better" by some metric, but for that drastic difference I am not sure that is enough.


While I 100% agree on the price comparison, you need to reach some performance threshold for an LLM to be considered usable. As someone not very knowledgeable on the topic, the sheer difference in the numbers leads me to question whether you could even reach that usable performance threshold with the 800€ mini PC.


Note that when referring to memory capacity, I specified LPDDR5, because that's the fastest memory option. If you want to go with 64GB of replaceable DDR5, you'll sacrifice at least 18% of the memory bandwidth. (And in theory the SoC supports LPDDR5-7500, but I'm not aware of anyone shipping it with faster than LPDDR5-6400 yet.) So you could get to 64GB on the memory capacity with a Phoenix SoC, but only by being at a 10x disadvantage on bandwidth relative to an M2 Ultra—which doesn't make a 6x price difference sound outrageous, given that we're discussing workloads that actually benefit from ample memory bandwidth.
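
Peak bandwidth here is just bus width times transfer rate. A rough sketch, where the 128-bit bus, LPDDR5-6400, and DDR5-5200 figures are my illustrative assumptions for a Phoenix-class system:

```python
# Peak memory bandwidth = (bus width in bits / 8) * transfer rate in MT/s.
def bandwidth_gbs(bus_bits, megatransfers):
    return bus_bits / 8 * megatransfers / 1000  # bytes/transfer * MT/s -> GB/s

lpddr5 = bandwidth_gbs(128, 6400)  # 102.4 GB/s
ddr5 = bandwidth_gbs(128, 5200)    # 83.2 GB/s
print(f"DDR5 sacrifice vs LPDDR5: {(1 - ddr5 / lpddr5) * 100:.0f}%")  # ~19%
print(f"M2 Ultra (800 GB/s) vs DDR5 config: {800 / ddr5:.1f}x")       # ~9.6x
```

Which lines up with the "at least 18%" sacrifice and the roughly 10x disadvantage mentioned above.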

