OP dances around the key context that this isn’t HiDPI, but rather a 3rd-party hack that uses HiDPI rendering to supersample their “native” 4K resolution by 2x, since the end result looks more pleasing to them than the native 4K render.
It’s actually around 1.5x for the default resolution out of the box and 1.3x for the “more space” setting on an M1/M2 MacBook Air. 1.1x supersampling on Macs makes it worse because downsampling to pixel alignment becomes a hot mess.
Those numbers of 1470x956 are “points” or “looks like” values, not the size of the frame buffer. The frame buffer for “looks like 1470x956” is exactly 2x that, or 2940x1912. On a 2560x1664 display, that’s a 1.148x scale factor. Again, nowhere near 2x, even on the “more space” setting.
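The arithmetic above can be checked with a quick back-of-envelope sketch (values taken straight from the comment; the 2x backing factor is how macOS renders HiDPI "looks like" modes):

```python
# "Looks like" resolution in points, as reported by macOS.
points = (1470, 956)

# macOS renders HiDPI modes into a backing store at exactly 2x the point size.
backing = (points[0] * 2, points[1] * 2)  # 2940x1912

# Native panel resolution of the M1/M2 MacBook Air display.
panel = (2560, 1664)

# Downsampling factor from the backing store to the physical panel.
scale = backing[0] / panel[0]

print(backing)           # (2940, 1912)
print(round(scale, 3))   # 1.148
```

So even the "more space" setting only supersamples by about 1.15x, nowhere near the 2x that the raw "2x backing store" figure might suggest.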
And that itself is a recent policy change from just the last two months; as of January, United's official policy [1] matched the FAA's in only requiring checked devices to be powered down.
It’s really a question of whether a team believes bugs are defects that deserve to be fixed, or annoyances that get in the way of shipping features. And all too often, KPIs and promotions are tied to the features, not the bugs.
Plus, I’ve been in jobs where fixing bugs ends up being implicitly discouraged; if you fix a bug then it invites questions from above for why the bug existed, whether the fix could cause another bug, how another regression will be prevented and so on. But simply ignoring bug reports never triggered attention.
First, you have the right to say nothing at all; there is no requirement to incriminate someone else to protect yourself.
Second, you can still generally invoke the 5th amendment during testimony even if you already claimed someone else did it. You aren't under oath until said testimony, so it still protects against you having to choose between committing perjury or self-incrimination, and doing so cannot be used as evidence of either.
No, you don't always have the right to say nothing at all. Courts can compel testimony and punish you if you don't.
And you plead the 5th after going under oath. And you can't just plead the 5th to any question. If the prosecution puts you under oath and asks you your name, you can't plead the 5th to that.
That's why I said generally - once testimony is compelled, it can no longer be used against you. And the one definite exception, where your name can be compelled, is if the government already believes that you committed a crime and is trying to figure out who you are, and you cannot articulate specifically why your name could be incriminating.
5th amendment protections can include questions of identity, if the question of identity is relevant for incrimination. Like, if the government has a warrant for "Joe Smith", you're not required to testify whether that's you. It's usually a waste of time since they could just prove it with the non-testimonial evidence that led to your arrest, but the protection does exist.
Most of the world also doesn't have the same degree of protections against self-incrimination that the 5th amendment provides. If someone shot a person with my gun, while the police can obviously ask questions, in the US I have the right to not answer and force them to prove beyond a reasonable doubt who fired it.
It’s not a hard limit, especially if you aren’t pushing the frequency wall like Intel. AMD used to use a 2-way 64KB L1, Intel has an 8-way 64KB L1i on Gracemont, and more to the point, high-end ARM Cortex has had 4-way 64KB L1 caches since before they even supported 16KB pages.
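The constraint being discussed here is the classic VIPT aliasing rule: with a virtually indexed, physically tagged L1, the way size (cache size divided by associativity) should not exceed the page size, or the same physical line can land at two different indices. A quick sketch of the arithmetic for the configurations mentioned:

```python
# Way size = cache size / associativity. If it exceeds the page size,
# a VIPT cache can alias and the hardware needs extra handling --
# which is exactly the "soft limit" these designs choose to accept.
def way_size_kb(cache_kb: int, ways: int) -> float:
    return cache_kb / ways

print(way_size_kb(64, 2))  # 32.0 KB per way vs. a 4 KB page: aliasing possible
print(way_size_kb(64, 8))  # 8.0 KB per way: still larger than a 4 KB page
print(way_size_kb(64, 4))  # 16.0 KB per way: fits a 16 KB page exactly
```

That last line is why 64KB 4-way L1 caches pair so naturally with 16KB pages, even though, as the examples show, designers have shipped configurations that exceed the page size anyway.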
Yeah, I was more just trying to paint a broad picture. Nvidia in particular I think had fast and large-ish L1 on Tegra (X2?) despite being tied to 4k pages.
ARM favored wider ILP and mostly symmetric ALUs, while x86 favored wider and asymmetric ALUs
Most high-end ARM cores were 4x128b FMA, and Cortex-X925 goes to 6x128b FMA. Contrast that with Intel, which was 2x256b FMA for the longest time, then 2x512b FMA, with another 1-2 pipelines that can't do FMA.
But ultimately, 4x128b ≈ 2x256b, and 2x256b < 6x128b < 2x512b in throughput. Permute is a different factor though, if your algorithm cares about it.
Your first example is a CPU limitation that Instruments doesn't model (does perf?), but is still mostly better than Intel chips that are limited to 4 dynamic counters (I think still? At least that's what I see in the Alder Lake's Golden Cove perfmon files...)
Your second example - is the complaint that Instruments doesn't have flamegraph visualization? That was true a decade ago when the complaint was written, and is not true today. Or is it that Instruments' trace file format isn't documented?
A quick google suggests that British Columbia's building code only requires STC 50 which is "you can hear but not understand a neighbor's loud conversation" levels of isolation. Though maybe your city has stricter requirements?
Only 50? I think that's pretty good when considered on its own, but STC doesn't look at the whole picture. STC ratings and requirements for discrete wall and floor assemblies are a thing, but with suites/party walls, apparent STC is what mattered, whether under the provincial code or local bylaws. ASTC is king.
AFAIK Jyrki came on after WebP was already announced, to add lossless support; rather, I'd consider Skal the creator, inasmuch as it was originally just an image container for VP8 intra frames. He was working on WebP2 at the time Google rejected JPEG-XL, and was not involved in that decision.
I designed the lossless format and its initial encoder. Zoltán Szabadka wrote the initial lossless decoder.
On2 Technologies had designed the lossy format and its initial encoder/decoder. Skal improved on the encoder (rewriting it for better quality, inventing workarounds for the YUV420 sampling quality issues), but did not change the format's image-related aspects that On2 Technologies had come up with for VP8 video use.
In the end stage of lossless productization (around February 2012) Skal had minor impact on the lossless format:
1. He asked for it to have the same size limitations (16383x16383 pixels) as lossy.
2. He wanted to remove some expressivity to make hardware implementations easier, at perhaps a 0.5 % hit on density.
Skal also took care of integrating the lossless format into the lossy format as an alpha layer.