Hacker News | new | past | comments | ask | show | jobs | submit | chocolatkey's comments | login

That “compact JSON” format reminds me of the special protobuf JSON format that Google uses in their APIs, which has very little public documentation. Does anyone happen to know why Google uses that? And to OP: were you inspired by that format?


I think you may be referring to JSPB. It's used internally at Google but has little support in the open-source world. I know about it, but I wouldn't say I was inspired by it. It's particularly unreadable because it needs to account for field numbers being possibly sparse. Google built it for frontend-backend communication, when both the frontend and the backend use Protobuf dataclasses: it's more efficient than sending a large JSON object, and it's also faster to deserialize than a binary string on the browser side. I think it's mostly deprecated nowadays.
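The sparseness mentioned above is what makes JSPB-style output hard to read by eye: values are indexed by proto field number, so every unset field becomes a null placeholder. A minimal illustrative sketch (the encoder function and the field layout are hypothetical, not Google's actual implementation):

```python
import json

def to_jspb_like(fields: dict[int, object]) -> str:
    """Encode a {field_number: value} dict as a dense JSON array.

    Illustrative only: real JSPB has more machinery (extensions,
    message nesting), but the null-padding effect is the same.
    """
    max_field = max(fields)
    arr = [fields.get(i) for i in range(1, max_field + 1)]
    return json.dumps(arr, separators=(",", ":"))

# Hypothetical message: field 1 = id, field 5 = name, fields 2-4 unset.
print(to_jspb_like({1: 42, 5: "alice"}))  # → [42,null,null,null,"alice"]
```

Without the .proto schema in hand, there is no way to tell what position 5 means, which is exactly the readability complaint.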


I don't know the reason TextFormat was invented, but in practice it's way easier to work with TextFormat than JSON in the context of Protos.

Consider numeric types -

JSON: number aka 64-bit IEEE 754 floating point

Proto: signed and unsigned 32- and 64-bit ints, float, double

I can only imagine the carnage saved by not accidentally chopping off the top 10 bits (or something similar) of every int64 identifier when it happens to get processed by a perfectly normal, standards-compliant JSON processor.

It's true that most int64 fields could be just fine with int54. It's also true that some fields actually use those bits in practice.
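The truncation hazard above is easy to demonstrate: a JSON parser that stores numbers as IEEE 754 doubles (as JavaScript does) silently rounds anything past 2**53. A small sketch, using Python floats (which are the same 64-bit doubles) as a stand-in for such a parser:

```python
# A plausible 64-bit identifier with bits set above the double-safe range.
big_id = (1 << 62) + 1

# What a standards-compliant JS-style JSON parser would actually keep:
as_double = float(big_id)
assert int(as_double) != big_id  # the low bits are silently gone

# Everything up to 2**53 round-trips intact, hence "int54 is usually fine":
safe = (1 << 53) - 1
assert int(float(safe)) == safe
```

This is why proto's canonical JSON mapping serializes int64/uint64 fields as strings rather than JSON numbers.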

Also, the JSPB format references tag numbers rather than field names. It's not really readable. A TextProto might be a log output, or a config file, or a test, which all have ways of catching field-name discrepancies (or it doesn't matter). For the primary transport layer to the browser, the field name isn't a forward-compatible/safe way to reference the schema.

So, oddly, the engineers complaining about the multiple text formats are also saved from a fair number of bugs by being forced to use tools more suited to their specific situation.


JSON is slightly worse than you describe: the JSON language doesn't restrict you to 64-bit floats, but most implementations do, as you say. On the other hand, the JSON language doesn't support NaNs or infinities, so the union of language and implementation means that in practice JSON is strictly weaker than IEEE 754.
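The NaN/infinity gap is visible even in Python's standard json module, which by default emits the non-standard `NaN`/`Infinity` literals and only rejects them when asked to be strict:

```python
import json

# Default behavior: emits "NaN", which is NOT valid RFC 8259 JSON.
print(json.dumps(float("nan")))  # → NaN

# Spec-compliant mode refuses, since JSON has no way to represent it.
try:
    json.dumps(float("inf"), allow_nan=False)
except ValueError:
    print("rejected by a strict encoder")
```

So a full IEEE 754 double round-trip through strict JSON is impossible, matching the "strictly weaker" claim.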


I don't know, but if I had to guess:

1. Google uses protobufs everywhere, so having something that behaves equivalently is very valuable. For example in protobuf renaming fields is safe, so if they used field names in the JSON it would be protobuf incompatible.

2. It is usually more efficient because you don't send field names. (Unless the struct is very sparse, it is probably smaller on the wire; in-memory JS usage may be harder to evaluate, since JS engines are probably more optimized for structs than heterogeneous arrays.)

3. Presumably the ability to use the native JSON parsing is beneficial over a binary parser in many cases (smaller code size and probably faster until the code gets very hot and JITed).
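Point 2 above is easy to check concretely: for a dense message, the field names dominate the payload. A quick comparison, with hypothetical field names standing in for a real schema:

```python
import json

# Named-JSON encoding vs a JSPB-style positional array of the same message.
named = {"userId": 123, "displayName": "alice", "score": 99}
positional = [123, "alice", 99]  # indexed by proto field number instead

named_len = len(json.dumps(named, separators=(",", ":")))
array_len = len(json.dumps(positional, separators=(",", ":")))
assert array_len < named_len  # names dominate for dense messages
print(named_len, array_len)
```

The trade-off flips for very sparse messages, where the positional form pays for every skipped field with a `null`.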


Also, flexbuffers.


A nonprofit I help out just moved from Slack to Discord for a very simple reason: Slack pricing was too expensive, and as the number of people increased, the price continued to climb. Discord is free.


It is not free; you just aren't the customer, you're the product being sold.


Free as in beer is clearly what they mean; they are a nonprofit talking about pricing.


That’s incorrect, it’s trying to load an asset (hardcoded unique per-extension path) for each extension, there is a huge list of these in the source code: https://raw.githubusercontent.com/mdp/linkedin-extension-fin...


Note that Threema has had a recent change in ownership to a German investment firm. Supposedly nothing will change, but I can't help but be wary.


Just being owned by an offshore company doesn't mean they can't still be infiltrated. But as you pointed out, just because Company A creates an app doesn't mean Company B can't come in later and take control.


The alarming extent of US-affiliated signals intelligence collection is well documented, but in the case of Threema it's largely inconsequential; you can still purchase the license for it anonymously, optionally build from source, and actively resist traffic analysis when using it.

That is to say: it allows a determined party to largely remain anonymous even in the face of upstream provider's compromise.


Where can I learn more about hardware acceleration of WebP on mobile OSes? I haven't yet come across a resource confirming this is actually the case. I know it should theoretically be possible using the VP8 hardware decoders, but I thought those were expensive to warm up just for images.


I was able to use the (free) app “ReadID Me” to decode passport information months ago


Google’s Tink crypto library had a slightly technical page to help with that: https://developers.google.com/tink/choose-primitive


Having used Tink, I can't stand it.

I'd love to just replace it with age for all encryption use cases, but unfortunately age doesn't do AEAD without involving a password.


It's somewhat popular for piracy, when you know that your site could never pass the Apple/Google store reviews. I've seen it with, for example, manga aggregators


I disagree regarding the choice of codec. Currently, I have no issues receiving, saving, and viewing H.265 streams. Any modern CPU/GPU can handle them natively (I use a 2018 Intel CPU w/ QSV), any modern desktop or mobile device (I use both Android and iOS) can stream it, and the recorded video takes up less space. What are you using that requires transcoding?


If, like me, you're a Linux, Firefox, and Android user, H.265 support is extremely lacking. You're probably OK on a modern Android device, but on desktop in Firefox you won't be able to view any of the streams or scrub through them; nothing video-related will work in the Frigate UI. You won't be able to preview videos and will have to download them and use VLC instead. This might not sound like an issue, but it's a huge pain in the arse if you actually want to use it day to day.

All in all, H.265 is unsuitable if you use a specific but quite common combination of software: Linux, Firefox, and Android.

The original commenter is correct: if you're one of these people, like me, avoid H.265 like the plague until support is better, and be sure to buy cameras that also support H.264.


For Hikvision-sourced cameras, previews and exports work, but you can't play clips without transcoding. Unfortunately I haven't found a transcoding option that doesn't completely swamp my CPU (with 3 cameras), so I'm living without the ability to play clips right now.


This would break way too many websites to be feasible. And if implemented, it would be something requested on so many sites that users would learn to automatically say yes, which would weaken the power of permission prompts in general.

For example, almost every major Japanese book/comic site uses canvas in their e-reader


The best solution would be if canvas only allowed displaying pixels on the page but not drawing (meaning you'd need to bring your own drawing library), so that it would be unusable for fingerprinting.

