Hacker News | watermelon0's comments

RustFS is a good and simple-to-use alternative to MinIO.

There is a recent one, which shows that weight was generally stable 1 year after discontinuation of a GLP-1.

> In this cohort study of adults with overweight or obesity who initiated treatment with injectable semaglutide or tirzepatide and discontinued the index medication between 3 and 12 months after initiation, 19.6% restarted the index medication and 35.2% received an alternative treatment in the year after initial treatment discontinuation. The average weight change 1 year after index medication discontinuation was relatively small; however, there was considerable individual-level variability.

https://dom-pubs.pericles-prod.literatumonline.com/doi/10.11...


Thanks for sharing. Note that the data quality of this study is quite low, because 54.8% of the cohort eventually restarted their medication or transitioned to an alternative therapy (mostly a different weight-loss medication).

I don't know why a study that focuses on discontinuation didn't split the cohort into the groups that restarted or switched treatments versus the group that actually just stopped.


Seems like they switched to avocado oil recently:

> Water, Yellow Pea Protein*, Avocado Oil, Natural Flavors, Brown Rice Protein, Red Lentil Protein, 2% or less of Methylcellulose, Potato Starch, Pea Starch, Potassium Lactate (to preserve freshness), Faba Bean Protein, Apple Extract, Pomegranate Concentrate, Potassium Salt, Spice, Vinegar, Vegetable Juice Color (with Beet).

From: https://www.beyondmeat.com/en-US/products/the-beyond-burger


How about iPhone 16e/17e? Base MacMini M4?


If 95% of what an app does is calling a DB, then the bottleneck is in the DB, not in the PHP.

You can use persistent DB connections, and an app server such as FrankenPHP to persist state between requests, but that still won't help if the DB is the bottleneck.


Sometimes it’s still the app:

    rows = select all accounts
    for each row in rows:
        update row
But that’s not necessarily a PHP problem. N+1 queries are everywhere.
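The loop above is the classic N+1 shape. A minimal sketch in Python with sqlite3 (table and column names are made up for illustration) contrasting it with the single-statement equivalent:

```python
import sqlite3

def make_db():
    """Tiny in-memory table standing in for the accounts table."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
    conn.executemany("INSERT INTO accounts (balance) VALUES (?)",
                     [(100.0,), (200.0,), (300.0,)])
    return conn

# N+1 pattern: one SELECT, then a separate UPDATE round trip per row.
n_plus_one = make_db()
rows = n_plus_one.execute("SELECT id, balance FROM accounts").fetchall()
for account_id, balance in rows:
    n_plus_one.execute("UPDATE accounts SET balance = ? WHERE id = ?",
                       (balance + 10, account_id))

# Same result as a single statement: the work stays inside the DB.
single = make_db()
single.execute("UPDATE accounts SET balance = balance + 10")

assert (n_plus_one.execute("SELECT balance FROM accounts ORDER BY id").fetchall()
        == single.execute("SELECT balance FROM accounts ORDER BY id").fetchall())
```

With a real client/server database the per-row version also pays one network round trip per row, which is where most of the time usually goes.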


Depending on what you are doing, the above is not necessarily bad; it's often much better than a single SQL statement that locks an entire table (potentially blocking the whole DB, if this is one of the key tables).


Country code TLDs are also reputable, but you might lose access if you move or if something happens to the country.


Unless you manage to leak your private host/client SSH keys, this is close to being as secure as it gets.

I'd say that HTTPS (or TLS in general) is more problematic, since you need to trust the numerous root CAs in the machine/browser trust store. Sure, you can use certificate pinning, but that has the same issues as SSH host key verification.


CA compromise is very rare and difficult. There are much easier attacks on TLS than that (notably, attacking insecure validation methods; the problem isn't that CAs aren't secure, it's that validation methods and their dependencies are insecure). Besides, the CAs for TLS only cover transport security; authentication+authorization would be handled securely through OIDC, using temporary sessions and not exposing the true credential, often combined with 2FA. Even if you successfully attack a TLS server, two factors, and an active session, it only works once; you have to keep pulling it off to remain inside.

Compare that to malware that just copies a developer's SSH private key off the disk (again, almost nobody password-protects theirs). This just happened recently on a massive scale with the npm attacks. Or malware that intercepts the first connection from a client host and, again, because nobody ever validates keys, injects a false host key, and now they're pwned indefinitely. Or, again, companies that do not strictly validate host keys, meaning immediate MitM. There are like a dozen ways to compromise SSH. It doesn't have to be that way, but it is that way, because of how people use it.
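The first-connection and host-key-injection attacks above are exactly what strict host key checking is meant to close. A minimal ~/.ssh/config sketch (the host alias and file path are illustrative; the options are standard OpenSSH client settings):

```
Host prod
    HostName prod.example.com
    # Refuse to connect if the host key is unknown or has changed,
    # instead of prompting or silently trusting on first use.
    StrictHostKeyChecking yes
    # Pin the expected host key in a file distributed out of band.
    UserKnownHostsFile ~/.ssh/known_hosts_prod
```

This only helps, of course, if the pinned host keys are actually distributed through a trusted channel rather than accepted on first connect.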


Great, now we need caching for something that's seldom (relatively speaking) used by people.

Let's not forget that scrapers can be quite stupid. For example, if you have phpBB installed, which by default puts the session ID in a query parameter if cookies are disabled, many scrapers will scrape every URL numerous times, each with a different session ID. A cache also doesn't help you here, since the URLs are unique per visitor.
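One mitigation is to normalize URLs into a cache key by dropping the session parameter. A sketch in Python (the `sid` parameter name is phpBB's; the URLs and function name are illustrative):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def cache_key(url: str) -> str:
    """Strip phpBB's sid parameter so per-visitor URLs share one cache entry."""
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
             if k != "sid"]
    return urlunsplit(parts._replace(query=urlencode(query)))

# Two scraper hits with different session IDs map to the same key.
a = cache_key("https://forum.example.com/viewtopic.php?t=42&sid=abc123")
b = cache_key("https://forum.example.com/viewtopic.php?t=42&sid=def456")
assert a == b == "https://forum.example.com/viewtopic.php?t=42"
```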


You’re describing changing the base assumption for software reachable on the internet. “Assume all possible unauthenticated urls will be hit basically constantly”. Bots used to exist but they were rare traffic spikes that would usually behave well and could mostly be ignored. No longer.


The GUI might not be as powerful, but in my experience it's about as unintuitive as the alternatives, such as VirtualBox, UTM (macOS), or VMware Fusion/Player.

For anything more complex (e.g. GPU passthrough) you will need to drop down to manually editing XML files.
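For example, PCI passthrough of a GPU typically means hand-adding a hostdev device to the libvirt domain XML. A sketch (the PCI address is illustrative and machine-specific; find yours with `lspci`):

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <!-- Host PCI address of the GPU to hand to the guest -->
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```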


You are missing one option:

0) JavaScript must be abolished from the browser

