That'd be ideal because it would mean I could browse the internet without ads and just never use AI chatbots. Unfortunately I think ads are only going to spread and what we'll actually end up with is "more ads everywhere".
Agree. People understand and accept firing for performance issues. People understand and accept layoffs when they're a rare event needed to save the company from bankruptcy. What's not understandable or acceptable to most is the current trend of companies doing annual or even quarterly layoffs as an ongoing way to manage earnings.
This is why I've started telling younger people in the industry (I'm 52) "Don't worry about giving two weeks notice or doing knowledge transfer, it's not your problem. If you accept an offer and a better offer comes in before you start, take it and don't think twice. If the better offer comes in after you started, quit and take it, and don't think twice. If you're in the middle of a critical project at work and they're depending on you when a better offer comes in, take it and walk away." Loyalty in employment relations has been dead for a long time, since before I started working, but now it seems that even basic decorum on the part of employers is dead as well, and there's no reason to allow it to be asymmetric.
Age verification at the OS level makes no sense to me. Most households aren't going to have a separate device for every family member and so you will end up with a tablet or computer set up by one of the parents (and thus having their age stored) that will be used by both parents and children. Likewise, people generally won't create a separate account for every potential user.
> Age verification at the OS level makes no sense to me.
it's the only form of "age verification" which can be done in a somewhat privacy-respecting way (as in, at most it leaks the age)
the idea is to "bounce back" the "is old enough" decision to parental controls and let the parent choose (the Californian law doesn't quite do that perfectly, but goes in that direction)
and if you sell what is more or less a general-purpose compute/internet access device with an OS (which I do include phones in), I think it's very reasonable to either sell it to adults only (with a disclaimer that it's "not for children") or include proper parental controls
> Most households aren't going to have a separate device for every family member
these days in the West it is very, very common for devices to be for one person only, especially phones, or at least to have different (OS) accounts.
but again this comes back to "parental controls"; whether that is for a child (OS) account or a way to switch from a child profile to an adult profile doesn't matter
but in the end, the point of such laws should be to give parents tools to parent, as well as to handle the case of parents acting in neglect through inaction. But if a parent intentionally decides to give their children a device with their own profile because they think it's fine, then that should be their choice and responsibility.
> Likewise, people generally won't create a separate account for every potential user.
where it was possible I have not seen it not used, whether it's on a Switch, gaming console, or PC. It is the most convenient way of automatically separating logins, browsing history, game saves, etc.
and the law also isn't made for that shared computer in the living room (though it will apply there). It's more about the devices children might use unsupervised, e.g. their phone.
That's why Meta paid for these os-based age identification laws[1], shifting the responsibility from itself onto the app stores. I agree it's probably preferable to do it on device instead of every website implementing an id check through shady as fuck[2] third parties like Persona. This whole thing is just such a mess though, people rightfully distrust everybody involved, all these bought and paid-for politicians. All of a sudden we have the same laws popping up all over the place, US, UK, Australia, Brazil, ... Nobody, not a single person involved gives a fuck about child safety. It's different billion dollar lobbies fighting amongst each other, each with different monetary incentives.
You know what they should do? They should scrap it all, no more "child safety" laws until we kicked money out of politics. Western liberal democracy is in a corruption and legitimacy crisis; this is just its latest symptom.
> They should scrap it all, no more "child safety" laws until we kicked money out of politics.
the current state has been close to that, and is co-associated with many existing issues wrt. children/mental health/child safety (I very intentionally say co-associated instead of correlated, and definitely not root cause)
you could say lawmakers of many countries have given the industry ~30 years to self-regulate and come up with something acceptable by themselves
The industry didn't. Now they have to regulate; it's their job and responsibility to do so :(
(but it's also their responsibility to not listen to highly malicious/biased lobbyist trying to hijack it into surveillance laws!)
honestly, to some degree the industry still has a short time frame to fix it themselves: provide an acceptable solution which can mostly work internationally (by having localization in it) and pitch that to the EU and the US states that haven't yet decided on age verification laws, so that the few which already have some bad laws are pressured to change course
Though the problem is that many non-corporate entities instead insist it's all nonsense and there is no problem, and companies like G, MS, Meta, etc. have little interest in fixing the situation. A misguided, hard-to-implement age verification law creates a legal moat to hinder smaller competing companies...
we have seen the same with the EU AI act: its general outline is very reasonable, especially if you base that assessment on the corner comments. But thanks to big tech lobbyists hijacking it, it became an economic/regulatory moat catastrophe (in the details and the parts which have not yet taken effect, not in every aspect).
What is wrong with parental controls AND parenting?
Nothing. This has never been about protection of children. It is tracking real identity from every source to every destination, otherwise known as user-tracking. If this was about protecting children, they would require an RTA header on all adult and user-generated content sites and require the most common user agents to look for that header when parental controls are enabled. No tracking, no uploading anything. [1] Sufficient for small children, which is more than we have now or will ever have, thanks to corporate greed and lobbying.
> It is tracking real identity from every source to every destination otherwise known as user-tracking.
except this is not true at all
yes, there are people who try to systematically hijack child protection laws all the time for stuff like that
but e.g. the California law is very clearly intended to avoid exactly that (that = tracking real identity)
> they would require an RTA header
they are politicians focused on lawmaking; they have no idea what a "header" even is!
A politician's job is to identify issues, consult people with expertise, propose a solution based on those people's feedback, and then listen to further feedback, including from other groups. If they need to know what an HTTP header is and how it works, something went really wrong.
But this is also where things often do go wrong, through a) dishonest and outright malicious consulting telling politicians bullshit, and b) politicians having an over-the-top simplified understanding of a topic while thinking it's still suited for extrapolating things, leading them to nonsensical outcomes.
And if the large parts of the industry which do care about non-abusive solutions then loudly refuse to provide any solution and denounce anyone trying to do so, you are basically opening even more doors for anyone with malicious intentions. Which is pretty much the situation we have now.
Even worse, many people in the tech/hacker community not only don't try to help find an acceptable solution, they often outright reject that there is even a problem.
But there is a problem, a huge one even.
As just one dumb example of many: it's currently harder for a teenager to get access to some wholesome softcore porn than it is to watch potentially traumatizing and definitely not healthy content (whether it's violence or certain forms of hardcore porn(1)), or to access sites/apps with gambling, preying on children, hate mongering, glorification of bullying, etc. etc.
And let's not forget most parents are non-technical people, which means most of the reasonably usable and privacy-protecting existing tools are not actually usable by them (and not available by default, and they can't reasonably evaluate which ones are okay either).
Also please don't say "I grew up with an uncontrolled internet (~25-35y) and I am fine." Putting aside that the internet was very different back then, hardly anyone in that age range is truly mentally fine (for a lot of reasons, but that makes it a pretty bad argument anyway).
> RTA header
is insufficient; age isn't just 18+ or 13+, though many media sites love to pretend that is the case
Furthermore, this doesn't work for "feed" content, as the server needs to know what to filter out before returning content.
But this is also the direction I have proposed in previous comments, and it's not that far from the direction the Californian law went (but very much different from the UK law):
- Provide a minimum-age category indicator for all content (most times per app, sometimes per content in that app, sometimes per origin and per content in that app, e.g. YT accessed through the browser). But this needs to be more nuanced than 13+/18+, as categories differ by country, and you should include tags and some other metadata.
- A parental control API which has a simple/naive default implementation but can be replaced with whatever the parents think is right.
- An API to get the user's age category (incl. localization, e.g. `us:13`). It needs explicit permission, and providers are not allowed to force it; every piece of content's min-age constraints still have to go through the parental control app. It's only for selecting the content feed/previews; the specific content served might still be rejected by the parental controls! Using it for anything else should be made criminally illegal with personal liability for executives (e.g. using it to try to sniff the exact age of a person). An implementation which just serves `us:18` but then refuses anything above 13+ (or similar) must be treated as a legitimate possibility: the app must still work in general, but it might not have any further previews. Etc. etc.
- The trust in age hints/evaluation is anchored solely in the parental controls; the setup of the parental controls is the parents' responsibility. Any form of identification(2), AI face scans, or similar as a requirement for setting up parental controls (or for not having a permanent 13+ account or similar) _is strictly outlawed_.
- All products sold with a preinstalled OS must have a default parental control app which is trivial to set up in its default configuration; the default setup must only require 1. localization (preset to the current country if known, changeable) and 2. an age to auto-derive the age group, where alternatively the parent can set the age group directly, even though that means they have to change it manually in the future (needed for children requiring special care). In its default setup it also must not track/spy on everything the child does.
- Adult accounts still need compatibility with the APIs but will always answer 18+/yes, content allowed.
- Products and e.g. downloadable OSes can decide to be "adult only", in which case access to them must be guarded like any other adult-only content (e.g. when you buy it), but then they don't need to support child accounts and can instead return a hard-coded 18+/yes, content allowed.
(This is already the short(er) version :/ ... e.g. most countries have an 18-21 category; for many countries that category is only relevant for things anyway involving identification (e.g. signing certain contracts, doing certain jobs), but e.g. the US relation to alcohol is an exception.)
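A minimal Python sketch of how the API direction proposed in the bullets above could fit together; every name here (`AgeCategory`, `ParentalControls`, `age_hint`) is invented for illustration, since no such OS API exists today. The key property is that the age hint handed to apps for feed selection is decoupled from the per-content check, which always goes through the parental controls and may deliberately be stricter than the hint:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AgeCategory:
    """A localized age category like the `us:13` example above."""
    locale: str
    min_age: int

    @classmethod
    def parse(cls, tag: str) -> "AgeCategory":
        locale, age = tag.split(":")
        return cls(locale, int(age))


class ParentalControls:
    """Parent-configured policy; the sole trust anchor for age decisions."""

    def __init__(self, hint: str, hard_limit: int):
        # The hint is what apps may query (with explicit permission) to
        # pre-filter a feed; it is allowed to overstate the real age group.
        self._hint = AgeCategory.parse(hint)
        # The hard limit is applied per piece of content, regardless of hint.
        self._hard_limit = hard_limit

    def age_hint(self) -> AgeCategory:
        return self._hint

    def allows(self, content_min_age: int) -> bool:
        return content_min_age <= self._hard_limit


# The "serves us:18 but then refuses anything above 13+" case from the
# bullets above: apps must keep working even when the hint overstates.
controls = ParentalControls(hint="us:18", hard_limit=13)
assert controls.age_hint().min_age == 18  # hint used for feed selection only
assert controls.allows(13)                # item-level check passes
assert not controls.allows(16)            # item rejected despite the hint
```

The point of the asymmetry is that a site can only ever learn the (possibly inflated) hint, never the exact age, while every concrete piece of content is still gated locally.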
---------
(1): And I don't mean just a bit of soft bondage, but things which will lead to serious long-term health issues and/or involve violence, glorification of violence, oppression, misogyny, implications of torture, rape, or child abuse, or, in the case of drawn/generated content, more than just implications, and even snuff.
(2): There can be some acceptable ways, e.g. a clerk checking your ID IRL without recording anything except yes/no, or digital ID setups which only communicate adult yes/no without identification, etc. But given that all relevant devices tend to be too expensive for children to buy themselves, and that you also should trust your child as they approach adulthood (and might have the money), I don't think anything like that is really needed. In general this should focus on efficient solutions for the age group <16. IMHO if you still need parental controls for 16+, you messed up parenting.
> A API to get the users age category (incl. localization, e.g. `us:13`). It needs explicit permissions and providers are not allowed to force it,
An API actually means that more and more details about the user will inevitably be added with time. This is a user-tracker's dream come true. No thanks. One static header, done and dusted.
> But this needs to be more complicated then 13+,18+
I will never agree with this nor will most people. Content is either adult or not adult. That is how existing parental laws are structured in most countries. The parent must decide if the child is ready to view content that is rated anything other than "G". The parent decides, not some app, not some API.
A child account on a tablet, phone, or laptop need only prevent tampering with browser settings and by default enable parental controls, which in turn simply look for an RTA header or any other indicators that the site or content is adult or user-generated in nature. Keep it simple. If people won't enable looking for a header, then the only reason they would go far further and screw with an API would be if it were to the benefit of evil (marketing, sales, manipulation of the child, manipulation of the parent).
> An API actually means that more and more details about the user will inevitably be added with time.
with that logic you could also argue a computer means more APIs will be added over time
especially if it's a law-mandated API which doesn't allow anything additional, this really isn't a problem
> I will never agree with this nor will most people
except overall (at least outside the HN bubble) most people would disagree with you; actually, the huge majority of parents and children affected by this would disagree
Treating a 16 year old like they are 13 is just completely absurd.
Expecting parents to make decisions about every single piece of content (website, YT video) their child watches, even when older, without providing any guidance or defaults is not practical at all and is therefore guaranteed to fail.
To be frank, your comment is completely quixotic, lacking any relation to the reality most parents live in today.
Random person jumping in to say, the original comment from 'Bender' is what is agreed with by almost every human I've spoken to. It is most definitely the take of every parent in my social circle, the vast majority of whom are outside of the tech space.
The issue you're describing is strictly one of parenting, and not one that can or should be handled via some government agency. Their (Bender's) suggestion is actually the best that I've seen for handling this issue, and the only one I believe those I know would all happily agree with.
On a side note, this entire comment of yours is very unhinged. I'd wager you're far far far from being aware of the 'reality most parents live in today', based off of what you've said here.
EDIT: Thinking about it more, I _guess_ we are more misunderstanding each other than we are fundamentally disagreeing (though I guess we still are disagreeing :) )
---
> I'd wager you're far far far from being aware of the 'reality most parents live in today',
I'm not (EDIT: as in, I have enough parents in my life, though there may be larger cultural differences.)
and it's beyond my understanding how anyone can think treating a 13 and a 16 year old alike is a reasonable solution
similarly, all the things I have proposed give parents the tooling they need
you make it sound like having an app fully controlled and replaceable by the parents somehow removes power/choice from them. But nothing in it excludes parents from allowing or disallowing children to watch content from other age categories, potentially on a piece-by-piece basis
what it does is take the IRL system, which isn't perfect but works reasonably well, from how we e.g. handle the sale of movies, and apply it to the digital world
including the option to ignore it
but we also have to recognize the reality that not all parents even bother to try to properly parent, and others are stressed, overworked, and struggling. So having a trivial, set-up-once, somewhat reasonable baseline solution is important (and yes, it shouldn't be important, but IRL it is anyway)
similarly, I think it's important to realize that not just 18+ content can be harmful: a barely-not-18+ horror movie can still be quite traumatizing for some 13 year olds. At the same time, when children become 16+ you should have built a relationship of trust with them where they shouldn't need to tell you or ask your permission for everything on the internet not appropriate for 13 year olds. But while trust is great, you still would want to do more than that to keep them away from e.g. online gambling and some other sites. Which brings us back to having a baseline which works without spying on your child but still blocks some things off. I don't see how this is supposed to work without having a 16+ age category between 12/13+ and 18+.
I guess we can probably agree that most content should only need the content-age-rating -> parental-controls-app direction, where you decide. The OS --api--> site/app direction is only really needed to serve a feed of "next" content and some other edge cases you could argue aren't in the best interest of children. But there are also better ways to fix those issues (through other means) IMHO. So I personally would still include it, at least for the age range 16+.
When it comes to 16+, I am not concerned about them at all. Sounds cold, no? But in reality 16 year olds have a network of people in their friend circle that can bypass any restrictions anyone sets on them. In my experience, the more money spent trying to isolate them from a perceived harm, the more likely their circle/bubble/network of friends has already long since bypassed it, sometimes out of spite or just to prove they can.
Case in point: games rated G are what many of them use for watching porn, sharing warez and pirated movies, and streaming movies/porn together. This is already a thing in many rated-G games, especially but not limited to games that use VR headsets, social games, and such. Some of the smaller indie games are how some bypass sanctions, embargoes, and more. That is just one of many examples of how teens bypass all perceived restrictions. Some small children will see porn in these games, but that is a different problem for a different day.
My focus is entirely on small children and their most common use cases, the 99% problem. Keeping the nastier parts of the internet partitioned from small children is mostly accepted by most parents, and the right time to do it is before they, as teens, know what they are being locked out of. As the child evolves and develops, the parent can decide when it is time to lift parental controls and then sit with the child whilst they explore the nastiest of nastiness together. The parent can answer questions, instead of waiting until they are young tweens for their tween friends to answer those questions incorrectly, and then learning the hard way by spreading STDs and/or getting impregnated. Before someone says it: yes, tweens are getting pregnant more often because their bodies are developing earlier now due to chemicals they are being exposed to, and they are hitting puberty much earlier, some as young as 8 or 9. Some are getting penetrated as young as 6 or 7 and younger. They need to learn from their parents, not random kids their age or random websites or some GPT.
One simple static RTA header, set on a load balancer or accelerator or within server applications, is done and dusted. It does not get any easier. A check for that header by the user agent or application on a locked-down child account to trigger parental controls is also easy. This was a thing in the early 2000s on MSIE and a few other browsers based on MSIE, I think SlimBrowser and a few others. An intern could likely add this check in an afternoon, not counting Quality Assurance time. No leaking data via APIs, no sharing age or any other identifying attributes. If someone is arguing to gather this data, I cannot take their ideas in good faith, because I have worked alongside all the nasty people that want this data, and I know they have no ethics and will sell this data to all manner of evil people and evil organizations that would be good bed buddies with Epstein and friends. I am unwavering on this belief. I have whipped this dead horse into micronized dust and will continue long after that micronized dust is broken into quarks and leptons.
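For scale, the header-only check described above is genuinely small. A Python sketch, assuming the label arrives as an HTTP response header named `Rating` or `RTA` (sites can also embed the RTA label as a meta tag, which this sketch ignores); the label string itself is the published RTA label, and the function names are invented for illustration:

```python
# The canonical RTA label string published at rtalabel.org.
RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"


def is_rta_labelled(headers: dict[str, str]) -> bool:
    """True if a plausible response header carries the RTA label."""
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return any(normalized.get(name) == RTA_LABEL for name in ("rating", "rta"))


def should_block(headers: dict[str, str], parental_controls_on: bool) -> bool:
    # No data leaves the device: the decision is made purely client-side,
    # with nothing uploaded and no age or identity shared with the site.
    return parental_controls_on and is_rta_labelled(headers)


# Labelled site + controls on -> blocked; everything else passes through.
assert should_block({"Rating": RTA_LABEL}, True)
assert not should_block({}, True)
assert not should_block({"Rating": RTA_LABEL}, False)
```

The whole check fits in a dozen lines because all the trust lives on the server side (one static header) and all the policy lives on the client side (the parental controls toggle).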
Does it though? Unless all countries unify their laws regarding this matter it will fail in the same way that blacklist filtering does.
Also I'm not convinced that borrowing a device presents a new or different failure mode. Children could always obtain physical contraband from their friends so nothing has changed here.
it solves the problem of it being too trivial for a 12 year old to access content which at best is quite problematic and at worst outright traumatizing
as in, the same reason we have laws that a clerk glances at the age on your ID if you look young and buy alcohol, but your parents are still allowed to let you drink with them if they think it's right (or watch a 16+ movie with them, etc.)
this is also why it really shouldn't be anything much fancier than parental controls checking the min age of content locally / indicating the age for feed fetching. Everything else is disproportionate (apart from all the other issues it might have).
Ideally, they'd require OS (desktop and mobile) to have an adult mode and a restricted mode (set up by the adults when they buy the device), and then let third-party apps confirm the status (e.g., age) in the latter case. Then you have minimal privacy issues and many parents actually want something like this.
> “Most households aren't going to have a separate device for every family member…”
They want us to all to have user accounts and login like well behaved workers. So cute. Little Donald can login for hisself, and doesn’t need mommy to do it for him.
Apparently there's been work to expose Meta pushing/funding this, to shift age responsibility away from itself and to force fine-grained age data to be provided to apps.
the law is there because parents are fucking clueless unprincipled whining crybabies, who need a lot of support, and sometimes that includes a bit of pushing ...
or who knows what problem is this supposed to fix. orphans buying phones? kids buying secret phones behind their parents back?
I frequently see comments which would have made sense in the past (e.g. the early 2000s) but don't fully reflect reality anymore
it's as if humans have a tendency to make up their mind/world view in their younger years and then tend to stick with it / only change it slowly, as long as no big life-changing events happen
Once you get outside the SV and NYC bubbles, the vast majority of kids do not have their own laptops in the US. Phones, obviously are somewhat more common, but as even you note that's mainly with regard to teenagers - the average 10 year old in middle America does not have their own phone.
> Nvidia's "rack scale" machines like GB200-NVL72s and GB300-NVL72s are basically a fully built rack you roll into a DC and plug into power and network. In that case, Oracle should probably just buy the rack-scale Vera Rubins when they come out instead of Blackwells and roll them into their new DCs.
This is what I don't understand. Why is the article making the assumption that the DC itself is tied to a particular GPU generation? AWS doesn't knock down a building and start over every time Intel releases a new Xeon.
Xeons have a much longer shelf life and diverse workloads. If you order hardware specifically for LLM inference and then some new hardware/model combination is much better at that (which it will be, because a lot of people are working on that), you might be in trouble.
It's like setting up a warehouse of GPUs to mine bitcoin while others are switching to ASICs.
No I mean inference. The idea is that inference demand will be massive and a race to the bottom with razor thin margins.
Training costs can be amortized over the entire lifetime of the model, but if you lose money on inference or can't offer competitive usage limits for subscribers, there's no amortizing that.
No it's all about having the top model first and training time is what's crucial. OpenAI has already shown willingness to bleed money for the sake of brand and we can expect that to continue.
> a CTO of a F100 company explicitly state that whether AI is driving efficiency or not, the capital investment, and more importantly, the promises of efficiency to investors will mean some people will be let go
That seems like an insane gamble to me. Lay off all the workers now and hope that AI can deliver on its promise to replace them some time in the indeterminate future.
At this point it seems the entire AI Safety/Ethics debate was nothing more than a Marketing campaign to hype up the capabilities of the models - get people to think that if they're potentially dangerous that must mean they're so capable and they need to sign up for a subscription.
> the absurd overhiring that they did in 2022 and 2023
The overhiring took place from mid 2020 through mid 2022. The reversal into layoffs started in late 2022 and was in full swing in 2023. While the overhiring problem was real, the correction was largely complete over a year ago. The layoffs we're seeing today have nothing to do with overhiring and everything to do with managing earnings to sustain equity valuations.
> the correction was largely complete over a year ago
I am not sure how to be certain about this, as the numbers (as far as I remember) still stayed higher than before. Moreover, I do not think Block fired people back then?
But there is an extra factor: around covid times, hiring became a signal for growth, thus stocks went up when hiring rates were announced; now firing is a signal for AI/efficiency, thus stocks go up when companies fire people. It becomes easier to mass-fire people when it does not signal that there are problems going on (for the company).
Ads won't go away. They'll just move from infesting websites to infesting AI chatbots.