This was a special/unexpected situation: one of the other passenger jets had declared an emergency and needed to evacuate its passengers onto the ground (there were no free gates to return to). The firetruck was on its way to assist with the emergency.
Yeah, but why was there no red alert on all the monitors when both the airplane and the truck had a green light on the same runway? That's the minimum of automation I would expect, ideally synced to all the participants (truck drivers, pilots, etc.). It would not have been hard for a system to predict the collision, given that it had all the data (each participant's position plus its route).
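To be concrete about how cheap that check is, here's a minimal closest-point-of-approach sketch. Everything in it is illustrative: the names, positions, and thresholds are made up, and it assumes positions have already been fused from radar/ADS-B onto a flat local grid.

    package main

    import (
        "fmt"
        "math"
    )

    type Track struct {
        ID     string
        X, Y   float64 // position on a flat local grid, meters
        VX, VY float64 // velocity, meters/second
    }

    // cpa returns the time (seconds from now) and separation (meters)
    // at the two tracks' closest point of approach, assuming both
    // hold their current velocity.
    func cpa(a, b Track) (t, d float64) {
        dx, dy := b.X-a.X, b.Y-a.Y
        dvx, dvy := b.VX-a.VX, b.VY-a.VY
        if v2 := dvx*dvx + dvy*dvy; v2 > 0 {
            t = -(dx*dvx + dy*dvy) / v2
        }
        if t < 0 {
            t = 0 // closest approach already passed; use current separation
        }
        return t, math.Hypot(dx+dvx*t, dy+dvy*t)
    }

    func main() {
        jet := Track{ID: "JZA646", X: -1800, Y: 0, VX: 70, VY: 0}  // short final, ~135 kt
        truck := Track{ID: "Truck 1", X: 0, Y: -200, VX: 0, VY: 8} // cleared to cross
        if t, d := cpa(jet, truck); t < 60 && d < 150 {
            fmt.Printf("ALERT: %s and %s come within %.0fm in %.0fs\n",
                jet.ID, truck.ID, d, t)
        }
    }

Run that every second over every pair of tracks near a runway and it flags the conflict tens of seconds out, for essentially zero compute.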
That's like the argument about how we'll never (or should never) have self-driving cars.
Clearly human-run ATC results in situations like this, so the idea that automated ATC could result in a runway collision and should therefore never be implemented is a bad one.
It's not an argument for total automation but an argument for machine augmentation. It would be fascinating just as an experiment to feed the audio of the ATC + flight tracks [1] into a bot and see if it could spot that a collision situation had been created.
You obviously wouldn't authorize the bot to do everything, but you could allow it to autonomously call for stops or go-arounds in a situation like this where a matter of a few seconds almost certainly would have made the difference.
Imagine the human controller gives the truck clearance to cross and the bot immediately sees the problem and interrupts with "No, Truck 1 stop, no clearance. JZA 646 pull up and go around." If either message gets through then the collision is avoided, and in case of a false positive, it's a 30 second delay for the truck and a few minutes to circle the plane around and give it a new slot.
I'm not well-enough versed in HMI design or similar concepts, but I think this idea for augmentation could collide with alarm fatigue and the disengaged overseer problem in self-driving cars.
If we aren't confident enough in the automation to allow it to make the call for something simple like a runway incursion/conflict (via total automation), augmentation might be worse than the current approach that calls for 100% awareness by the ATC. Self-driving research shows that at level 2 and level 3, people tune out and need time to get back "in the zone" during a failure of automation.
> could collide with alarm fatigue and the disengaged overseer problem
Depends both on the form the "alarm" takes as well as the false positive rate. If the alarm is simply being told to go around, and if that has the same authority as a human, then it's an inconvenience but there shouldn't be any fatigue. Just frustration at being required to do something unnecessary.
Assuming the false positive rate were something like 1 incident per day at a major airport I don't even think it would result in much frustration. We stop at red lights that aren't really necessary all the time.
Depending on how late the go-around/aborted landing is triggered, that can be a danger in itself. Any unexpected event in the landing flow has a risk, to the point that there's a "sterile cockpit" rule in that window.
Even if it's just a warning to the ATC, distracting them and forcing them to reexamine a false positive call interrupts their flow and airspace awareness. I get what you're saying, that we could err on the side of alert first, out of precaution; but all our proposed solutions would really come down to just how good the false positive and false negative rates are.
BTW, stopping at a red light unnecessarily (or, by extension, gunning it to get through a yellow/red light) could get you rear-ended or cause a collision. Hard braking and hard acceleration events are both penalized by insurance driver trackers for exactly that reason.
I'm assuming there that any such system would be appropriately tuned not to alert outside of a reasonably safe window. My assumption is that it would notice the conflict promptly after any communication, which under ordinary circumstances should leave plenty of time to correct. To be fair, I don't expect such a system would address what happened in this case because, as you note, false alarms on too short a notice pose their own danger, which may well prove worse on the whole.
This specific situation, I think, could instead have been cheaply and easily avoided if the ground vehicle had been carrying a GPS-enabled appliance that ingested ADS-B data and displayed for the driver the predicted trajectories of anything in the vicinity near the ground. Basically a panel in the vehicle showing where any nearby ADS-B-equipped planes were expected to be within the next 30 seconds or so.
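A rough sketch of the prediction step, assuming positions are already decoded from something like a dump1090-style feed (the field names, coordinates, and thresholds here are illustrative, not a real interface):

    package main

    import (
        "fmt"
        "math"
    )

    // Contact is one decoded ADS-B position report.
    type Contact struct {
        Callsign string
        Lat, Lon float64 // degrees
        AltFt    float64 // altitude, feet
        GSKnots  float64 // ground speed, knots
        TrackDeg float64 // course over ground, degrees
    }

    // project dead-reckons a contact t seconds ahead using a flat-earth
    // approximation, which is fine over airport-sized distances.
    func project(c Contact, t float64) (lat, lon float64) {
        dist := c.GSKnots * 0.514444 * t // meters travelled in t seconds
        th := c.TrackDeg * math.Pi / 180
        lat = c.Lat + dist*math.Cos(th)/111320
        lon = c.Lon + dist*math.Sin(th)/(111320*math.Cos(c.Lat*math.Pi/180))
        return lat, lon
    }

    func main() {
        // feed() stands in for whatever decodes the vehicle's ADS-B input.
        for _, c := range feed() {
            if c.AltFt > 1000 {
                continue // the panel only cares about traffic near the ground
            }
            lat, lon := project(c, 30)
            fmt.Printf("%s: %.5f,%.5f -> %.5f,%.5f in 30s\n",
                c.Callsign, c.Lat, c.Lon, lat, lon)
        }
    }

    func feed() []Contact { // stub data for the example
        return []Contact{
            {Callsign: "JZA646", Lat: 40.78, Lon: -73.88,
                AltFt: 400, GSKnots: 140, TrackDeg: 130},
        }
    }

The panel would just draw those projected points over a field map; it needs no integration with ATC at all, only a receiver and a screen.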
> stopping at a red light unnecessarily
Is it not always legally necessary where you live? It certainly is here. When I described them as unnecessary I was recalling situations that would clearly be better served by a flashing yellow.
Yeah, I think there are certainly optimizations possible. Listening to ATC traffic, I'm surprised just how much of the ground ops stuff could be computerized: basically traffic signals for runways.
What you're describing almost sounds like TCAS, a collision avoidance system for planes in the air, and would be a good idea.
As for the red lights: yes, legally you would be required to stop if you're before the stop line. My language wasn't clear; I was trying to describe those scenarios where a light turns just as you're getting to/into the intersection. Some people will gun it to get through, others will jump on their brakes to avoid running what's technically a red.
Valid concern. Ultimately, the ideal would be to have commentary from professionals in the space to say what it is that would be most helpful in terms of augments.
In doctors' offices it was easy: just listen to the verbal consult and write up a summary so the doc doesn't spend every evening charting. What is the equivalent for ATC, in terms of an interface that would help surface relevant information, maintain context while multitasking, provide warnings, etc.? Basically something that is a companion and assistant, but not in a way that removes agency from the human decision-maker or leaves them subject to zoning out and losing context, so that they stay equipped to handle an escalation.
There is such a bot and it is installed in LaGuardia Airport. The system is called Runway Status Lights, and it was supposed to show red lights to the truck. And the truck was supposed to stop and ask the controller: “If an Air Traffic Control clearance is in conflict with the Runway Entrance Lights, do not cross over the red lights. Contact Air Traffic Control and advise that you are stopped due to red lights.” https://www.faa.gov/air_traffic/technology/rwsl
That is how it is supposed to work. How it worked in reality is another question, of course, and no doubt it will be investigated.
> That's like the argument about how we'll never (or should never) have self driving cars.
The reason we won't ever have self-driving cars is that no matter how clever you make them, they're only any good when nothing is going wrong. They cannot anticipate, they can only react, too slowly, and often badly.
They absolutely could anticipate, and arguably with more precision than people. The common occurrence of collisions when making left turns at an intersection shows that people's ability to anticipate is fallible too: people can't even anticipate that a car driving towards them will continue to do so.
Self driving cars' reaction times aren't slowed by drugs, alcohol, or a Snapchat notification pulling their attention.
Current systems haven't been proven in all weather conditions and all inclement situations (e.g., that Tesla collision with a white semi-trailer), but it's crazy to say that self-driving cars won't match or exceed human drivers in terms of safe miles driven. Waymo has already shown an 80 to 90% reduction in crashes compared to people.
Can you clarify what you mean by unsafe? From what I can tell from the study, they're comparing to a human benchmark - basically the "average" driver, not a cherrypicked "bad" driver cohort.
Just as with wealth, the average is drastically skewed by outliers. I don't recall precise numbers off the top of my head, but there are plenty of people who have commuted daily for multiple decades and have never been in a collision. I myself have only ever hit inanimate objects at low speeds (the irony) and have never come anywhere near totaling a vehicle; my seatbelts and airbags have yet to actually do anything. Freight drivers regularly achieve absurd mileage figures without any notable incidents.
As I stated earlier I agree with the broader point you were trying to make. I like what they're doing. It's just important to be clear about what human skill actually looks like in this case - a multimodal distribution that's highly biased by category.
Yeah, I agree with you too. Per IIHS, the fatality rate per 100,000 people ranged from 4.9 in Massachusetts to 24.9 in Mississippi, so clearly there's a huge variance even with "US population".
The other person's comment was "we won't ever have self-driving cars" because they aren't good enough: but something like Waymo already is, particularly at the population level. If we waved a wand and replaced everyone's car with a Waymo, accident rates would fall, both at a population level and per mile driven.
It's even tough to argue that a Waymo would be more dangerous for a good driver: Waymos too have never been the cause of a serious accident, and the fleet has certainly driven more miles than any human driver. All 4 serious-injury accidents and both fatalities were essentially "other driver at fault, hit the Waymo".
This isn't meant to glaze Waymo, but to point out that self-driving cars in certain environments are "solved". They're expensive, proprietary, and not suitable for trucking or deployment to cold climates (yet?), but self-driving that is safer than people-driving is already here. To your point, human skill in driving is variable: Waymo won't replace Verstappen right now, but just like the AGI argument with LLMs, they're already "smarter" than the average person in certain domains.
There are exceptions all the time: they turn back because a warning light came on, they saw a deer on the runway, a passenger got up to go to the bathroom. There's no way that could be automatic; plus, they often need ATC to look at their jet to see if it's damaged.
My suggestion is to restrict the use of smaller jets like CRJs and turboprops. I know airports like LaGuardia can't handle the big jets either, but they could reduce the slots and require a jet that holds, say, 150 people or more. This would result in fewer flights per day to some airports, but it would reduce overall congestion while still serving the same number of passengers.
Imagine it were 90% automated. Now imagine there's a 3 hour outage of the automated system.
You're left with a bunch of planes in the sky that can't stay there forever, and not enough humans on the ground to manually land them.
Now imagine the outage is also happening at all airports nearby, preventing planes from diverting.
How do you get the planes out of the sky? Not enough humans to do it manually.
Now imagine the system comes back online. Does it know how to handle a crisis scenario where you have dozens of planes overhead, each about to run out of fuel? Hopefully someone thought of that edge case.
Remember when all the Waymos were confused by a power outage? Now do that, but with airplanes that will fall thousands of feet and kill hundreds, instead of parking in the middle of the street.
I'm not saying we shouldn't automate things. We should. But it's not easy. If it were, we would have done it already.
I think the point they're making is that the failure mode of a waymo and automated air traffic control could look the same from an angle, but would have very different consequences.
That's what everyone screaming "funding" doesn't seem to understand here. If your failure mode for potentially hundreds of people dying is one controller over the radio forgetting something, then it'll happen eventually. And it has happened; there are plenty of videos on YouTube of near-miss radio recordings. When a plane is landing at over 100 mph, simple good luck takes care of things the majority of the time.
It just feels wrong that the primary form of control in 2026 is voice over radio.
> Now imagine there's a 3 hour outage of the automated system.
Planes divert to another airport, passengers grumble, end of story. Airport closures can and do happen all the time for all kinds of reasons, including weather or equipment malfunctions.
Speaking of runway crossings specifically, you could have an automated backup, and require authorization from both ATC and the automated system to enter a runway.
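The gate itself is trivial logic; a minimal sketch with hypothetical names:

    package main

    import "fmt"

    // Hypothetical two-key gate: entering the runway requires both the
    // human controller's clearance and agreement from the automated
    // conflict checker. A dead automated system simply never answers
    // "clear", so an outage degrades to holding short and sorting it
    // out by voice: an inconvenience, not a hazard.
    func mayEnterRunway(atcCleared, autoCleared bool) bool {
        return atcCleared && autoCleared
    }

    func main() {
        // ATC cleared the crossing, but the automation sees a conflict: hold short.
        fmt.Println(mayEnterRunway(true, false)) // false
    }

The point is that the two authorities are ANDed, so either one can stop an incursion; neither can cause one alone.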
We build pacemakers, AEDs, flight control software, and other mission-critical life-and-death software. The idea that we'll just forever keep the system run by specially trained humans with known and foreseeable faults because poorly designed software could fail is head-in-sand unreasonable.
Look what happened when the power went out in SF and the Waymos just stopped in the street because they were confused and there weren’t enough humans to direct them. Now imagine that but with planes that will fall out of the sky when they run out of fuel since they can’t land. Automating this is pants on head retarded.
That sounds like a poorly thought-out implementation.
An example of a poorly thought-out implementation elsewhere does not exclude the possibility of coming up with a better one than humans coordinating with their mouths over radio.
You can launch a new product in one month instead of 12. I think this works best for startups, where the risk tolerance is high, but less well for companies such as Amazon, where system failure has high costs.
Most startups fail because of market or money issues, not product issues. You can build a wonderful cathedral and pray to an empty god in a wonderful desert; then someone cuts the oil supply and the cathedral is no longer air-conditioned. So go ahead, build 50 products and hope that 1 succeeds. My bet is that all 50 will fail and that 1 person in 1,000 will succeed. But everyone will spend so much money on Nvidia and Anthropic that they will eradicate whatever else is in the world.
They're here, I made one. Not a toy or vibecoded crap, people got immediate value. Not planning to doxx myself by linking it. This was more than a year ago when models weren't even as good yet. A year later it has thousands of consistent monthly users, and it only keeps growing. It's nothing compared to VC startups but for a solo dev, made in a month? Again, it's not a toy, it offers new functionality that simply didn't exist yet and it improves people's lives. The reality is that there's no chance I would've done it without LLMs.
Real ones don't exist. Conveniently, nobody that has claimed to have created a functional AI product is willing to "doxx themselves" by linking to their app.
Weirdly, people who have actually created functional one-man products don't seem to have the same problem, as they welcome the business.
The main issue was the content the movie industry produced, which looked a lot like AI slop. I think the DEI lecturing was another nail in the coffin. Unless that changes and they magically add something new to the cinema experience, I think they will keep sliding into irrelevance, because now everybody can produce AI slop.
Do people even want their culture democratized with just anyone being able to produce high entertainment? The recent popularity of "Harry Potter by Balenciaga (2026)" AI fashion parody retelling shows we might actually be stuck in this cultural rut forever with or without AI help.
Why can't we have original stories (e.g., Sinners, Black Panther, White Lotus) instead of trying to make Snow White Black or whatever and painting people as racist?
I get that people want to make these stories more inclusive but the characters are just too powerful to change them and not expect a backlash.
People prefer new characters or “staying true to the original story/character” instead of lazy remixes.
Or you can create your own remix with AI now. Feed in the movie and make the characters as you want. The result may seem a bit weird to a fan of the movie/character (e.g., Snow White as a fat Black gay male), but for you as an activist it will check the boxes you are looking for. Just don't invoke racism if people don't like your AI slop.
Starfire is an orange alien from Tamaran in the comics, and people said she shouldn't be Black in live action; same for the green witch in Wicked.
Should Annie also not have been Black in the remake? If we want to stay "true" to the original stories, we can only have Black characters as freed slaves shucking and jiving while singing "Zip-a-Dee-Doo-Dah".
There are schools located on US military bases, not just near. That doesn't make them viable targets. You better believe that if an attacker hit a school on a US base, the soldiers wouldn't forgive that so easily.
Well, that's not very smart, IMHO. To me they look like human shields. Shame on me: I was blaming Hamas for using residential compounds for military ops, but it looks like everyone does it.
I wouldn't send my kid to a school on a military base, especially in times of imminent war. I'd label that gross negligence, even provocation.
The problem is that the soldier parents want their kids to live with them on the base. The alternative is they have their kids live in an orphanage somewhere else until they decide to retire, or soldiers just aren't allowed to have kids at all. Neither is very realistic, so there are schools on bases.
Orphanage? What about a school 2 miles away from the military installation? Or, in Iran's case, why don't they move their sh** to the edge of the city, like the shopping malls do? I get that it's inconvenient, but it's far from the orphanage story. Not to mention, in case of war, should the kids be kept close to a military base? Military installations within residential areas just beg for civilian casualties. It's as simple as that. If you see a military base near you, move out or ask them to move out.
Usually they serve military families, but at least in the United States those kids probably aren't any safer from getting killed in an off-base school given how common school shootings are now.
I don't believe that. In my opinion, the school was deliberately targeted because the students studying there were mostly the children of Iranian military officials. Iran's military has (surprisingly) behaved in a very restrained manner over the last 2 years against Israel (and the US), possibly on the advice of Russia and China, and that is why Israel and the US have not been able to galvanise much international support for their aggression against Iran. The deliberate assassination of the Ayatollah (a Muslim religious leader, who was 87+ years old and would soon have been replaced by the Iranians themselves) and the targeted slaughter of the children of Iranian military officials are meant to provoke Iranians and Shia Muslims elsewhere into committing acts of terrorism against the US and Israel. Then international outrage can be whipped up by the Western media, and NATO can be bulldozed into joining the war and sending soldiers into Iran.
The children of soldiers are not legitimate military targets.
> ... in my opinion, the school was deliberately targeted because the students studying there were mostly the children of Iranian military officials. ...
Your opinion is wrong. There is no possibility of that being the justification for choosing a target. The American armed forces are too professional to do such a thing. Terror is not in our toolbox.
Americans and Europeans are, in general, good people. But their political leaders, not so much. And this war is being run by a genocidal regime in Israel and the Trump administration; moral values are the least of their concern. (Also, I suspect the Israeli regime of being the brains behind this attack. Hegseth, the current Secretary of "War", is also a known Muslim-hater who wouldn't have been hard to persuade.)
In this case they were using very expensive munitions, which go exactly where they are targeted to go; they were not using cheap, dumb bombs, which have a wide margin of error.
It’s not only about the precision of the munition. When you put military installations in residential areas you get this kind of result regardless of how precise the weapons are.
The maps could be outdated, the intelligence may be flawed, etc. In a hot war, collateral casualties are secondary to the military objective. You try to avoid civilian deaths, but that's on a best-effort basis.
This is the right direction. Another important bit, I think, is GC integration. Many languages, such as Go and C#, don't do well on Wasm because of the GC: they have to ship their own GC due to the lack of various GC features (e.g., interior pointers).
That's an orthogonal problem. First it needs to be possible and straightforward to write GCed languages in the sandbox. Second, GCed languages need to be willing to fit with the web/WASM GC model, which may not exactly match their own GC and which won't use their own GC. And after that, languages with runtimes could start trying to figure out how they might reduce the overhead of having a runtime.
I think it'd be supported by them the moment they ship it. Whether others will be excited to use it is an open question. There's no central registry of "languages supported for WebAssembly", by design; it supports any language that can compile to standards-compliant WebAssembly.
WasmGC doesn't support interior pointers and is quite primitive in its available set of operations. This is quite relevant if you care about performance, as it would be a regression in many languages, hence why it has largely been ignored by everything other than the runtimes that were part of the announcement.
In Java land, the fact that you effectively don't have pointers, but rather everything is an object reference, means this ends up not being an issue.
I wonder if the WASM limitation is related to the fact that JavaScript has pretty similar semantics, with no real concept of a "pointer". It means that to get interior pointers, you'd need to also introduce that concept into browsers' GCs, which might be a bit harder since it'd only be for WASM.
Object references are pointers. WasmGC only supports pointers which point to the start of an object. However, some languages have features which require pointers that point inside of an object while still keeping that object alive.
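Go is a concrete example: taking the address of a struct field gives you exactly such an interior pointer, and the GC has to keep the whole enclosing object alive through it. A minimal sketch:

    package main

    import "fmt"

    type Record struct {
        Header [64]byte
        Value  int
    }

    func main() {
        r := &Record{Value: 42}
        p := &r.Value // interior pointer: into the middle of Record, not its start
        r = nil       // the interior pointer is now the only remaining reference
        // The GC must keep the entire Record alive as long as p exists;
        // WasmGC's reference types can't express this.
        fmt.Println(*p)
    }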
Limiting WASM to what is capable in JavaScript is quite a silly thing to do. But at the same time there are vastly different GC requirements between runtimes so it's a challenging issue. Interior pointers is only one issue!
I know this is pedantic, but they aren't. At least not in the sense of what it means for something to be a pointer.
Object references are identifiers of an object, not memory pointers. The runtime takes those object references and converts them into actual memory addresses. It has to do that because the position of the object in memory (potentially) changes every time a GC runs.
This does present its own problems, and different runtimes make different choices around it. Go and Python do not move objects in memory; as a result, it's a lot easier for them to support interior and regular pointers as actual pointers. But that also means they are slower to allocate and free, and they have memory-fragmentation issues.
I'm not sure about C# (the only other language I saw with interior pointers). I think C# semi-recently switched over to a moving collector. In which case, I'm curious to know how they solved the interior pointer problem.
> I'm not sure about C# (the only other language I saw with interior pointers). I think C# semi-recently switched over to a moving collector. In which case, I'm curious to know how they solved the interior pointer problem.
Object references are just pointers in .NET. See the JIT disassembly below. It's been using a moving GC for a long time, too.
Interesting. I wonder how C# handles the moving. I'm guessing it has to go in and fix up the pointers on the stack after a GC run? Or is there some OS-level virtual-pointer weirdness going on? How does C# guard against someone doing something silly like turning a pointer into a long and then back into a pointer again later?
The runtime knows exactly where GC pointers are so I would assume that is what it does. It even knows precisely when locals are no longer needed so it can stop treating objects they refer to as reachable. It's instruction level, not based on scopes, so an object can be freed while it is still in scope if the code doesn't access it!
> How does C# guard against someone doing something silly like turning a pointer into a long and then back into a pointer again later?
I don't think it does. You can't do most of these things without using unsafe code, which needs a compiler flag enabled and code regions marked as `unsafe`.
Why would you buy old-gen GPUs instead of committing to buy the newest and best GPU available in 2 years? Anyone knows that electronics depreciate fast. Unless they get them at a discount, this is really stupid. It's like buying the best TV or iPhone at full price and keeping it in storage for 2+ years.
Well, by the time they become obsolete, you'll be able to run that computing on a Mac with no special cooling, so I really doubt they will be of any use. Maybe in some parts of the world where electricity is cheap. If someone really wants to find out, perhaps watching what happened to crypto ASICs could help.