It's either a Career Limiting Event or a Career Learning Event.
In the case of a Learning event, you keep your job, and take the time to make the environment more resilient to this kind of issue.
In the case of a Limiting event, you lose your job, and get hired somewhere else for significantly better pay, and make the new environment more resilient to this kind of issue.
Realistically, there's a third option it would be glib not to consider: you lose your job, get hired somewhere else, and screw up in some novel and highly avoidable way, because deep down you aren't as diligent or detail-oriented as you think you are.
In the average real world, the staff engineer learns nothing, regardless of whether they lose or keep their job. Some time down the line, they make other careless mistakes. Eventually they retire, having learned nothing.
I was able to run some stats at scale on this, and people who make mistakes are more likely to make more mistakes, not fewer. Essentially, we were sampling from a distribution of propensity for mistakes, and that propensity dominated any sign of learning from mistakes. Someone who repeatedly makes mistakes is not repeatedly learning; they are accident-prone.
My impression of mistakes was that they were an indicator of someone who was doing a lot of work. Such people aren't necessarily making mistakes at a higher rate per unit of work; they just do more of both per unit of time.
From that perspective, it makes sense that the people who made the most mistakes in the past will also make the most mistakes in the future, but it's only because the people who did the most work in the past will do the most work in the future.
If you fire everyone who makes mistakes you'll be left only with the people who never make anything at all.
In this case it was trivial to normalize for work done.
It's very human to want to be forgiving of mistakes; after all, who hasn't made any? But there are different classes of mistakes, made by different types of people. Making a mistake doesn't change what type of person you are, but if you sample from the population of people who have made mistakes, you bias your sample toward those prone to making them. In my experience, any effect of learning is much smaller than this initial bias.
A decade of data from many hundreds of people, in a help-desk-type role where all communication was kept, mostly chat logs and emails. Machine learning with manual validation. The goal was to put a dollar figure on mistakes made, since customers were much more likely to quit and never come back if a problem was our fault; but many customers are also nothing but a constant pain in the ass, so it was important to distinguish who was right whenever there was a conflict.
Mistakes made per call, like many things, followed a Pareto distribution, so roughly 90% of the mistakes were made by 10% of the people. Identifying and firing those 10% made a huge difference. Some of the ‘mistakes’ were actually the result of corruption and had management backing, as management was enriching itself at the company's expense (a pretty common problem), so the initiative was killed after the first round.
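As a sketch of the normalization step (with made-up per-agent numbers, not the real dataset), ranking by per-call mistake rate rather than raw counts might look like this:

```python
# Hypothetical per-agent (calls_handled, mistakes) tallies; not real data.
agents = {
    "a": (4000, 8),  "b": (1200, 2), "c": (2500, 5),
    "d": (3000, 90), "e": (800, 1),  "f": (1500, 3),
    "g": (2200, 4),  "h": (900, 2),  "i": (5000, 10),
    "j": (600, 70),
}

# Normalize for work done: rank by mistakes *per call*, not raw counts.
rates = {name: m / c for name, (c, m) in agents.items()}
ranked = sorted(rates, key=rates.get, reverse=True)

total_mistakes = sum(m for _, m in agents.values())
top = ranked[:2]  # the worst 2 of 10 agents by per-call rate
share = sum(agents[n][1] for n in top) / total_mistakes
print(f"{top} account for {share:.0%} of all mistakes")
# prints: ['j', 'd'] account for 82% of all mistakes
```

Note that agent "j" handles the fewest calls yet has the worst rate; ranking by raw counts alone would understate the distinction.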
This sounds really interesting, but possibly qualitatively different from programming/engineering, where automated improvements/iterations are part of the job (and what's rewarded).
What if you define a hard rule from these statistics that "you must fire anyone on their first error"? Won't your company be empty within a rather short timeframe?
[Or will it be composed only of do-nothing people?]
Why would you do that? You're sampling from a distribution; a single sample carries only a small amount of information, though repeat samples compound.
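One way to make the "repeat samples compound" point concrete is a Beta-Binomial update on a hypothetical per-task mistake propensity; the Beta(1, 99) prior below is an illustrative 1% base rate, not anything from the dataset:

```python
def posterior_mean(mistakes, tasks, a=1.0, b=99.0):
    """Beta-Binomial update; the Beta(1, 99) prior encodes an
    assumed 1% per-task base rate of mistakes."""
    return (a + mistakes) / (a + b + tasks)

print(posterior_mean(0, 0))    # prior belief: 0.01
print(posterior_mean(1, 100))  # one mistake in 100 tasks: still 0.01
print(posterior_mean(5, 100))  # five in 100 tasks: estimate triples to 0.03
```

A single mistake at the base rate moves the estimate not at all, while repeated mistakes quickly dominate the prior.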
Ceph storage uses a hierarchical consistent hashing scheme called "CRUSH" to handle hierarchical data placement and replication across failure domains. Given an object ID, its location can be calculated, and the expected service queried.
As a side effect, it's possible to define a logical topology that reflects the physical layout, spreading data across hosts, racks, or by other arbitrary criteria. Things are exactly where you expect them to be, and there's very little searching involved. Combined with a consistent view of the cluster state, this avoids the need for centralized lookups.
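A toy illustration of the idea (not the actual CRUSH algorithm, and a made-up topology): every client runs the same deterministic placement function against the cluster map, so they all agree on replica locations without any lookup service. This sketch uses rendezvous-style hashing at each level of the hierarchy:

```python
import hashlib

# Made-up topology: hosts grouped by rack, the failure domain here.
TOPOLOGY = {
    "rack1": ["host-a", "host-b"],
    "rack2": ["host-c", "host-d"],
    "rack3": ["host-e", "host-f"],
}

def weight(obj_id, item):
    """Deterministic pseudo-random draw for an (object, item) pair."""
    digest = hashlib.sha256(f"{obj_id}/{item}".encode()).digest()
    return int.from_bytes(digest[:8], "big")

def place(obj_id, replicas=3):
    """Pick one host from each of `replicas` distinct racks, taking the
    highest-weight candidate at each level (rendezvous hashing)."""
    racks = sorted(TOPOLOGY, key=lambda r: weight(obj_id, r), reverse=True)
    return [max(TOPOLOGY[r], key=lambda h: weight(obj_id, h))
            for r in racks[:replicas]]

# Any client computes the same locations from the object ID alone.
print(place("object-123"))
```

The real CRUSH adds weighted buckets, tunables, and stable remapping when the topology changes, but the "calculate, don't look up" property is the same.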
Depends on the setup, but programmatic access to a Gmail account that's used for admin purposes would allow for account hijacking via exfiltration of any keys or passwords in the mailbox, sending unattended approvals, and carrying on autonomous conversations with third parties who aren't on the lookout for impersonation. In the average case, the address book would probably get scraped and the account would be used to blast spam to the rest of the internet.
Moving further, if the OAuth token confers access to the rest of a user's Google suite, any information in Drive can be compromised. If the token has broader access to a Google Workspace account, there's room for inspecting, modifying, and destroying important information belonging to multiple users. If it's got admin privileges, a third party can start making changes to the org's configuration at large, send spam from the domain to tank its reputation while earning a quick buck, or phish internal users.
The next step would be racking up bills in Google's Cloud, but that's hopefully locked behind a different token. All the same, a bit of lateral movement goes a long way ;)
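For illustration, a leaked token's blast radius can be triaged from its granted scopes. The scope URLs below are real Google OAuth scopes, but the capability labels and risk tiering are my own rough sketch:

```python
# Real Google OAuth scope URLs; the capability labels are illustrative.
HIGH_RISK = {
    "https://mail.google.com/": "full mailbox read/send/delete",
    "https://www.googleapis.com/auth/drive": "all Drive contents",
    "https://www.googleapis.com/auth/admin.directory.user": "Workspace user admin",
}

def assess(scopes):
    """List the high-risk capabilities a leaked token would confer."""
    return [desc for scope, desc in HIGH_RISK.items() if scope in scopes]

leaked = ["https://www.googleapis.com/auth/gmail.readonly",
          "https://www.googleapis.com/auth/drive"]
print(assess(leaked))  # prints: ['all Drive contents']
```

Narrowly scoped tokens (e.g. `gmail.readonly`) still leak data, but they cap how far the lateral movement can go.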
This looks interesting! I've been building a similar tool that uses TreeSitter to follow changes to AST contents across git commits, with the addition of tying the node state to items in another codebase. In short, if something changes upstream, the corresponding downstream functionality can be flagged for review.
The ultimate goal is to simplify the building and maintenance of a port of an actively-maintained codebase or specification by avoiding the need to know how every last upstream change corresponds to the downstream.
Just from an initial peek at the repo, I might have to take a look at how the author is processing their TreeSitter grammars -- writing the queries by hand is a bit of a slow process. I'm sure there are other good ideas in there too, and Diffsitter looks like it'd be perfect for displaying the actual semantic changes.
I'm guessing it'd look something like this on a 1-dimensional number line:
--- > | > >> . << < | < ---
The dot in the middle would be the singularity, the pipes the event horizon, and the contents would be increasingly warped spacetime that may or may not exist, depending on your interpretation of things.
I think it's an interesting thought experiment. What would happen if the stock market were quantized to a blind one-trade-per-minute granularity?
I suspect this would put everyone on more even footing, with less focus on beating causality and light lag, placing more focus on using the acquired information to make longer-term decisions. This would open things up to anyone with a computer and a disposable income, though it would disappoint anyone in the high-frequency trading field.
> What would happen if the stock market were quantized to a blind one-trade-per-minute granularity?
Like one share of stock trades each minute in each name? Or one trade randomly executes?
If the former, you stop trading the stock and start trading something pointing at it. If the latter, the rich get to trade.
> less focus on beating causality and light lag
You’d have to ban cancelling orders, otherwise you bid and offer and then cancel at the last minute. Either way, you’d be constantly calculating the “true” price while the market lags and settling economic transactions on that basis. (My guess is the street would settle on a convention for the interauction model price.)
If you’re upset about stock markets looking like casinos, the problem isn’t the fast trading. It’s the transparency. Just don’t report trades until the end of the day.
If you aesthetically don’t like HFT, that’s a tougher problem as the price of the stock points at something tied to reality, and reality runs real time.
Speed would have the same utility as it does in the opening cross, the most algorithmically trafficked moment of trading after the closing cross. The last order can incorporate more information than an earlier one. Because the book is assembled transparently, an order submitted close to the deadline can “see” other orders in a way they couldn't “see” it.
You would change the rules, but I think the result would largely remain the same. As a market participant with the fastest access to data from other markets, news, and similar sources, as well as low order entry latency, you would still be able to profit from information asymmetry.
Imagine that a company announces the approval of its new vaccine a few milliseconds before the periodic trade occurs. As an HFT firm, you have the technology to enter, cancel, or modify your orders before the periodic auction takes place, while less sophisticated players remain oblivious to what just happened. The same applies to price movements on venues trading the same instrument, its derivatives, or even correlated assets in different parts of the world.
On the other hand, you risk increasing price volatility (especially in cases where there is an imbalance between buyers and sellers during the periodic auction) and making markets less liquid.
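For concreteness, the kind of periodic blind batch auction being debated here could clear like this minimal sketch: collect orders during the interval, then pick the single uniform price that maximizes executed volume (the order book here is made up):

```python
def clearing_price(bids, asks):
    """bids/asks are (price, qty) lists; return the uniform price that
    maximizes executed volume, along with the volume it clears."""
    best_price, best_volume = None, 0
    for p in sorted({p for p, _ in bids + asks}):
        demand = sum(q for bid, q in bids if bid >= p)  # buyers willing at p
        supply = sum(q for ask, q in asks if ask <= p)  # sellers willing at p
        if min(demand, supply) > best_volume:
            best_price, best_volume = p, min(demand, supply)
    return best_price, best_volume

bids = [(101, 10), (100, 20), (99, 30)]   # made-up order book
asks = [(98, 15), (100, 25), (102, 10)]
print(clearing_price(bids, asks))  # prints: (100, 30)
```

Everyone in the batch trades at one price, which is why the debate above centers on order cancellation and book transparency rather than on the matching itself.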
> In the case of a Learning event, you keep your job, and take the time to make the environment more resilient to this kind of issue.

> In the case of a Limiting event, you lose your job, and get hired somewhere else for significantly better pay, and make the new environment more resilient to this kind of issue.
Hopefully the Wikimedia Foundation's case is the former.