I had chronic pain in various parts of my feet for years from fairly tame activities (biking 20mi/day, hiking 10-20mi Saturday and Sunday, etc). I'd been fairly conscientious about "good" shoes that fit well, and it didn't make a difference. My in-laws had me go to a running shop, and the founder studied my gait for a bit and picked out shoes which would help. A month or two later, all the pain finally disappeared, and I haven't had issues in years.
That's just an n=1 anecdote, but years of pain followed by years of non-pain with a single, obvious intervention in between seems like a reasonably strong signal.
Assuming I'm not reading too much into my experience, if you're feeling fine I think your strategy probably works, and my only concern might be long-term damage you're not recognizing immediately. Other people will be more knowledgeable as to how you'd test that, but if you're comfortable and not injuring yourself then I don't think you're missing out on anything.
That sounds a lot like my experience as an Apple Developer too, with an added bonus (unclear from your description whether you hit this as well): they took my money before the verification process was finished, wouldn't refund it once their AI failed to connect my face to my ID, and wouldn't let me talk to a real person. (The first dozen times were on them, but after that it was maybe my fault for including a middle finger in the photographs.)
Going through hell with Apple Developer too. I didn't have to do much in terms of verification (probably because I created an account as an individual) but app submission is another story:
- first time, I got rejected for mentioning the name of a third party in my app description. The description said: DISCLAIMER: not affiliated with xxx
- after fixing the description, I got rejected for using my app's name(?!); multiple back-and-forths with the reviewer got me nowhere, they just copy-pasted the same response without addressing my messages at all
- filed the App Store Review Board appeal; it's been 5 days and I've got no response.
At this point I'm seriously considering rewriting the app for macOS and distributing it myself. I can't imagine going through all of this with every app update; it's beyond ridiculous.
Lieutenant Appleby rejected my submission almost immediately. The notice informed me that I had committed the grave offense of impersonating a third party in the description.
"I didn't impersonate a third party," I explained in my message to Lieutenant Appleby. "I only wrote a disclaimer stating: Not affiliated with ACME."
"Exactly," Lieutenant Appleby replied. "By stating you have nothing to do with ACME, you have involved ACME. Therefore, you are unlawfully impersonating an unaffiliated party."
"But I only mentioned them to prove I wasn't affiliated with them!"
"Which is a violation," Lieutenant Appleby pointed out.
It was a Catch-22. The Guidelines stated that to prove you were not affiliated with a third party, you had to write a disclaimer. But to write the disclaimer, you had to type the third party’s name, which was a strict violation of the rule against mentioning third parties you were not affiliated with.
I deleted the disclaimer, thereby making myself safely affiliated with nobody by refusing to acknowledge anyone. I resubmitted the app.
Lieutenant Appleby rejected it again.
"What is it this time?" I asked.
"You are using your app's name," Lieutenant Appleby replied.
"Of course I am using my app's name," I replied. "It is the name of my app."
"You cannot use that name. It is trademark infringement."
"Infringing on whose trademark?"
"The app's."
"But I am the app! It is my app!"
"Which is exactly why you cannot use it," Lieutenant Appleby wrote patiently. "If you use the app's name, you are impersonating the app. And impersonation is strictly forbidden by the Guidelines. An app cannot go around pretending to be itself!"
At this point, my phone is PDA-level, mostly useful for quick checks; I use a laptop for computing. I know that as a tech nerd I'm far out on the bell curve, but I can't really be bothered with those shenanigans unless someone's paying me for it.
Develop only Web applications that are mobile friendly. Notice I said mobile friendly, not PWA.
However, thanks to the many of us who favour Chrome the way others once favoured IE, shipping it alongside their "native" applications, the Web is nowadays the ChromeOS Application Platform, so we are only a couple of years away from Google owning that as well.
You might not have a way to actually file a complaint against them but quite often, their legal department will just have a quick look at your case and just give you what you want without bothering to tell you anything. Worth a shot.
I am doing leatherworking as well as woodworking. No idea if it is possible to actually make money with this¹, but damned if I'm not giving it a go just to have skills in an area where AI is not a threat for the coming decade. At the very least these crafts allow me to make things which do not exist and cannot be purchased off the shelf.
1: I mean, it is, certainly. I'm just not sure if I can make money by making leather gear.
If you are in the EU you could try complaining to your local DPA. That certainly sounds like an "automated decision which produces legal effects concerning him or her or similarly significantly affects him or her", which Article 22 of the GDPR prohibits. Or you could consider suing them directly, at least for the refund.
Outside the EU, maybe try getting a law like the GDPR passed to actually get some rights back.
If you perform nearly any work at all in a given week you're entitled to your salary, and they can't fire you. They might be able to take away the $15/day stipend from your pay, and there are obvious additional negatives (6 months with limited context and practice of your craft will reduce your performance when you get back too), but that 2-week cap is a lawsuit waiting to happen unless they also forbid you from doing any work while on jury duty.
As I say, grand jury duty is often not every day, you can always take your PTO, and there are always nights and weekends. A company can always keep paying your base salary, but, as you say, there could be longer-term consequences.
And the case upthread is obviously a retail manager being stupid but I also assume there is no obligation to pay hourly employees for hours they don’t work or for tips they didn’t collect.
You can, but if salaried you usually shouldn't, setting aside any particularly malicious employers and the social contracts at the outskirts of the law.
> No obligation to pay hourly employees, tips, etc
Yeah, if you're not salaried you're screwed. PTO might cover a few days, but if you have a month-long trial and need money for rent then my understanding of the law is that serving as a juror will make you homeless unless the courtroom is willing to extend some compassion for your hardship.
Of course there is. Raw machine code is the gold standard, and everything else is an attempt to achieve _something_ at the cost of performance, C included, and that's even when considering whole-program optimization and ignoring the overhead introduced by libraries. Other languages with better semantics frequently outperform C (slightly) because the compiler is able to assume more things about the data and instructions being manipulated, generating tighter optimizations.
I was talking about building code, not run-time. But regarding run-time, no other language outperforms C in practice. Your argument about "better semantics" has a grain of truth to it, but it does not apply to any existing language I know of; at least not to Rust, which in practice is still, for the most part, slower than C.
On their own merits, people choose SMS-based 2FA, "2FA" which lets you into an account without a password, perf-critical CLI tools written in Python, externalizing the cost of hacks to random people who aren't even your own customers, eating an extra 100 calories per day, and a whole host of other problematic behaviors.
Maybe Ada's bad, but programmer preference isn't a strong enough argument. It's just as likely that newer software is buggier and more unsafe or that this otherwise isn't an apples-to-apples comparison.
I made no judgement about whether Ada is subjectively "bad" or not. I used it for a single side project many years ago, and didn't like it.
But my anecdotal experience aside, it is plain to see that developers had the opportunity to continue with Ada and largely did not once they were no longer required to use it.
So, it is exceedingly unlikely that some conspiracy against C++, motivated by mustache-twirling Ada gurus, is afoot. And even if that were true, knocking C++ down several pegs will not make people go back to Ada.
C#, Rust, and Go all exist and are all immensely more popular than Ada. If there were to be a sudden exodus of C++ developers, these languages would likely be the main beneficiaries.
My original point, that C++ isn't what's standing in the way of Ada being popular, still stands.
It's probably just a higher rate of autonomous vehicles needing stop signs and buses identified at that moment, and cognitive bias causes you to only remember when that happens when you recently performed an update. /s
>It's probably just a higher rate of autonomous vehicles needing stop signs and buses identified at that moment
I can't tell whether you're serious, but in case you are: this theory immediately falls apart when you realize Waymo operates at night, yet there aren't any night photos.
My assumption is that CF has something like an SVM that it feeds a bunch of datapoints into for bot detection. Go over some threshold and you end up in CAPTCHA jail.
I'm certain the User-Agent is part of it: a very reliable way to trigger the CF checks is using this plugin with the wrong browser selected [1].
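For what it's worth, here's a toy sketch of the kind of scoring I'm picturing, written as a linear decision function in the spirit of an SVM. Every feature name, weight, and threshold below is invented; nobody outside CF knows the real signals:

```python
# Hypothetical bot-scoring sketch: a linear decision function
# score = w . x + b, thresholded into "CAPTCHA jail" or not.
# Features, weights, bias, and threshold are all made up.
FEATURES = ["ua_mismatch", "tls_fingerprint_rare", "no_cookies", "requests_per_min"]
WEIGHTS = [2.5, 1.8, 0.9, 0.05]
BIAS = -3.0

def bot_score(x: list[float]) -> float:
    """Weighted sum of feature values plus bias."""
    return sum(w * xi for w, xi in zip(WEIGHTS, x)) + BIAS

def captcha_jail(x: list[float], threshold: float = 0.0) -> bool:
    """Over the threshold -> serve a CAPTCHA."""
    return bot_score(x) > threshold

# A UA/TLS-fingerprint mismatch alone is enough to tip the score positive:
print(captcha_jail([1.0, 1.0, 0.0, 30.0]))  # 2.5 + 1.8 + 1.5 - 3.0 = 2.8 -> True
print(captcha_jail([0.0, 0.0, 0.0, 10.0]))  # 0.5 - 3.0 = -2.5 -> False
```

That would explain why one strong signal (like a mismatched User-Agent) reliably trips it while ordinary traffic stays under the line.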
I mostly agree, but it's more appropriate to weigh contributions against an FTE's output rather than their input. If I have a $10m/yr feature I'm fleshing out now and a few more lined up afterward, it's often not worth the time to properly handle any minor $300k/yr boondoggle. It's only worth comparing to an FTE's fully loaded cost when you're actually able to hire to fix it, and that's trickier since it takes time away from the core team producing those actually valuable features and tends to result in slower progress from large-team overhead even after onboarding. Plus, even if you could hire to fix it, wouldn't you want them to work on those more valuable features first?
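To put made-up numbers on the output-vs-input framing: even when fixing the boondoggle clears an FTE's fully loaded cost, it can lose badly against that FTE's actual output.

```python
# Illustrative numbers only: compare a $300k/yr fix against both an FTE's
# cost (input) and the FTE's share of feature output.
fte_cost = 200_000           # hypothetical fully loaded cost per year
boondoggle_value = 300_000   # annual savings if the minor issue is fixed
feature_value = 10_000_000   # annual value of the feature being shipped
team_size = 5                # engineers on that feature

value_per_fte = feature_value / team_size          # each FTE's share of output
net_vs_cost = boondoggle_value - fte_cost          # +100,000: looks worth doing
net_vs_output = boondoggle_value - value_per_fte   # -1,700,000: clearly not

print(net_vs_cost, net_vs_output)
```

Same fix, opposite conclusions, purely from whether you benchmark against input cost or output value.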
They were running a big kubernetes infrastructure to handle all of these RPC calls.
That takes a lot of engineer hours to set up and maintain. This architecture didn't just happen, it took a lot of FTE hours to get it working and keep it that way.
Kube is trivial to run. You hit a few switches on GKE/EKS and then a few simple configs. It doesn't take very many engineer hours to run. Infrastructure these days is trivial to operate. As an example, I run a datacenter cluster myself for a micro-SaaS in the process of SOC2 Type 2 compliance. The infra itself is pretty reliable. I had to run some power-kill sims before I traveled and it came back A+. With GKE/EKS this is even easier.
Over the years of running these I think the key is to keep the cluster config manual and then you just deploy your YAMLs from a repo with hydration of secrets or whatever.
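A minimal sketch of what I mean by hydrating secrets, assuming placeholder-style templates with values pulled from the environment at deploy time (real setups often use sops, sealed-secrets, or external-secrets instead; the manifest and variable names here are made up):

```python
import os
from string import Template

# Hypothetical manifest kept in the repo; ${DB_PASSWORD} is a placeholder
# filled in at deploy time rather than committed.
MANIFEST = Template("""\
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
stringData:
  db-password: ${DB_PASSWORD}
""")

def hydrate(template: Template, env: dict) -> str:
    """Substitute placeholders; raises KeyError if a secret is missing."""
    return template.substitute(env)

if __name__ == "__main__":
    # e.g. pipe this into `kubectl apply -f -` from CI
    print(hydrate(MANIFEST, {"DB_PASSWORD": os.environ.get("DB_PASSWORD", "example")}))
```

The cluster config stays manual; only the YAMLs flow from the repo through a step like this.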
The cost is not just tokens; you need an actual human contributor looking into the issue: prompting, checking output, validating, deploying, and so on. That makes the actual AI ROI difficult to compute. If $300K didn't matter without AI, it probably still doesn't matter with AI.
That reminds me of one of the easiest big wins I've had in my career. SystemD was causing issues, so I slapped in Gentoo with the real-time kernel patch. Peak latency (practically speaking, the only core metric we cared about -- some control loop doing a bunch of expensive math and interacting with real hardware) went down 5000x.
That specific advice isn't terribly transferable (you might choose to hack up SystemD or some other components instead, maybe even the problem definition itself), but the general idea of measuring and tuning the system running your code is solid.
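If you want a zeroth-order version of "measure first", even a crude sleep-overshoot probe like this will show how far your system is from the latency you think you have (a real measurement would use something like cyclictest or hwlatdetect; the interval and iteration counts here are arbitrary):

```python
import time

def measure_jitter(interval_s: float = 0.001, iters: int = 200) -> float:
    """Repeatedly sleep a fixed interval and record the worst overshoot
    past the deadline, in seconds. Results vary with kernel config,
    CPU governor, and system load -- which is exactly the point."""
    worst_overshoot = 0.0
    for _ in range(iters):
        start = time.perf_counter()
        time.sleep(interval_s)
        overshoot = (time.perf_counter() - start) - interval_s
        worst_overshoot = max(worst_overshoot, overshoot)
    return worst_overshoot

print(f"worst overshoot: {measure_jitter() * 1e6:.0f} us")
```

Run it on two kernel/distro setups and the difference in the worst case is usually far more dramatic than the difference in the average.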
What do you think is causing the issue? We are having the same kind of problem: core isolation, nohz, core pinning, but I am still getting interrupted by NMIs.
Details depend, but the root cause is basically the same every time: your hardware is designed to do something other than what you want it to do. It might be close enough that you want to give it a shot anyway (often works, often doesn't), but solutions can be outside of the realm of what's suitable for a "prod-ready" service.
If you're experiencing NMIs, the solution is simple if you don't care about the consequences: find them and remove them (ideally starting by finding what's generating them and verifying you don't need it). Disable the NMI watchdog, disable the PMU, disable PCIe Error Reporting (probably check dmesg and friends first to ensure your hardware is behaving correctly, and fix that if not), disable anything related to NMIs at the BIOS/UEFI/IPMI/BMC layers, register a kernel module to swallow any you missed in your crusade, and patch the do_nmi() implementation with something sane for your use case in your custom kernel (there be dragons here; those NMIs obviously exist for a reason). It's probably easier to start from the ground up, adding a minimal set of software for your system to run, than to trim it back down, but either option is fine.
Are you experiencing NMIs though? You might want to take a peek at hwlatdetect and check for SMIs or other driver/firmware issues, fixing those as you find them.
It's probably also worth double-checking that you don't have any hard or soft IRQs being scheduled on your "isolated" core, that no RCU housekeeping is happening, etc. Make sure you pre-fault all the memory your software uses, no other core maps memory or changes page tables, power scaling is disabled (at least the deep C-states), you're not running workloads prone to thermal issues (1000W+ in a single chip is a lot of power, and it doesn't take much full-throttle AVX512 to heat it up), you don't have automatic updates of anything (especially not microcode or timekeeping), etc.
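One cheap sanity check for the "no IRQs on isolated cores" point: diff the per-core counters in /proc/interrupts before and after a run. Here's a toy parser operating on a made-up sample instead of the live file (on a real box you'd feed it `open("/proc/interrupts").read()` twice and compare):

```python
# Made-up /proc/interrupts-style sample; the real file has the same shape:
# a CPU header row, then one row per interrupt source with per-core counts.
SAMPLE = """\
           CPU0       CPU1       CPU2       CPU3
  0:         44          0          0          0   IO-APIC    2-edge      timer
 24:     961245          0        317          0   PCI-MSI    524288-edge  eth0
NMI:         12          4          9          3   Non-maskable interrupts
"""

def irqs_on_cores(text: str, cores: set[int]) -> list[tuple[str, int, int]]:
    """Return (irq_name, core, count) for nonzero counts on the given cores."""
    lines = text.splitlines()
    ncpus = len(lines[0].split())          # header row names the CPUs
    hits = []
    for line in lines[1:]:
        parts = line.split()
        name, counts = parts[0].rstrip(":"), parts[1:1 + ncpus]
        for core, count in enumerate(int(c) for c in counts):
            if core in cores and count > 0:
                hits.append((name, core, count))
    return hits

# Cores 2 and 3 are supposedly isolated, yet eth0 and NMIs landed there:
print(irqs_on_cores(SAMPLE, {2, 3}))
```

If anything shows up for your isolated cores, chase the named source (IRQ affinity masks, the device itself, or the NMI hunt above) before blaming the scheduler.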
Also, generally speaking, your hardware can't actually multiplex most workloads without side effects. Abstractions letting you pretend otherwise are making compromises somewhere. Are devices you don't care about creating interrupts? That's a problem. Are programs you don't care about causing cache flushes? That's a problem. And so on. Strip the system back down to the bare minimum necessary to do whatever it is you want to do.
As to what SystemD is doing in particular? I dunno, probably something with timer updates, microcode updates, configuring thermals and power management some way I don't like, etc. I took the easy route and just installed something sufficiently minimalish and washed my hands of it. We went from major problems to zero problems instantly and never had to worry about DMA latency again.
In the academic circles I frequent, it's not true. Any one journal might reject the good stuff, but it doesn't take more than a few submissions to find a journal that recognizes it, and the cost of producing the research is so high that, with the current career incentives, it'd be ridiculous not to keep submitting. That does mean journal "quality" matters less than you might think, but I don't think anyone's surprised by that notion either.
Errors the other direction are more common. I'll state that as an easily verified fact, but people like fun stories, so here's an example:
One professor I worked with had me write up a bunch of case studies of some math technique, tried to convince me it was worth a paper, paid somebody else to typeset my work, and told me to compensate him if I wanted my name on the "paper." I didn't, really: it was beneath any real mathematician. But there now exists some journal carrying a bastardized, plagiarized version of my work with some other, unrelated author tacked on, available for the world to see [0], and it's worth calling out that nothing about the "paper" is journal-worthy. It's far too easy to find a home for academic slop, and I saw that in every field I spent any serious amount of time in.