No, according to everything I've read before, the parent post was correct and you're not. This article clearly says "art generated by artificial intelligence without human input cannot be copyrighted under U.S. law":
Reading the linked Court of Appeals document in that post, the question is posed in the opening: "Can a non-human machine be an author under the Copyright Act of 1976?", which it then answers as no. Nowhere else that I can see does it say that the output of the tool itself is not copyrightable. I would not trust the Reuters interpretation without a direct reference to a court document.
Could the prompt used to generate the art be considered human input, or must a human make some direct contribution to the art for it to be copyrightable?
Even CSS has clamp(), yet I have to periodically remind myself that Math.clamp is missing from JavaScript's stdlib. Math.min(upper, Math.max(value, lower)) is fine for a one-off, but when you need multiple clamps in the same expression the extra parentheses become troublesome, and a built-in would be much easier to read.
I do find it strange that Python's min and max are builtins that don't require an import, but clamp isn't provided at all. When I look at https://stackoverflow.com/questions/4092528 I feel the lack of an "obvious way to do it", since the simple approach has variant spellings and there are always people trying to be clever about it.
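For what it's worth, the usual one-liner wrapped in a helper (my own naming; Python has no built-in clamp) looks like this:

```python
def clamp(value, lower, upper):
    """Constrain value to the inclusive range [lower, upper]."""
    return max(lower, min(value, upper))

clamp(15, 0, 10)   # 10
clamp(-3, 0, 10)   # 0
clamp(5, 0, 10)    # 5
```

Defining it once at least gives you the "one obvious way" locally, instead of re-deriving the min/max nesting at every call site.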
No, ICE did not kill those people. I looked through the latest six this year. Two were suicides (one was a man facing state charges for several crimes, including child molestation), one was someone with diabetes who refused to take insulin, and the others seem to have had other health issues. They received medical care many times.
I think it is misleading to conflate murder with people dying of health issues in detention after medical care.
I think it's also misleading to call it people dying of health issues. After years of knowing, under multiple administrations, that even the pre-Trump ICE detention regime killed detainees through delayed medical assessment and denied care [0], the weight of the evidence currently points to ICE being malicious, not ignorant: ICE is knowingly detaining medically frail individuals, without care corresponding to their needs, knowing that a random subset will die under circumstances that ICE could have chosen to change, but didn't.
Therefore, I think what is happening does rise to extrajudicial killing: killing that ICE chose not to prevent but to maintain, and inevitable killing without any corresponding sentence.
Forgive me for not taking ICE at face value. I looked through the next four accounts, assuming that by that point there would be sufficient independent reporting to either complement or contradict ICE's accounts.
The next four individuals died preventable deaths from care ignored (e.g. Nhon Nguyen, who was detained while living with dementia) or denied (e.g. Maksym Chernyak, who lay unconscious for hours after collapsing before detention guards provided medical attention, too late).
- Marie Ange Blaise's death (#7) was blamed by ICE on blood pressure medication noncompliance. The narrative stitched together from Broward County medical examiner reporting, along with detainee testimony, instead argues that she fainted after taking blood pressure medications, and it took at least 8 minutes for medical attention to arrive (after a guard walked away) [1].
- Nhon Nguyen (#8) was, according to his family, detained while living with advanced dementia, and, according to his death report, bounced back and forth between hospitals and his detention processing center before dying of avoidable pneumonia [2].
- Brayan Garzón-Rayo (#9) died by suicide after repeatedly being denied a mental health evaluation - once due to short-staffing, next due to contracting COVID-19. [3]
- Maksym Chernyak (#10) fainted - possibly due to overdose - but was denied care for hours despite attempts by others detained with him to draw attention; his death was attributed to a stroke. [4]
But sunscreen doesn't stop all damage to skin. I spent weekdays working inside on a computer, then sometimes spent summer weekends outside in the sun. I get sunburned easily, sometimes in as little as 10 minutes of direct sun. You wouldn't deny that a light sunburn is skin damage, would you? SPF 50 sunscreen, blocking 98% of the sun, extends those 10 minutes by 50x to 8.3 hours, but that is still not great: I can exceed it in two days. And I don't see why having light skin and wanting to spend the weekend outside would be unusual. Blocking 99% of UV, doubling the protected time over 98%, would help quite a bit.
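The arithmetic here assumes the naive linear SPF model (transmitted UV fraction ≈ 1/SPF, so time-to-burn scales by SPF). A quick sketch of that model only, not a claim about how sunscreen actually holds up over a day:

```python
def naive_spf(baseline_min, spf):
    """Naive model: SPF multiplies time-to-burn; the fraction of UV
    blocked is 1 - 1/spf (so SPF 50 blocks 98%, SPF 100 blocks 99%)."""
    return baseline_min * spf, 1 - 1 / spf

t50, blocked50 = naive_spf(10, 50)     # 500 min (~8.3 h), 0.98 blocked
t100, blocked100 = naive_spf(10, 100)  # 1000 min, 0.99 blocked
```

Under this model, going from 98% to 99% blocked doubles the protected time, which is the point being made.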
That isn't how sunscreen works. If you put SPF 50 on and spend 8 hours in the sun you're coming back a lobster.
Say you burn in 5 minutes. SPF 50 means you burn in 250 minutes. But it's more like 100% protection for 245 minutes and then 0% for the last 5. It's not a steady cooking at 2.5% intensity.
Got any source for that? Everything I can find (and the intuitive explanation of it) points to the opposite: SPF measures how much of the UV it blocks, not how long the sunscreen stays on your skin (which varies wildly with what you're doing).
That's my entire point. The way they generate SPF measures how much of the sun it blocks in the lab shortly after it's applied. That one blocks 97.5% and another 98% is meaningless for the real world.
More recent processors can do more with the same power than older processors, but for the most part I don't think that matters. Most people don't keep their processor at 100% usage for long anyway.
As I said in a sister comment here, you can't compare CPUs by TDP. No one runs their CPU flat-out all the time on a PC. Idle power is the important metric.
"it certainly will eventually" - I think even that is underselling how unlikely it is for Git's 256-bit hashes to collide. I calculated (taking into account the birthday paradox) that even if all 8 billion people on Earth created a Git commit every second, they would have to do that non-stop for 1,588,059,911 trillion years before there's a 50% chance that any two commits have the same hash. Our sun is predicted to last only 0.005 trillion years more.
UUIDs are more likely to collide but it's still basically impossible. You'd have to generate √(2^122)×1.1774 ≈ 2,714,899,559,048,203,259 UUIDs to have a 50% chance of a collision. Just storing that many UUIDs would take 39,506,988 TB of space. If you aren't worried about your database fitting on millions of drives, don't worry about UUID collisions.
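Both estimates fall out of the birthday-bound approximation n ≈ 1.1774·√N for a 50% collision chance among N equally likely values; a quick sanity check:

```python
import math

def n_for_half_collision(bits):
    """Values to draw from a space of 2**bits for ~50% collision odds
    (birthday bound: n ≈ sqrt(2 ln 2) * 2**(bits/2) ≈ 1.1774 * 2**(bits/2))."""
    return math.sqrt(2 * math.log(2)) * 2.0 ** (bits / 2)

uuids = n_for_half_collision(122)      # ~2.71e18 random (v4) UUIDs
commits = n_for_half_collision(256)    # ~4.0e38 SHA-256 Git commits

# 8 billion people committing once per second:
seconds = commits / 8e9
years = seconds / (365.25 * 24 * 3600)  # ~1.6e21 years, i.e. ~1.6e9 trillion
```

(The 39,506,988 TB figure above is binary terabytes: 2.71e18 UUIDs × 16 bytes / 2^40.)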
The court decided the opposite--that APIs are copyrightable. However, the Supreme Court ruled that Google's usage was fair use, so I would agree that Google mostly won. The Supreme Court didn't revisit whether APIs are copyrightable (the lower court ruled they are) because Google would win regardless on fair-use grounds.
So I'm not sure it matters much whether APIs are copyrightable when what Google did was ruled fair use. I'd prefer if the courts ruled APIs weren't copyrightable, but I think it was still a good result because doing what Google did probably covers about any use case anyway.
I wonder if making a device that uses Thread could be considered fair use in the same way, since implementing Thread is required for interoperability with many devices.
I'm interested in using LDPC (or Turbo Codes) in software for error correction, but most of the resources I've found only cover soft-decision LDPC. When I've found LDPC papers, it's hard for me to know how efficient the algorithms are and whether it's worth spending time on them. Reed-Solomon has more learning resources that are often more approachable (plus open source libraries). Do you have more information on how to implement LDPC decoding using XOR-based operations?
Alas, no. I'm mostly spitballing here, since I know that XOR-based codes are obviously much faster than the Reed-Solomon erasure code / matrix-solving methodology.
There are a lot of names thrown around for practical LDPC erasure codes: Raptor codes, Tornado codes, and the like. Hopefully those names give you a good starting point?
EDIT: I also remember reading a paper on a LDPC Fountain Code (ex: keep sending data + LDPC checkbits until the other side got enough to reconstruct the data), as a kind of "might as well keep sending data while waiting for the ACK", kind of thing, which should cut down on latency.
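Not an LDPC implementation, but the XOR-only "peeling" decoding that Tornado/Raptor-style erasure codes rely on can be sketched in toy form (my own function names, no claims about realistic parameters):

```python
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def peel(received, checks):
    """received: data symbols, with None marking erasures (fixed in place).
    checks: list of (indices, xor_of_those_symbols) parity constraints.
    Repeatedly find a check with exactly one missing neighbor and
    recover that symbol by XORing the rest; no matrix solving involved."""
    progress = True
    while progress:
        progress = False
        for idxs, parity in checks:
            missing = [i for i in idxs if received[i] is None]
            if len(missing) == 1:
                acc = parity
                for i in idxs:
                    if received[i] is not None:
                        acc = xor(acc, received[i])
                received[missing[0]] = acc
                progress = True
    return received

data = [b"ab", b"cd", b"ef", b"gh"]
checks = [([0, 1], xor(data[0], data[1])),
          ([1, 2], xor(data[1], data[2])),
          ([2, 3], xor(data[2], data[3]))]
received = [data[0], None, None, data[3]]
assert peel(received, checks) == data   # both erasures peeled back
```

The hard part in the real codes is designing the check graph so that peeling succeeds with high probability at low overhead; that graph design is where the papers get dense.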
--------
I'm personally on the "Finished reading my book on Reed-Solomon codes. Figuring out what to study next" phase. There's a lot of codes out there, and LDPC is a huge class...
Then again, the project at work that benefited from these error (erm... erasure) correcting codes is complete, and my Reed-Solomon implementation was good enough and doesn't really need to be touched anymore. So it's not like I have a real reason to study this stuff anymore. Just a little bit of extra data that allowed the protocol to cut off some latency and reduce the number of resends on a very noisy channel we had. The good ol' "MVP into shelved code" situation, lol. Enough code to prove it works, a nice demo that impressed the higher-ups, and then no one ended up caring for the idea.
If I were to productize the concept, I'd research these more modern, faster codes (LDPC, Raptor, Tornado, etc.) and implement a state-of-the-art erasure correction solution, ya know? But at this point, the project's just dead.
But honestly, the blog post's situation (cutting down on latency with forward error correction) seems like a common problem that's solved again and again in our industry. At the same time, there's so much to learn in the world of Comp. Sci. that sometimes it's important to "be lazy" and "learn it when it's clearly going to be useful" (and not learn it to hypothetically improve a dead project, lol).
I don't think the quantity-of-gas numbers are accurate. Since kWh and BTU are both units of energy, finding the cubic feet of gas is unnecessary (assuming the efficiency numbers are correct).
1 kWh = 3.6 megajoules and 1 BTU = 1055 joules
The 6.6 kWh for the heat pump is 23.76 MJ, which is 22,521 BTU of energy. Assuming the power plant and distribution are 60% efficient, it would take 37,535 BTU of gas to produce (22,521 / 0.60).
Instead, using that 37,535 BTU of gas in an 80% efficient furnace would only produce 30,028 BTU of heat, which is worse than the 50,000 BTU from the heat pump.
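The conversion chain, spelled out (the 60% plant+distribution and 80% furnace figures are the assumptions from above):

```python
KWH_TO_BTU = 3.6e6 / 1055      # 1 kWh = 3.6 MJ, 1 BTU = 1055 J

heat_pump_kwh = 6.6            # electricity the heat pump consumes
plant_eff = 0.60               # gas plant + distribution (assumed)
furnace_eff = 0.80             # gas furnace (assumed)

elec_btu = heat_pump_kwh * KWH_TO_BTU     # ~22,521 BTU of electricity
gas_btu = elec_btu / plant_eff            # ~37,535 BTU of gas burned
furnace_heat = gas_btu * furnace_eff      # ~30,028 BTU from a furnace instead
```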
I'm pretty sure even a poor heat pump will be more efficient than heating directly with gas. (Of course, they have drawbacks, like leaking refrigerant that causes more of a greenhouse effect than CO2.)
> "(Of course, they have drawbacks, like leaking refrigerant that causes more of a greenhouse effect than CO2.)"
My heat pump contains 2.1 kg of R32 refrigerant. R32 has a GWP of 675, so that 2.1 kg is the equivalent of 1417 kgs of CO2. (older refrigerants were much worse!)
Heat pumps should never leak refrigerant during their lifetime, and installers will recover and recycle the refrigerant when servicing or decommissioning systems. But accidents happen, so let's pessimistically assume that 50% of installed systems will eventually leak. In the real world it's hopefully far less than that, but it would mean that, on average, 708 kg CO2e of refrigerant is emitted per system over its lifetime.
On the other hand, heating a typical US home with natural gas emits 2900 kgs of CO2 per year.
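Putting those numbers side by side (the 50% leak rate is the pessimistic assumption above):

```python
r32_kg = 2.1              # refrigerant charge in the system
gwp_r32 = 675             # global warming potential of R32

co2e_full_leak = r32_kg * gwp_r32             # 1417.5 kg CO2e if it all escapes
leak_rate = 0.5                               # pessimistic share of systems that leak
co2e_avg_system = co2e_full_leak * leak_rate  # ~709 kg CO2e per system, lifetime

gas_heating_kg_per_year = 2900                # typical US home on natural gas
# Even a total worst-case leak equals under 3 months of gas-heating emissions.
```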
I think it's safe to say that the climate impact of refrigerant leaks in modern heat pump systems is minuscule compared to that of the CO2 emitted from natural gas heating.
I don't know where the EIA gets those numbers, but that was the basis of my calculation. Maybe I shouldn't have multiplied by the plant efficiency, but rather only subtracted distribution losses.
They are averaging the efficiency across the current fleet of gas turbines, after subtracting the useful heat output, and coming up with 44.4%.
However, it's a misleading number in multiple ways, because the fleet is a mix of low- and high-efficiency turbines. Grid operators use that mix as a cost optimization: a far cheaper and far less efficient turbine that's only run 1% of the time is worth it. The average number of kWh actually generated per cf of gas is therefore heavily weighted toward the high-efficiency turbines.
https://www.reuters.com/world/us/us-appeals-court-rejects-co...