Hacker News | new | past | comments | ask | show | jobs | submit | minus7's comments | login

The `eval` alone should be enough of a red flag

Sadly JS has ways around it that are far from obvious, since you can chain effects over multiple files that lead to running code.

The following example (you can paste it into node to verify) could be spread out over multiple source files to make it even harder to follow:

  // prelude 1, obfuscate the constructor property name to avoid raising simple analyser alarms
  const prefix = "construction".substring(0,7);
  const suffix = "tractor".substring(3);
  const obfuscatedConstructorName = prefix + suffix; // innocent looking, but we have the indexing name.

  // prelude 2, get the Function class by indexing a function object with our constructor property name (that does not show up in source-code)
  const existingFunction = ()=>"nothing here";
  const InnocentLookingClass = existingFunction[obfuscatedConstructorName];

  // payload decoding elsewhere (this is where we decode our nasty source)
  const nastyPayloadDisguisedAsData = "console.log('sourced string that could be malicious')";

  // Unrelated location where payload gets executed
  const hardToMissFun = new InnocentLookingClass(nastyPayloadDisguisedAsData);
  hardToMissFun(); // when this function is run somewhere.. the nasty things happen.
Unless you have a data-tracing verifier or a continuously run sandbox, it's going to be very hard to even come close to determining that arbitrary code is being evaluated in this example. There is not a single trace of eval, or even of the property name constructor, in the source.

And this is just the simple obfuscation case...

I can easily imagine a future where we simply disable eval completely (except perhaps in specific scenarios).

At its core the issue is dynamism itself. I don't think it should be forsaken, but JS does a terrible job of making it explicit. If the language itself required this kind of dynamism to be explicit, it would make the whole attack generally more difficult to pull off.


Yeah, I would have loved to see an example where it was not obvious that there is an exploit. Where it would be possible for a reviewer to actually miss it.

See the comment just above.

I'm not a JS person, but taking the line at face value, shouldn't it do nothing? Which, if I understand correctly, should never be merged. Why would you merge no-ops?

No it’s not.

OWASP disagrees: See https://cheatsheetseries.owasp.org/cheatsheets/Nodejs_Securi..., listing `eval()` first in its small list of examples of "JavaScript functions that are dangerous and should only be used where necessary or unavoidable". I'm unaware of any such uses, myself. I can't think of any scenario where I couldn't get what I wanted by using some combination of `vm`, the `Function` constructor, and a safe wrapper around `JSON.parse()` to do anything I might have considered doing unsafely with `eval()`. Yes, `eval()` is a blatant red flag and definitely should be avoided.

While there are valid use cases for eval, they are so rare that it should be disabled by default and strongly discouraged as a pattern. Only in very rare cases is eval the right choice, and even then it will be fraught with risk.

The parent didn't say "there's no legitimate uses of eval", they said "using eval should make people pay more attention." A red flag is a warning. An alert. Not a signal saying "this is 100% no doubt malicious code."

Yes, it's a red flag. Yes, there are legitimate uses. Yes, you should always interrogate evals more closely. All of these are true.


When is an eval not at least a security "code smell"?

It really is. There are very few proper use-cases for eval.

For a long time the standard way of loading JSON was using eval.

Not that long: browsers implemented JSON.parse() back in 2009, and JSON was only invented in 2001 and took a while to become popular. There was only a short window, more than a decade ago, when eval made sense here.

Eval for JSON also led to other security issues like XSSI.


Problem is, it took until around 2016 for IE6 to be fully dead, so people continued to justify these hacks for a long time. Horrifying times.

And why do we no longer make use of it, but instead implemented separate JSON loading functionality in JavaScript? Can you think of any reasons beyond performance?

I'd be surprised if there is a performance benefit to processing JSON with eval(). Browsers optimize the heck out of JSON.

You are arguing against the opposite of what the comment you answered to said.

Am I? "Can you think of any reasons beyond performance?" implies that the comment author thinks performance would be a valid reason.

Quoting my original message:

> And why do we not anymore make use of it, but instead implemented separate JSON loading functionality in JavaScript?

In other words: I'm asking why the native JSON functionality was created in JavaScript if we already had eval.

> Can you think of any reasons beyond performance?

One of the reasons is that the native JSON parser is faster than eval; give some other reason.


Why did you opt for such a comment when a straightforward response without a belittling tone would have achieved the same?

I actually gave it some thought. I had written the actual reason first, but I realized that the person I was responding to must know this, yet keeps arguing that eval is just fine.

I would say they are arguing in bad faith, so I wanted to enter a dialogue where they are either forced to agree or, more likely, not respond at all.



JetBrains should stop building stupid AI shit and fix their IDEs. 2025 versions are bordering on unusable.


Issues I observed, mostly using GoLand:

- syntax errors displaying persistently even after being fixed (frequently; until restarted; not seen very recently)

- files/file tree not detecting changes to files on disk (frequent; until restarted; not seen very recently)

- cursor teleporting to specific place on the screen when ctrl is pressed (occasionally; until restarted)

- and most recently: it not accepting any mouse/keyboard input (occasionally; until killed)


Have you made a bug report?


Not iterating on AI is almost certainly suicidal.


All code is inherently not concurrency-safe unless it says so. The http.Client docs mention concurrent usage is safe, but not modification.

The closure compiler flag trick looks interesting though, will give this a spin on some projects.


I agree, any direct field modification should be assumed to be not thread-safe. OTOH, I think Go made a mistake by exporting http.DefaultClient, because it is a pointer and using it causes several problems, including thread-safety issues, and there are libraries that use it. It would have been better if it were http.NewDefaultClient(), which creates a new one every time it is called.


I think the original sin of Go is that it neither allows marking fields or entire structs as immutable (like Rust does) nor does it encourage the use of builder pattern in its standard library (like modern Java does).

If, let's say, http.Client were functionally immutable (with all fields being private), and you had to set everything using a mutable (but inert) http.ClientBuilder, these bugs would not have been possible. You could still share a default client (or a non-default client) efficiently, without ever having to worry about anyone touching a mutable field.


> The http.Client docs mention concurrent usage is safe, but not modification.

Subtle linguistic distinctions are not what I want to see in my docs, especially if the context is concurrency.


On the other hand, it should be very obvious to anyone that has experience with concurrency that changing a field on an object the way the author showed can never be safe in a concurrent setting. In any language.


This is not true in the general case. E.g. setting a field to true from potentially multiple threads can be a completely meaningful operation e.g. if you only care about if ANY of the threads have finished execution.

It depends on the platform though (e.g. in Java it is guaranteed that there is no tearing [1]).

[1] In OpenJDK. The JVM spec itself only guarantees it for 32-bit primitives and references, but given that 64-bit CPUs can cheaply/freely write a 64-bit value atomically, that's how it's implemented.


> setting a field to true from potentially multiple threads can be a completely meaningful operation e.g. if you only care about if ANY of the threads have finished execution.

this only works when the language defines a memory model where bools are guaranteed to have atomic reads and writes

so you can't make a claim like "setting a field to true from ... multiple threads ... can be a meaningful operation e.g. if you only care about if ANY of the threads have finished execution"

as that claim only holds when the memory model allows it

which is not true in general, and definitely not true in go

assumptions everywhere!!


> can never be safe in a concurrency setting. In any language.

Then I give an example of a language where it's safe

I don't get your point. The negation of all is a single example where it doesn't apply.


GP didn’t say “setting a ‘bool’ value to true”, it referred to setting a “field”. Interpreted charitably, this would be done in Go via a type that does support atomic updates, which is totally possible.


"setting a field to true" clearly means `x.field = value` and not `x.field.Set(value)`


I saw that bit about concurrent use of http.Client and immediately panicked about all our code in production hammering away concurrently on a couple of client instances... and then saw the example and thought... why would you think you can do that concurrently??


the distinction between "concurrent use" and "concurrent modification" in go is in no way subtle

there is this whole demographic of folks, including the OP author, who seem to believe that they can start writing go programs without reading and understanding the language spec, the memory model, or any core docs, and that if the program compiles and runs that any error is the fault of the language rather than the programmer. this just ain't how it works. you have to understand the thing before you can use the thing. all of the bugs in the code in this blog post are immediately obvious to anyone who has even a basic understanding of the rules of the language. this stuff just isn't interesting.


> Subtle linguistic distinctions are not what I want to see in my docs, especially if the context is concurrency.

Which PL do you use then ? Because even Rust makes "Subtle linguistic distinctions" in a lot of places and also in concurrency.


> Because even Rust makes "Subtle linguistic distinctions" in a lot of places and also in concurrency.

Please explain


Runtime borrow checking: RefCell<T> and Rc<T>. Can give other examples, but admittedly they need `unsafe` blocks.

Anyways, the article author lacks basic reading skills, since he forgot to mention that the Go http doc states that only the http client transport is safe for concurrent modification. There is no "subtlety" about it. It directly says so. Concurrent "use" is not Concurrent "modification" in Go. The Go stdlib doc uses this consistently everywhere.


> Runtime borrow checking: RefCell<T> and Rc<T>. Can give other examples, but admittedly they need `unsafe` blocks.

Where are the “subtle linguistic distinctions”? These types do two completely different things. And neither are even capable of being used in a multithreaded context due to `!Sync` (and `!Send` for Rc and refguards)


I did say "runtime borrow checking", i.e. using them together, for example `Rc::new(RefCell::new(value));`. Borrowing it mutably twice will panic at runtime. Maybe I should have used the phrase "dynamic borrowing"?

https://play.rust-lang.org/?version=stable&mode=debug&editio...

You don't need different threads. I said concurrency not multi-threading. Interleaving tasks within the same thread (in an event loop for example) can cause panics.


I understand what you meant (but note that allocating an Rc isn’t necessary; &RefCell would work just fine). I just didn’t see the “subtle linguistic distinctions” - and still don’t… maybe you could point them out for me?

https://doc.rust-lang.org/stable/std/cell/struct.RefCell.htm...

https://doc.rust-lang.org/stable/std/cell/struct.RefCell.htm...


Yeah, it is a crappy example. Ignore me. I just re-read and the rustdoc has no “subtle linguistic distinctions”.


Runtime borrow checking panics if you use the non-try version, and if you're careful enough to use try_borrow() you don't even have to panic. Unlike Go, this can never result in a data race.

If you're using unsafe blocks you can have data races too, but that's the entire point of unsafe. FWIW, my experience is that most Rust developers never reach for unsafe in their life. Parts of the Rust ecosystem do heavily rely on unsafe blocks, but this still heavily limits their impact to (usually) well-reviewed code. The entire idea is that unsafe is NOT the default in Rust.


Not GP but off the top of my head: async cancellation, mutex poisoning, drop+clone+thread interactions, and the entire realm of unsafe (which specific language properties no longer hold in an unsafe block? Is undefined behavior present if there’s a defect in unsafe code, or just incorrect behavior? Both answers are indeed subtle and depend on the specifics of the unsafe block). And auto deref coercion, knowing whether a given piece of code allocates, and “into”/turbofish overload lookup, but those subtleties aren’t really concurrency related.

I like Rust fine, but it’s got plenty of subtle distinctions.


In my experience that's because most users run their monitors on 100% brightness instead of turning that down. I prefer light mode unless it's really dark.


I suspect a lot of the comfort preferences come from there.

The average monitor has a brightness level equivalent to screaming in a study room, and a color calibration that assumes fluorescent office lighting.


I was excited that Firefox finally exposed its local translations as an API, but it's Chrome-only (still?). It will be nice for userscripts, for example to replace Twitter's translation button that hardly ever works.


> I was excited that Firefox finally exposed its local translations as API, but it's Chrome-only (still?).

Because it was, is, and will be Chrome-only for the foreseeable future: https://qht.co/item?id=44375326


If you consult with someone over their project, then proceed to fork it behind their back, that's just being a dick, even if it was perfectly legal. We should not accept that kind of behavior. And that's even ignoring that the consultation was unpaid and the project was actually even stolen.


> We should not accept that kind of behavior.

What exactly is this supposed to mean? We will not be asked. Only alienated teens care if strangers "accept" them.


There's a pinned comment on the video about the cable:

    For everyone that's interested, the included USB cable is wired like this:
    
    GND    -------> GND
    D+  -------> VCC
    D-   -------> VCC
    VCC -------> VCC
    
    So it is a non-standard cable! I measured by checking continuity between the USB A plug, and the USB C connector with a USB A adapter on it. None of my probes are small enough for a USB C connector's pins directly.

    There was a bit in the video about this that ended up on the cutting room floor, my bad!


Where do you even get a cable like that? If the answer is "they made it themselves" why go with a USB-C connector then? Seems like any old barrel connector would be both cheaper and easier to work with.

Unless it was really to put "USB-C" on the box.


No one's gonna pay just for the sidebar being collapsible. Maybe someone is going to because it was the last straw, but most are just going to be annoyed. Better ask for money only for something with real value (which I'm sure the pro tier also includes).


Hey,

I found your library a few weeks ago when I was annoyed by nothing like this being built into the standard library. It’s been a breeze to use so far.

A neat trick I found to gauge bottlenecks in pipelines is using buffers between steps and running a goroutine that periodically prints `len(buffered)/cap(buffered)`.


Thank you very much for the feedback. I thought about something similar some time ago. Buffer of size one, then measure the average time each item spends in the buffer. But for debugging your approach is simpler and more practical.


Used to vendor for a bit, but it's messy in git. If anything disappears, it should still be easy enough to recover it from somewhere.

