Hacker News | new | past | comments | ask | show | jobs | submit | T-R's comments | login

I'm a huge fan of it - I was never able to get one while they were officially available, I only managed to rent one from Blockbuster a few times. I eventually picked one up on eBay in the early 2000's for $60, before they became collector's items.

The appeal is definitely somewhere between intrinsic and from the rarity - not only has there been no official way to play these games for the last 30 years, they've barely even gotten a passing mention, and even emulators have been few and far between. It was sad that Teleroboxer, Wario Land, and Mario Clash were basically lost to time.

The games definitely weren't up to NES/SNES quality, but they were at least up to par with the average portable game at the time, which I think is what they were meant to be compared against. I see the Virtual Boy games as Game Boy games that suffered from the mechanics of the platform they were released on. In that light, I think they compare pretty favorably to similar lesser-known games of the time; most WonderSwan games I've played, hell even a lot of Game Gear and Game Boy games, had much more serious gameplay issues, even though they were on hardware that was arguably less... challenging.


> The games definitely weren't up to NES/SNES quality, but they were at least up to par with the average portable game at the time, which I think is what they were meant to be compared against.

Which I think is the problem: The system wasn't really portable. When 3D was all the rage, it was really a 2D system. I used the system at home when I was in the mood for "big" games.

Seems like it would have made more sense to make a 3D headset for the N64 as a high-end accessory.


The article seems to assume readers are already familiar with the context, or maybe that they'll stop to read the original paper. For those who aren't familiar, Selective Applicative Functors were presented as part of the development of the Haxl library at Facebook (after they hired a series of prominent Haskellers, like Bryan O'Sullivan, an author of "Real World Haskell", and GHC co-developer Simon Marlow). Haxl is a batching framework for Haskell (to, e.g., solve the "N+1 database query problem"), which later inspired the various DataLoader libraries in other language ecosystems.

In Haskell, there's a lot of desire to be able to write effectful code as you normally would, but with different types to do things like restrict the available actions (algebraic effects) or do optimizations like batching. The approaches generally used for this (Free Monads) do this by producing a data structure kind of like an AST; Haskell's "do" notation transforms the sequential code into Monadic "bind" calls for your AST's type (like turning .then() into .flatMap() calls, if you're coming from JavaScript), and then the AST can be manipulated before being interpreted/executed. This works, but it's fundamentally limited by the fact that the "bind" operation takes a callback to decide what to do next. A callback is arbitrary code - your "bind" implementation can't look inside it to see what it might do - so there's no room to "look ahead" to do runtime optimization.
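To make that concrete, here's a minimal free-monad sketch (all names are illustrative, not Haxl's actual API) over a single "fetch" effect. Notice that after the first `Fetch`, the rest of the program is trapped inside the `(String -> next)` callback, so an interpreter can only ever see one request at a time:

    {-# LANGUAGE DeriveFunctor #-}

    -- One effect: fetch a user's name by id.
    data FetchF next = Fetch Int (String -> next) deriving Functor

    -- A minimal free monad over that effect.
    data Free f a = Pure a | Free (f (Free f a))

    instance Functor f => Functor (Free f) where
      fmap f (Pure a) = Pure (f a)
      fmap f (Free g) = Free (fmap (fmap f) g)

    instance Functor f => Applicative (Free f) where
      pure = Pure
      Pure f <*> x = fmap f x
      Free g <*> x = Free (fmap (<*> x) g)

    instance Functor f => Monad (Free f) where
      Pure a >>= k = k a
      Free g >>= k = Free (fmap (>>= k) g)

    fetch :: Int -> Free FetchF String
    fetch uid = Free (Fetch uid Pure)

    -- The interpreter only sees one Fetch at a time: everything after it
    -- is hidden inside the callback, so there's nothing to batch against.
    run :: Free FetchF a -> IO a
    run (Pure a)           = pure a
    run (Free (Fetch i k)) = do
      putStrLn ("querying user " ++ show i)
      run (k ("user" ++ show i))
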

Another approach is to slide back to something less powerful than Monads: Applicative Functors, where the structure of the computation is known in advance. But the whole point of using Monads is that they can decide what to do next based on the runtime results of the previous operation - that they accept a callback - so by switching to Applicatives, you're by definition giving up the ability to make runtime choices, like deciding not to run a query if the last one got no results.

Selective Functors were introduced as a middle ground - solidifying the possible decisions ahead of time, while still allowing decisions based on runtime information - for example, choosing from a set of pre-defined SQL queries, rather than just running a function that generates an arbitrary one.
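The core of the class from the paper is a single operation, `select`; here's a sketch with an IO instance and a hypothetical "fallback query" helper (the helper names are mine, not from the paper):

    -- From "Selective Applicative Functors" (Mokhov et al.):
    class Applicative f => Selective f where
      -- The second computation is declared statically (it's a value, not a
      -- callback), but it only *runs* if the first yields a Left.
      select :: f (Either a b) -> f (a -> b) -> f b

    instance Selective IO where
      select mx mf = do
        x <- mx
        case x of
          Left a  -> ($ a) <$> mf   -- run the fallback action
          Right b -> pure b         -- skip it entirely

    -- "Run the second query only if the first came back empty" - both
    -- queries are visible to static analysis, but only one may execute.
    fallback :: Selective f => f (Maybe String) -> f String -> f String
    fallback primary secondary =
      select (maybe (Left ()) Right <$> primary) (const <$> secondary)
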


Abstract Algebra, looked at through the lens of Programming, is kind of "the study of good library interface design", because it describes different ways things can be "composable", like composing functions `A -> B` and `B -> C`, or operators like `A <> A -> A`, or nestable containers `C<C<T>> -> C<T>`, with laws clearly specifying how to ensure they don't break/break expectations for users, optimizers, etc. Ways where your output is in some sense the same as your input, so you can break down problems, and don't need to use different functions for each step.
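In Haskell terms, those three shapes of composability look something like this (a toy sketch, just to pin down the signatures):

    import Data.Monoid (Sum (..))

    -- Function composition: an A -> B and a B -> C compose into an A -> C.
    pipeline :: Int -> String
    pipeline = show . (+ 1)

    -- An associative operator A -> A -> A with an identity (a Monoid),
    -- so folding a list needs no special case for empty input.
    totalOf :: [Sum Int] -> Sum Int
    totalOf = mconcat

    -- Flattening a nested container: C (C a) -> C a (here, lists).
    flatten :: [[Int]] -> [Int]
    flatten = concat
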

Category Theory's approach of "don't do any introspection on the elements of the set" led it to focus on some structures that turned out to be particularly common and useful (functors, natural transformations, lenses, monads, etc.). Learning these is like learning about a new interface/protocol/API you can use/implement - it lets you write less code, use out-of-the-box tools, makes your code more general, and people can know how to use it without reading as much documentation.

Focusing on these also suggests a generally useful way to approach problems/structuring your code - rather than immediately introspecting your input and picking away at it, instead think about the structural patterns of the computation, and how you could model parts of it as transformations between different data structures/instances of well-known patterns.

As a down-to-earth example, if you need to schedule a bunch of work with some dependencies, rather than diving into hacking out a while-loop with a stack, instead model it as a DAG, decide on an order to traverse it (transform to a list), and define an `execute` function (fold/reduce). This means just importing a graph library (or just programming to an interface that the graph library implements) instead of spending your day debugging. People generally associate FP with recursion, but the preferred approach is to factor out the control flow entirely; CT suggests doing that by breaking it down into transformations between data structures/representations. It's hugely powerful, though you can also imagine that someone who's never seen a DAG might now be confused why you're importing a graph library in your code for running async jobs.
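A sketch of that pipeline (in practice you'd import a graph library for the traversal; the hand-rolled `topoSort` here is just to keep the example self-contained, and all names are made up):

    import qualified Data.Map.Strict as Map

    -- A tiny dependency graph: each job maps to the jobs it depends on.
    type Graph = Map.Map String [String]

    -- Step 1: transform the DAG into a list (a topological order).
    topoSort :: Graph -> [String]
    topoSort g = go [] (Map.keys g)
      where
        deps j = Map.findWithDefault [] j g
        go done []      = done
        go done pending =
          case [j | j <- pending, all (`elem` done) (deps j)] of
            []    -> error "cycle detected"
            ready -> go (done ++ ready) (filter (`notElem` ready) pending)

    -- Step 2: "execute" is just a fold over that list - no bespoke
    -- while-loop-with-a-stack control flow anywhere.
    execute :: Graph -> [String]
    execute g = foldl (\logs job -> logs ++ ["ran " ++ job]) [] (topoSort g)
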


I definitely agree for traversals, but Lenses need some sort of primitive support - even in Haskell they're mostly generated with Template Haskell, and the language developers have spent a long time trying to make the `record.field` accessor syntax overloadable enough to work with lenses[1][2]. Hopefully someday we'll be free from having to memorize all the lens operators.

Optics are famously abstract in implementation, but I don't think people have trouble applying them - people seem to like JQuery/CSS selectors, and insist on `object.field` syntax; it's kind of wild that no mainstream language has a first-class way to pass around the description of a location in an arbitrary data structure.

[1] https://ghc-proposals.readthedocs.io/en/latest/proposals/002...

[2] https://ghc-proposals.readthedocs.io/en/latest/proposals/015...



Optics let you concisely describe the location, but defer the dereferencing, so you could definitely approximate optics - not by passing around pointers you compute with `offsetof`, but by passing around functions that use `offsetof` to return memory locations to reference (read/write to). You could certainly write a composition operator for `*(*T) => List<*R>`... Some people have done something like it[1][2]:

    Account acc = getAccount();
    QVERIFY(acc.person.address.house == 20);

    auto houseLens = personL() to addressL() to houseL();
    std::function<int(int)> modifier = [](int old) { return old + 6; };
    
    Account newAcc2 = over(houseLens, newAcc1, modifier);
These also use templating to get something that still feels maybe a little less ergonomic than it could be, though.

[1] https://github.com/graninas/cpp_lenses [2] https://github.com/jonsterling/Lens.hpp


> should I implement the Monad, Applicative or Functor type class?

I struggled with this when I first learned Haskell. The answer is "yes, if you can". If you have a type, and you can think of a sane way to implement `pure`, `fmap`, and `bind` that doesn't break the algebraic laws, then there's really no drawback. Same for any typeclass. It gives users access to utility functions that you might not really have to document (because they follow a standard interface) and you might not even have to maintain (when you can just use `deriving`).

Doing so will let you/users write cleaner code by allowing use of familiar tools like `do` notation, or functions from libraries that say they'll work for any Monad. It saves you from coming up with new names for those functions, and saves users from having to learn them; if I see something's a Monad, I know I can just use `do` notation; if I see something's a Monoid, I know I can get an empty one with `mempty` and use `fold` with it. As long as it's not a really strange Monad, and it doesn't break any laws, it probably just works the way it looks like it does.
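As a small sketch of what "free utility functions" means in practice (the `Stats` type here is made up): implement `Semigroup`/`Monoid` lawfully and users immediately get `mempty`, `fold`, `foldMap`, etc., with no custom "mergeAll" function to name, document, or learn:

    import Data.Foldable (fold)

    -- A tiny stats accumulator: a count of samples and their sum.
    data Stats = Stats { count :: Int, total :: Int } deriving (Eq, Show)

    -- The obvious lawful combination: add componentwise (associative,
    -- with Stats 0 0 as the identity).
    instance Semigroup Stats where
      Stats c1 t1 <> Stats c2 t2 = Stats (c1 + c2) (t1 + t2)

    instance Monoid Stats where
      mempty = Stats 0 0

    -- Comes for free from Data.Foldable - works on empty lists too.
    summarize :: [Stats] -> Stats
    summarize = fold
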

If you can define `bind` et al., but it breaks the laws, it means the abstraction is leaky - things might not work as expected, or they might work subtly differently when someone refactors the code. Probably don't do that.

If you don't implement a typeclass that you could have, it just means you might have written some code where you could've used something out of the box. Same as going through old code and realizing "this giant for-loop could've just been a few function calls if I used underscore/functools or generators".

That said, it's not too common to stumble on a whole new Monad. The Tweet type probably isn't a Monad - what does it mean for a Tweet to be parameterized on another type like `Int`, as in `Tweet<Int>`? What would it mean to `flatMap`(`bind`) a function like `Int -> Tweet<String>` on it? A Tweet is probably just a Tweet. On the other hand, it's a little easier to imagine what a `JSON<Int>` might be, and what applying a function like `Int -> JSON<String>` to it might reasonably do. Or what applying an `Int -> Graph<String>` to a `Graph<Int>` might do.
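One way to imagine `JSON<Int>` (this is my own toy construction, not a standard library type): a JSON-ish tree with typed "holes" at some leaves, where `bind` grafts a sub-document into each hole:

    -- A JSON-ish tree with "holes" of type a at some leaves.
    data JSON a
      = JString String
      | JArray [JSON a]
      | JHole a          -- a placeholder to be filled in later
      deriving (Eq, Show)

    instance Functor JSON where
      fmap f (JHole a)   = JHole (f a)
      fmap _ (JString s) = JString s
      fmap f (JArray xs) = JArray (map (fmap f) xs)

    instance Applicative JSON where
      pure = JHole
      mf <*> mx = mf >>= \f -> fmap f mx

    instance Monad JSON where
      -- bind substitutes a sub-document for each hole
      JHole a   >>= k = k a
      JString s >>= _ = JString s
      JArray xs >>= k = JArray (map (>>= k) xs)

So applying an `Int -> JSON<String>` to a `JSON<Int>` means "expand every Int placeholder into a String-holed sub-document" - which is exactly the grafting behavior `flatMap` should have.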

Most Monads in practice are combinations of well known ones. Usually you'll be writing some procedural code in IO, or working with a parser, and realize "I'm writing a lot of code checking for errors", "I'm tired of explicitly passing this same argument", or "I need some temporary mutable storage", or some other Effect - so you wrap up the Monad you're using with a Monad Transformer like `ExceptT`, `ReaderT`, or `StateT` in a `newtype`, derive a bunch of typeclasses, and then just delete a bunch of messy code.
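A sketch of that wrap-and-derive pattern, using only the transformers package that ships with GHC (the `App`/`tick`/`abort` names are hypothetical):

    {-# LANGUAGE GeneralizedNewtypeDeriving #-}

    import Control.Monad.IO.Class (MonadIO)
    import Control.Monad.Trans.Class (lift)
    import Control.Monad.Trans.Except (ExceptT, runExceptT, throwE)
    import Control.Monad.Trans.State (StateT, evalStateT, get, put)

    -- Errors + a counter + IO, wrapped in a newtype; the instances
    -- are all derived rather than hand-written.
    newtype App a = App (ExceptT String (StateT Int IO) a)
      deriving (Functor, Applicative, Monad, MonadIO)

    runApp :: Int -> App a -> IO (Either String a)
    runApp s (App m) = evalStateT (runExceptT m) s

    -- "I need some temporary mutable storage": bump and return the counter.
    tick :: App Int
    tick = App (lift (do n <- get; put (n + 1); pure n))

    -- "I'm writing a lot of code checking for errors": short-circuit.
    abort :: String -> App a
    abort = App . throwE
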


Thinking too concretely about monads as boxes might make the behavior of the ListT monad transformer seem a bit surprising... unless you were already imagining your box as containing Schrödinger's cat.

I can definitely understand the author taking offense to the interaction, but now that a lot more programmers have had some experience with types like Result<T> and Promise<T> in whatever their other favorite typed language with generics is, the box/container metaphors are probably less helpful for those people than just relating the typeclasses to interfaces, and pointing out that algebraic laws are useful for limiting the leakiness of abstractions.


A good place to start is with the original Game Boy - you can probably gather from the article that, to build a GBA emulator, you need to support some aspects of the Game Boy anyway (notably the sound chip). That would give you a simpler place to start getting familiar with the general concepts (like memory-mapped IO, graphics hardware, etc) and structure (like syncing emulation speed to the sound buffer) as a foundation before jumping into the more complicated GBA.

There's tons of tutorials[1], documentation[2], and tests[3] for building a Game Boy emulator, not to mention existing implementations in every language under the sun. Your favorite LLM has also surely already read them all, so you can have as much hand-holding as you'd like.

[1] https://www.youtube.com/watch?v=e87qKixKFME&list=PLVxiWMqQvh...

[2] https://gbdev.io/pandocs/

[3] https://github.com/c-sp/game-boy-test-roms


Wow, thank you so much for all that information, it will help me a lot!


Shinjuku station has also changed dramatically from all the construction over the last 10 or so years. I lived in Shinjuku (the ward; I was actually a few stations north on the Oedo line) in 2009/2010; was back there a few weeks ago and it was unrecognizable. Right now, the whole area over by the Yodobashi Camera, and where they used to have the night bus pickup, is all walled off under active construction (if they haven't finished already).


Alexis also just posted a video on the GHC optimization pipeline:

https://www.youtube.com/watch?v=fdyh3YQ-ZWI


I think, coming from other languages, it takes a while to really absorb how much you can rely on types to tell you what a function is doing - to the degree that for some functions you can infer the implementation from them[1].

That said, Haskell documentation tends to be really lacking on some of the basics that other ecosystems get right, like where to start, shielding beginners from functions for more niche use cases, examples of how to build or deconstruct values, or end-to-end case studies. Some of the nicest things to _use_ have an API that primarily comes from typeclasses in base, and don't provide any documentation beyond a list of typeclasses. My impression has been that most Haskellers seem to be on the same page that those things need work, though - I'm optimistic about the situation improving, but it's still hard to recommend to anyone expecting the experience of working with popular libraries from more mainstream ecosystems.

[1] https://bartoszmilewski.com/2014/09/22/parametricity-money-f...

