Well, somehow I missed this one when looking for a GUI library on https://areweguiyet.com some time ago, and I was very surprised just now when it compiled and ran on the first try:
git clone https://github.com/hecrj/iced.git
cd iced/
cargo run --package tour
This ran the Tour example without any glitches, warnings, missing dependencies, or other annoyances. I don't know yet whether this has what I need for my applications, but in any case it has left a very pleasant impression.
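For anyone wondering what the Elm-inspired architecture looks like in practice, here is a rough sketch of a counter, roughly in the style of the project's own counter example (treat it as an approximation; names and trait details have shifted between versions): all state lives in one struct, every interaction is a Message, update() is the only place state mutates, and view() rebuilds the widgets from the current state.

    use iced::{button, Button, Column, Text};

    // The whole application state in one place.
    struct Counter {
        value: i32,
        increment_button: button::State,
        decrement_button: button::State,
    }

    // Every possible user interaction, as plain data.
    #[derive(Debug, Clone, Copy)]
    enum Message {
        IncrementPressed,
        DecrementPressed,
    }

    impl Counter {
        // The only place state is mutated.
        fn update(&mut self, message: Message) {
            match message {
                Message::IncrementPressed => self.value += 1,
                Message::DecrementPressed => self.value -= 1,
            }
        }

        // The view is rebuilt from the current state after every update.
        fn view(&mut self) -> Column<Message> {
            Column::new()
                .push(
                    Button::new(&mut self.increment_button, Text::new("+"))
                        .on_press(Message::IncrementPressed),
                )
                .push(Text::new(self.value.to_string().as_str()))
                .push(
                    Button::new(&mut self.decrement_button, Text::new("-"))
                        .on_press(Message::DecrementPressed),
                )
        }
    }

You then hand this model to iced's runtime (the exact entry point depends on the version), and it drives the update/view loop for you.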
Don't forget to also consider what your users might need. So, unless you're developing a game or an application that does something inherently visual (e.g. a graphics editor), the lack of accessibility support (at least in the native version) should be a show-stopper. I'm a little surprised it's not even on the roadmap.
I appreciate that, but there are so many GUI toolkits out there, I can't possibly do the hard work of implementing the platform accessibility APIs for all of them. So I figure the best I can do is warn developers away from toolkits that don't already have this functionality.
Although, how does that even work, hooking into OS accessibility from a GUI that's custom-rendered from scratch? Is there an API for that which is divorced from the native OS GUI?
Windows is the one I know best. The current native accessibility API for Windows is called UI Automation. It's not tied to any high-level UI framework, though a UIA tree needs to be associated with a window handle (HWND).
Mac and iOS have Objective-C accessibility APIs as part of their respective native UI frameworks (AppKit and UIKit). I have cursory knowledge of those APIs from a previous job but never implemented either of them from scratch. Android likewise has a Java-based accessibility API as part of its framework. Yes, this means that non-Java-based toolkits have to do JNI bridging to implement accessibility on Android. sigh I've never done this myself either.
The only desktop environment for the free Unixes that has full-featured accessibility support, particularly for blind people, is GNOME, which has a D-Bus-based accessibility API called AT-SPI. GTK implements this API through a module called ATK. Qt also implements it. BTW, the Orca screen reader for GNOME was originally developed by a friend of mine.
Probably the best place to find an implementation of all of these APIs is one of the open-source web browser engines. Note though that on Windows, these engines implement a legacy accessibility API called MSAA (Microsoft Active Accessibility) and an unofficial extension of that API called IAccessible2. Chromium has a work-in-progress native UI Automation implementation, largely developed by the Microsoft Edge team.
Disclosure: I work at Microsoft, on the team that develops UI Automation and the Narrator screen reader.
Personally, I would not call GNOME the most accessible as of now. In my personal experience, MATE fares better in this regard. Of course, it has its quirks as well.
Personally, I don't think developers should be mimicking Elm or React when they have the opportunity to build a custom GUI library from scratch. Elm/React are helpful for overcoming the inherent problems of HTML component development, but that requires them to be more inefficient than typical native GUI frameworks. Simple views are easy enough to handle, but these functional-style GUIs don't tend to deal with complex views very well.
The problems in HTML component development are largely the same as in any other GUI: state management. I've run into plenty of UI state bugs across many, many applications using "typical native GUI frameworks", both open-source and proprietary. I don't think it's a solved problem.
Wiring views to reflect model state correctly is a problem that exists regardless of framework; how efficiently your framework deals with state changes from single or multiple sources is my real problem with these pure functional frameworks. We've recently backed away from React Native because of the performance problems inherent in its design.
On top of that, some GUI libraries like this one are starting to require hardware-accelerated web canvases plus Vulkan just to draw a cross-platform GUI on the desktop, which is quite frankly unnecessary for this use case unless you're developing a game.
In the end, constructing a GUI from that would be questionable and would end up even uglier than a barebones Qt5 example. You might as well create your own in-game GUI library or use Dear ImGui with Rust bindings.
> On top of that, some GUI libraries like this one are starting to require hardware-accelerated web canvases plus Vulkan just to draw a cross-platform GUI on the desktop, which is quite frankly unnecessary for this use case unless you're developing a game.
Web browser and OS UI toolkits are also GPU accelerated these days. In fact, it's completely silly to do any kind of rendering on the CPU when you have a GPU available! It's just way slower and much more expensive in terms of power consumption.
> It's just way slower and much more expensive in terms of power consumption.
It really depends on what you want to draw. There is still nothing that renders fonts with FreeType quality on the GPU, and rendering paths / drawings / SVG-like content is still a pain, as the only GPU maker who cares about 2D drawing is NVIDIA.
That was true a few years ago, but fortunately the state of the art for 2D rendering on the GPU has been moving forward. Among others, Pathfinder 3 is quite comparable in quality to FreeType, and there are lots of good results coming out for performant vector drawing on the GPU, including my own piet-metal work.
The efficiency argument is appropriate for things like real-time games, where response time should be as fast as possible.
For the other 99% of apps, a clean, clear mental model for the developer is pretty nice to have, and this looks like a very nice implementation. I'd use it!
I think SwiftUI was a waste of genuine effort that Apple could have put toward making their existing component model easier to work with: things like adding stylesheets, simplifying layout constraints, or just adding @State binding to existing components. SwiftUI tries to look like React, but really it's just normal data binding plus restricted access to the direct component tree.
SwiftUI is so much better than what came before it that it's a joke. Many bugs and missing features aside, SwiftUI is the best thing that's come out of Apple's dev work maybe ever.
> that requires them to be more inefficient than typical native GUI frameworks
Inefficient in what way? Performance? Is it noticeable? Why should I care unless it's noticeable and hindering UX? Should we all be writing our GUIs in assembly, then?
It looks very Flutter-like, with Column, Text, and other elements. It seems it may render to a canvas (not sure, though), and if so, that's how the cross-platform compatibility works, also like Flutter.
I notice that in many of these GUI frameworks, effort is duplicated. Flutter creates a lot of widgets for iOS and Android, but if you don't want to use Dart, well, you'll have to reimplement those widgets yourself. Instead, what should happen is that they should compile to a standard format, most likely WASM, and then you can use any language (that targets WASM, which I think will be most of them in the future) to interface with those widgets or components. They could sit on top of a canvas renderer such as Skia, which is what Google uses for Chrome and Flutter, and which is also used by some other GUI projects like Revery for ReasonML. This way, there's one common format, and people can write the functionality code in whatever language they'd like.
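To make that idea concrete, here is a purely hypothetical sketch (none of these names exist in Flutter, Iced, or any real framework): the widget logic is compiled to WASM or a plain C ABI and exposes ordinary exported functions, and any host language drives it and feeds the output to a canvas renderer like Skia.

    use std::sync::atomic::{AtomicI32, Ordering};

    // Hypothetical widget "module": build with --target wasm32-unknown-unknown
    // (or as a cdylib) and call these exports from any host language.
    static COUNTER_VALUE: AtomicI32 = AtomicI32::new(0);

    // The host calls this when the user activates the (imaginary) button.
    #[no_mangle]
    pub extern "C" fn widget_handle_click() {
        COUNTER_VALUE.fetch_add(1, Ordering::SeqCst);
    }

    // The host calls this to ask what the widget should display. A real design
    // would return a serialized draw list for the canvas renderer rather than
    // a single integer.
    #[no_mangle]
    pub extern "C" fn widget_current_value() -> i32 {
        COUNTER_VALUE.load(Ordering::SeqCst)
    }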
Also known as .NET and COM/UWP on Windows. As experience proves, each language that wants a piece of the pie either has to grow language extensions, or the controls have to be constrained to the API surface that can be safely exposed to all bindings, and the UI designers don't get to work with all of them.
> [widgets] should compile to a standard format, most likely WASM, and then you can use any language (...) to interface with those widgets or components.
Or be based on a standard canvas-level API that would be easy to support in most languages (à la OpenGL, but much easier to use for 2D, and more complete (windowing, user events, text, images, etc.), or like P0267R0 but not confined to a particular language).
At least the web version uses the web browser's own element types (text boxes, radio buttons, etc.), so it could be as accessible as anything else. I'd expect that to be an option for bindings to OS-native UI kits too.
Like pretty much all the other GUI attempts going on in the world of Rust, it is built on top of wgpu, which does not have any OpenGL support ("yet"). The older toolkits were built on gfx-rs, which had much better support for different GPU backends, but it was pretty much deprecated and replaced with a different project altogether.
Iced itself appears to be largely an API definition (and a project to implement that API); there are currently two largely independent backends, iced_winit and iced_web. They provide their own widget structs, event loops, and so on.
There's therefore enough of an abstraction which could be used to get Iced to create Windows/Cocoa/GTK/Qt native widgets (in much the same way it currently creates VDOM elements for the web, thus getting some level of accessibility by default). Then Iced could provide additional a11y information to the backend by means of properties on the virtual objects.
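As a purely hypothetical sketch of that idea (Iced exposes none of these types), the virtual tree a backend consumes could carry accessibility metadata next to everything else, and each backend would translate it into its platform's terms: ARIA attributes on the web, UIA properties on Windows, AT-SPI on GNOME.

    // Hypothetical illustration only; not part of Iced's API.
    #[derive(Debug, Clone)]
    enum Role {
        Button,
        Text,
    }

    #[derive(Debug, Clone)]
    struct Accessibility {
        role: Role,
        label: String,
        // ...checked state, value ranges, relationships, etc.
    }

    #[derive(Debug, Clone)]
    struct VirtualNode {
        accessibility: Accessibility,
        children: Vec<VirtualNode>,
    }

    // Each backend maps the same tree onto its platform API.
    trait Backend {
        fn render(&mut self, node: &VirtualNode);
    }

    struct WebBackend;

    impl Backend for WebBackend {
        fn render(&mut self, node: &VirtualNode) {
            // A real web backend would create DOM elements and set ARIA
            // attributes from `node.accessibility`; here we just print.
            println!("{:?}: {}", node.accessibility.role, node.accessibility.label);
            for child in &node.children {
                self.render(child);
            }
        }
    }

    fn main() {
        let tree = VirtualNode {
            accessibility: Accessibility {
                role: Role::Button,
                label: String::from("Increment"),
            },
            children: Vec::new(),
        };
        let mut backend = WebBackend;
        backend.render(&tree);
    }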
What? This is one of the areas where type safety truly shines. Type-safe languages force you to confirm the nature of the input and deal with the edge cases.
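A trivial Rust illustration (not tied to any particular library): the compiler simply won't let you skip a case, whether that's an unhandled message variant or a value that might not be there.

    enum Message {
        IncrementPressed,
        DecrementPressed,
    }

    fn update(value: &mut i32, message: Message) {
        // Deleting either arm is a compile error: the match must be exhaustive.
        match message {
            Message::IncrementPressed => *value += 1,
            Message::DecrementPressed => *value -= 1,
        }
    }

    // The caller is forced to decide what happens when parsing fails; there is
    // no way to silently use a value that isn't there.
    fn parse_count(input: &str) -> Option<i32> {
        input.trim().parse().ok()
    }

    fn main() {
        let mut value = 0;
        update(&mut value, Message::IncrementPressed);

        match parse_count("not a number") {
            Some(n) => println!("parsed {}", n),
            None => println!("invalid input, keeping {}", value),
        }
    }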