I'm involved with this project and wanted to provide some context. This is an extraction from a much larger effort in which we're building a web browser that can render native UI. Think instead of:
`<div>Hello, world!!</div>`
we can do:
`<Text>Hello, world!</Text>`
I want to be clear: this is not a web renderer. We are not rendering HTML. We're rendering actual native UI. So the above in SwiftUI becomes:
`Text("Hello, world!")`
And yes, we support modifiers via a stylesheet system, events, custom view registration, and really everything you would normally do directly in Swift.
Where this library comes into play: the headless browser is built in Elixir and runs on device. We communicate with the SwiftUI renderer via disterl. We've built a virtual DOM in which each node has its own Erlang process. (I can get into process limits for DOMs if people want.) The Document communicates from the process directly to the corresponding SwiftUI view.
We've taken this a step further by actually compiling client-side JS libs to WASM and running them in our headless browser, bridging back to Elixir with Wasmex. If this works we'll be able to bring the development ergonomics of the Web to every native platform that has a composable UI framework. So think of actual native targets for Hotwire, Livewire, etc.
We can currently build for nearly all SwiftUI targets: macOS, iPhone, iPad, Apple Vision Pro, Apple TV. Watch is the odd one out because it lacks the on-device networking that we require for this library.
This originally started as the LiveView Native project but due to some difficulties collaborating with the upstream project we've decided to broaden our scope.
Swift's portability means we should be able to bring this to other languages as well.
We're nearing the point of integration where we can benchmark and validate this effort.
Happy to answer any questions!
adastra22 · 1h ago
> If this works we'll be able to bring the development ergonomics of the Web to every native platform that has a composable UI framework.
You appear to be saying this with a straight face. I must be missing something here. What is beneficial about the web model that native is lacking?
I hope I’m not being an old curmudgeon, but I’m genuinely confused here. To me, web dev is a lovecraftian horror and I’m thankful every day I don’t have to deal with that.
Native dev is needlessly fragmented and I’ve longed for a simple (not Qt) framework for doing cross-platform native app dev with actual native widgets, so thanks for working on that. But I’m a bit mystified at the idea of making it purposefully like web dev.
sghiassy · 11h ago
I prototyped this exact thing by parsing the DOM, building corresponding UIKit elements and styling them via CSS.
My output looked exactly like an embedded WebKit UIView though. So then the problem became: what was I making that was appreciably better?
adastra22 · 1h ago
That was exactly my question. It’s not like HTML + CSS / DOM is actually a good model for this domain. It’s just what we’ve been stuck with.
jacquesm · 1h ago
It's going to be really hard to resist the urge to put a programming language in there. It always starts innocent: 'let's do some validation'. Before you know it you're Turing complete.
jbverschoor · 36m ago
What's the difference between NativeScript UI and React Native UI?
ksec · 14h ago
> If this works we'll be able to bring the development ergonomics of the Web to every native platform that has a composable UI framework.
Holy, this will be much bigger than I thought! Can't wait to see it out.
junon · 12h ago
Sounds like things are converging more or less where I thought they would: "websites" turning into live applications, interfacing with the native UI, frameworks, etc. using a standardized API. Mainframes maybe weren't the worst idea, as this sort of sounds like a modern re-imagining of them.
The writing was more or less on the wall with WASM. I don't know if this project is really The Answer that will solve all of the problems but it sounds like a step in that direction and I like it a lot, despite using neither Swift nor Erlang.
AnonC · 8h ago
> We can currently build for nearly all SwiftUI targets: MacOS, iPhone, iPad, Apple Vision Pro, AppleTV. Watch is the odd duck out because it lacks on-device networking that we require for this library.
Could you please elaborate on the statement about Apple Watch? Apple Watch can connect to WiFi directly with Bluetooth off on its paired iPhone. Specific variants also support cellular networks directly without depending on the paired iPhone. So is it something more nuanced than the networking part that’s missing in Apple Watch?
eob · 7h ago
Third-party apps can’t use the network though. IIRC there’s an async message queue with eventual delivery that each app gets, which it can use to send messages back and forth with a paired phone app.
sethaurus · 34m ago
That was once the case, but no longer. Third-party watchOS apps can work without a phone present, up to being installed directly from the watch's App Store. They can definitely do independent networking, but there are still some restrictions; e.g., they can't network while backgrounded, and WebSockets are pretty locked down (only for audio streaming, per Apple policy).
I reckon the lack of general-purpose WebSockets is probably the issue for a system based on Phoenix LiveView.
> Didn’t Firefox build its UI in XAML long ago?

Firefox used XUL, not XAML. It still does, for some things that are not available in HTML. (By the way, you can enable devtools for the browser UI itself and take a look!)
bcardarella · 12h ago
XAML will be a target, as we intend to build a WinUI 3 client. Of the big three native targets (Apple, Android, Windows), the latter may be the easiest: from what I've seen, nearly everything is in the template already.
tough · 10h ago
woah so this is like a react-native/expo done in a sane way ???? with swift?
damn
mac-mc · 6h ago
With how complexity-happy webdevs like to get with their DOM structure, would this actually be performant compared to an equivalent webview in practice? Especially since you're using SwiftUI, which has a lot more performance footguns compared to UIKit.
aatd86 · 14h ago
I believe SwiftUI doesn't give access to the UI tree elements, unlike UIKit. So I assume you're not allowing the XML-like code to be in control of the UI?
It's rather just an alternative way to write SwiftUI code?
How do you handle state? Isomorphically to what is available in SwiftUI?
Is your vDOM in fact an alternate syntax for an (abstract) syntax tree?
Is it to be used as an IR for writing SwiftUI code differently?
How is it different from Lynx? React Native? (It probably is, beyond the XML-like syntax; again, state management?)
Quite interesting!
bcardarella · 13h ago
That's correct, but we can make changes to the views at runtime and these merge into the SwiftUI view tree. That part has been working for years. As for how we take the document and convert it to SwiftUI views: there is no reflection in Swift and no runtime eval. The solution is pretty simple: a dictionary. We just map the tag name of an element to the View struct. Same with modifiers.
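The dictionary approach described above can be sketched roughly like this. Everything here (`NativeView`, `TextView`, `registry`) is illustrative, not the project's actual API; a stand-in protocol replaces SwiftUI so the sketch stays platform-neutral:

```swift
// Tag name -> factory closure, standing in for "tag name mapped to the View struct".
protocol NativeView { var rendered: String { get } }

struct TextView: NativeView {
    let content: String
    var rendered: String { "Text(\"\(content)\")" }
}

let registry: [String: (String) -> NativeView] = [
    "Text": { TextView(content: $0) }
]

func render(tag: String, text: String) -> String {
    // Unknown tags degrade gracefully; no reflection or runtime eval needed.
    registry[tag]?(text).rendered ?? "<unregistered: \(tag)>"
}

print(render(tag: "Text", text: "Hello, world!"))  // Text("Hello, world!")
```

The point is that lookup is a plain hash-table hit, which is why the approach avoids any need for Swift reflection.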
bcardarella · 13h ago
As far as how it's different from React Native: that's a good question, and one with some irony worth recognizing, which is that, as I understand it, without React Native our project probably wouldn't exist. From what I understand, RN proved that composable UI was the desired UX even on native. Prior to RN we had UIKit and whatever Android had; RN came along, and now we have SwiftUI and Jetpack Compose, both composable UI frameworks. We can represent any composable UI framework as markup. Not so much with the prior native UI frameworks, at least not without defining our own abstraction above them.
As far as the differentiator: backend. If you're sold on client-side development, then I don't think our solution is for you. If, however, you value SSR and want a balance between frontend and backend, that's our market. So for a Hotwire app you could have a Rails app deployed that accepts an `Accept: application/swiftui` header, and we send the proper template to the client. Just like the browser, we parse and build the DOM and instantiate the Views in the native client. There are already countless examples of SSR native apps in the App Store. As long as we aren't shipping code it's OK, and we're not: just markup that represents UI state. The state is managed on the server.
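The content negotiation described here is ordinary HTTP. A minimal client-side sketch (the URL and route are placeholders, and the `application/swiftui` media type is taken from the comment above, not a registered type):

```swift
import Foundation
#if canImport(FoundationNetworking)
import FoundationNetworking  // URLRequest lives here on Linux
#endif

// The native client asks the same server route for a SwiftUI template
// instead of HTML by setting the Accept header.
var request = URLRequest(url: URL(string: "https://example.com/posts")!)
request.setValue("application/swiftui", forHTTPHeaderField: "Accept")

print(request.value(forHTTPHeaderField: "Accept") ?? "")
```

On the server side, a Rails or Phoenix app would branch on that header in its usual respond-to/format machinery and render a native template rather than an HTML one.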
Another area where we differ is that we target the native UI framework; we don't have a unified UI framework. So you will need to know HTML for web, SwiftUI for iOS, Jetpack Compose for Android. This is necessary to establish the primitives that we can hopefully build on to create a unified UI framework (or maybe someone solves that for us?).
With our WASM compilation, we may even be able to compile React itself and have it emit native templates. No idea if that would work or not. The limits come when the JS library itself enforces HTML constraints that we don't observe, like case-sensitive tag names and attributes.
What about offline mode? For use cases that don't require it, you're all set. We have lifecycle templates that ship on device for different app states, like being offline. If you want offline, we have a concept that we haven't implemented yet: for Elixir we can ship a version of the LV server on device that works locally and then does a data sync.
wahnfrieden · 11h ago
Apple has allowed shipping code over the internet for many years now. They just don’t allow dramatically changing the purpose of the app, etc.
bcardarella · 11h ago
Apple also doesn't allow JITs, and you cannot hot load code to modify the application.
mac-mc · 6h ago
You can if it's JavaScript, and this sounds like one of the "server-side UI" projects that many large tech companies end up starting.
wahnfrieden · 2h ago
Or any WASM-target language
syndeo · 8h ago
I've heard a blind eye is turned to React Native code changes, as long as it's not something drastic (or outright malicious, like Epic).
wahnfrieden · 2h ago
You don't need a JIT to hot-load code; that's irrelevant.
And yes, you can hot-load code to modify the application, as long as you don't alter the purpose or scope of features under review. There is also a specific callout that you can dynamically load "casual games" from a community of contributing creators.
You're repeating outdated nonsense from over a decade ago! Understanding current App Store guidelines can be key to finding a competitive edge when there are so many people like yourself scaring devs off things that Apple now allows.
Not sure, as I haven't done any work with it. On a cursory glance it could have some overlap, but it appears not to target the first-class UI frameworks; it looks to be a UI framework unto itself, so more of a Flutter than what we're doing is my very quick guess. We get major benefits from targeting the first-class UI frameworks, primarily that we let them do the work. Developing a native UI framework is, I think, way more effort than what we've done, so we let Apple, Google, and Microsoft decide what the desired user experience is on their devices, and we just allow our composable markup to represent those frameworks. A recent example is the new "glass" iOS 26 UI update: we had our client updated for the iOS 26 beta on day 1 of its release. Flutter has to rewrite their entire UI framework if they want to adapt to this experience.
Jonovono · 15h ago
Is there somewhere to follow this project? Sounds really interesting
How does elixir_pack work? Is it bundling BEAM to run on iOS devices? Does Apple allow that?
Years ago I worked at Xamarin, and our C# compiler compiled C# to native iOS code but there were some features that we could not support on iOS due to Apple's restrictions. Just curious if Apple still has those restrictions or if you're doing something different?
we compile without the JIT so we can satisfy the App Store requirements
toast0 · 6h ago
I haven't been following BeamAsm that closely, because I'm not working in Erlang at work.... But it strikes me that there's not really a reason that the JIT has to run at runtime, although I understand why it is built that way. If performance becomes a big issue, and BeamAsm provides a benefit for your application (it might not!), I think it would be worth trying to figure out how to assemble the beam files into native code you can ship onto restrictive platforms without shipping the JIT assembler.
Jonovono · 15h ago
Wow, unreal, thanks! I was playing around with Liveview native awhile back. Will definitely be following this along
arcanemachiner · 12h ago
> some difficulties collaborating with the upstream project
Can you elaborate on this?
victorbjorklund · 14h ago
In what way will this be different from Liveview Native?
bcardarella · 13h ago
We're delivering LVN, as I've promised the Elixir community this for years; from LVN's perspective nothing really changes. We hit real issues when trying to support live components and nested LiveViews: if you look at the liveview.js client code, those two features make significant use of the DOM API, as they're doing significant tree manipulation. For the duration of this project we've been circling the drain on building a browser, and about three months ago I decided that we just had to go all the way.
arcanemachiner · 12h ago
I hope I'm not reading into this too cynically, but your phrasing makes it sound like the project is not going as well as originally hoped.
It's pretty well-established at this time that cross-platform development frameworks are hard for pretty much any team to accomplish... Is work winding down on the LiveView Native project, or do you expect to see an increase in development?
bcardarella · 12h ago
The LVN Elixir libraries are pretty much done, and those really shouldn't change outside of perhaps additional documentation. I have been back and forth on the 2-arity function components that we introduced; I may change that back to 1-arity and move over to annotating the function, similar to what function components already support. That 2-arity change was introduced in the current release candidate, so we're not locked in on the API yet.
What is changing is how the client libraries are built. I mentioned in another comment that we're building a headless web browser; if you haven't read it I'd recommend it, as it gives a lot of detail on what we're attempting to do. Right now we've more or less validated every part with the exception of overall render performance. This effort replaces LVN Core, which was built in Rust. The Rust effort used UniFFI to message-pass to the SwiftUI client, and boot time was almost instant. With the Elixir browser we will have more overhead: boot time is slower, and I believe disterl could carry more overhead than UniFFI bindings. However, the question will come down to whether that overhead is significant or not. I know it will be slower, but if the overall render time is still performant then we're good.
The other issue we ran into was when we started implementing more complex LiveView features like Live Components. While LVN Core has worked very well, I believe its implementation was incorrect. It had passed through four developers and was originally only intended to be a template parser. It grew as we figured out what the best path forward should be, and sometimes that path meant backing up and ditching tech we'd built that was a dead end for us. Refactoring LVN Core into a browser, I felt, was going to take more time than doing it in Elixir. I built the first implementation in about a week, but the past few months have been spent building GenDOM. That may still take over a year, but we're prioritizing the DOM API that LiveView, Hotwire, and Livewire will require. Then the other 99% of the DOM API will be a grind.
But to your original point: going the route of the browser implementation means we are no longer locked into LiveView, as we should be able to support any web client that does similar server/client interactivity. This means our focus will no longer be on LiveView Native individually but on ensuring that the browser itself is stable and can run the API necessary for any JS-built client to run on.
I don't think we'd get to 100% compatibility with LiveView itself without doing this.
arcanemachiner · 11h ago
Oh wow, that actually sounds very promising! Thanks for the follow-up.
carson-katri · 16h ago
I worked on this project, thanks for sharing it! This is part of a larger otp-interop GitHub org with projects for things like running BEAM on mobile devices, disterl over WebSockets, filtering messages sent over disterl, etc. Happy to answer any questions about the project.
__turbobrew__ · 16h ago
I have read that Erlang/OTP doesn’t work well in high-latency environments (for example, on a mobile device); is that true? Are there special considerations for running OTP across a WAN?
bcardarella · 15h ago
I hate to say this, but usually when I hear that people have problems making Erlang/Elixir fast, it comes down to a skill issue. Too often devs exploring the language come from another language, implement code as they would there, and then see it's not performant. When we've dug into these issues we usually find misunderstandings of how to properly architect Elixir apps to avoid blocking and to make as much use of distribution as possible.
__turbobrew__ · 12h ago
Ok, I just ask because I recently read the “BEAM book” and it explicitly calls out that OTP is designed to only run within a single datacenter.
bcardarella · 12h ago
You'd have to say that to all of the applications running on the BEAM that are distributed across multiple datacenters. Fly.io's entire business model is predicated on globally distributing your application using the BEAM. I'm not sure what that book said exactly; perhaps the original intent was local distribution, but Erlang has been around for over 30 years at this point. What it's evolved into today is architecturally unique compared to any other language stack and is built for global distribution with performance at scale.
SwiftyBug · 1h ago
I didn't know that Fly.io uses Elixir! They even have an entire blog dedicated to their use of Elixir: https://fly.io/phoenix-files/
> Even though Erlang’s asynchronous message-passing model allows it to handle network latency effectively (a process does not need to wait for a response after sending a message, allowing it to continue executing other tasks), it is still discouraged to use Erlang distribution in a geographically distributed system. The Erlang distribution was designed for communication within a data center or preferably within the same rack in a data center. For geographically distributed systems other asynchronous communication patterns are suggested.
Not clear why they make this claim, but I think it refers to how Erlang/OTP handles distribution out of the box. Tools like Partisan seem to provide better defaults: https://github.com/lasp-lang/partisan
toast0 · 6h ago
I've run dist cross datacenters. Dist works, but you need to have excellent networking or you will have exciting times.
It's pretty clear, IMHO, that dist was designed for local networking scenarios. Mnesia in particular was designed for a cluster of two nodes that live in the same chassis. The use case was a telephone switch that could recover from failures and have its software updated while in use.
That said, although OTP was designed for a small use case, it still works in use cases way outside of that. I've run dist clusters with thousands of nodes, spread across the US, with nodes on east coast, west coast and Texas. I've had net_adm:ping() response times measured in minutes ... not because the underlying latency was that high, but because there was congestion between data centers and the mnesia replication backlog was very long (but not beyond the dist and socket buffers) ... everything still worked, but it was pretty weird.
Re Partisan, I don't know that I'd trust a tool that says things like this in their README:
> Due to this heartbeating and other issues in the way Erlang handles certain internal data structures, Erlang systems present a limit to the number of connected nodes that depending on the application goes between 60 and 200 nodes.
The amount of traffic used by heartbeats is small. If managing connections and heartbeats for connections to 200 other nodes is not small for your nodes, your nodes must be very small ... you might ease your operations burden by running fewer but larger nodes.
I had thought I favorited a comment, but I can't find it again; someone had linked to a presentation from WhatsApp after I left, and they have some absurd number of nodes in clusters now. I want to say on the order of hundreds of thousands. While I was at WhatsApp, we were having issues with things like pg2 that used the global module to do cluster wide locking. If those locks weren't acquired very carefully, it was easy to get into livelock when you had a large cluster startup and every node was racing to take the same lock to do something. That sort of thing is dangerous, but after you hit it once, if you hit it again, you know what to hammer on, and it doesn't take too long to fix it.
Either way, someone who says you can't run a 200 node dist cluster is parroting old wives' tales, and I don't trust them to tell you about scalability. Head of line blocking can be an issue in dist, but one has to be very careful to avoid breaking causality if you process messages out of order. Personally, I would focus on making your TCP networking rock solid, and then you don't have to worry about head of line blocking very often.
That said, to answer this from earlier in the thread
> I have read the erlang/OTP doesn’t work well in high latency environments (for example on a mobile device), is that true? Are there special considerations for running OTP across a WAN?
OTP dist is built upon the expectation that a TCP connection between two nodes can be maintained as long as both nodes are running. If that expectation isn't realistic for your network, you'll probably need to use something else, whether that's a custom dist transport, or some other application protocol.
For mobile ... I've seen TCP connections from mobile devices stay connected upwards of 60 days, but it's not very common; iOS and Android aren't built for it. But that's not really the issue, because the bigger issue is that dist has no security barriers. If someone is on your dist, they control all of the nodes in your cluster. There is no way that's a good idea for a phone to be connected into, especially if it's a phone you don't control that's running an app you wrote to connect to your service --- there's no way to prevent someone from taking your app, injecting dist messages, and spawning whatever they want on your server... that's what you're inviting if you use dist.
This application is running dist between BEAM on the phone and Swift on the phone, so lack of a security barrier is not a big issue, and there shouldn't be any connectivity issues between the two sides (other than if it's hard to arrange for dist to run on a unix socket or something)
That said, I think Erlang is great, and if you wanted to run OTP on your phone, it could make sense. You'd need to tune runtime/startup, and you'd need to figure out some way to do UX, and you'd need to be OK with figuring out everything yourself, because I don't think there's a lot of people with experience running BEAM on Android. And you'd need to be ok with hiring people and training them on your stack.
__turbobrew__ · 10h ago
Thanks for getting the quote
carson-katri · 15h ago
In our use-case, we're running the client and server on the same device. But if you're connecting a mobile device to a "server" node, you would probably want to be careful how you link processes and avoid blocking on any calls to the client.
rozap · 15h ago
This is neat. I'm not a swift user, but I did work on a project where we made heavy use of JInterface (which ships in OTP), which is effectively the same thing but for JVM languages. It worked great and allowed easy reuse of a bunch of Java libraries we already had. Pretty painless interop model, imo.
andy_ppp · 5h ago
This is fantastic, you can write 99% of your system in Elixir and then if you need crazy performance you can write a GenServer in Swift that’ll give you near C/Go performance.
innocentoldguy · 4h ago
Elixir and Erlang outperform Go when under sustained load, so it may not be necessary to write a GenServer in Swift to achieve Go's performance.
The top four areas where I've seen Elixir and Erlang outshine Go are concurrent workloads, memory management, fault-tolerance, and distributed systems.
andy_ppp · 1h ago
It depends what you're doing, doesn't it? Maybe with Nx, Axon, and EXLA most of the concerns are no longer an issue. However, there will always be some cases where doing things in lower-level, mutable languages is faster.
nikolayasdf123 · 5h ago
so can iOS devices (say iPhones) join Erlang actor cluster now? can someone explain?
DeepYogurt · 10h ago
Very cool work.
cyberax · 16h ago
One thing for which I can't get a clear answer: Swift uses automatic reference counting, and it seems that requires atomic operations, which are expensive.
Does this allow you to somehow sidestep that? Since the data is all thread-local, it should be possible to use non-atomic counters?
liuliu · 16h ago
Only "class" objects are reference counted (and, to some extent, class-like objects too). Int / struct (value-semantics) types are not reference counted; these are copied eagerly.
Swift introduced a bunch of ownership keywords to help you use value types for most needs, sidestepping reference counting and minimizing copying.
Of course, to my understanding, an "actor" in Swift is a "class"-like object, so it will be reference counted. But I fail to see how that is different from other systems (as an actor itself has to be mutable, and hence a reference object anyway).
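The value-vs-reference distinction above is easy to see directly: mutating a copy of a struct never affects the original, while a second reference to a class instance aliases the same object (and bumps its retain count).

```swift
// Value types are copied; only class instances participate in reference counting.
struct PointValue { var x: Int }
final class PointRef { var x: Int; init(x: Int) { self.x = x } }

let v1 = PointValue(x: 1)
var v2 = v1        // independent copy: no retain/release traffic
v2.x = 99

let r1 = PointRef(x: 1)
let r2 = r1        // second reference to the same object: refcount goes up
r2.x = 99

print(v1.x, r1.x)  // 1 99
```

This is why keeping hot data in structs (and using the ownership keywords mentioned above) sidesteps most ARC cost in practice.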
brandonasuncion · 16h ago
And for the times you need a fast heap-allocated type, Swift's noncopyable types have been pretty great in my experience. Especially for graph data structures, where previously retains/releases would be the biggest slowdown.
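For readers who haven't met them: a `~Copyable` type (Swift 5.9+) is moved rather than copied, so the compiler never needs to insert retain/release pairs for it. A tiny sketch (the `Resource` type is purely illustrative):

```swift
// A noncopyable struct: ownership moves; copies are a compile error.
struct Resource: ~Copyable {
    let id: Int
    consuming func take() -> Int { id }   // consumes self; no copy possible
}

let r = Resource(id: 7)
// let r2 = r          // error: 'r' has a noncopyable type
let id = r.take()      // moves `r`; it cannot be used afterwards
print(id)
```

The ownership rules are checked statically, which is what removes the reference-counting slowdown the comment above describes for graph-heavy code.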
“BRC is based on the observation that most objects are only accessed by a single thread, which allows most RC operations to be performed non-atomically. BRC leverages this by biasing each object towards a specific thread, and keeping two counters for each object --- one updated by the owner thread and another updated by the other threads. This allows the owner thread to perform RC operations non-atomically, while the other threads update the second counter atomically.“
(I don’t know whether Swift uses this at the moment)
The Swift compiler does lifetime and ownership analysis to eliminate much of the ARC overhead, beyond what is truly shared between threads and the like.
I'm not sure how it can detect that outside of trivial cases. Any object that is passed into a library function can escape the current thread, unless the compiler can analyze all the binary at once.
slavapestov · 13h ago
> Any object that is passed into a library function can escape the current thread,
In Swift 6 this is only true if the value’s type is Sendable.
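To illustrate the point: under Swift 6 strict concurrency, only `Sendable` values may cross a task boundary, so the compiler does know which values can escape the current thread. A minimal sketch (types here are illustrative):

```swift
// Value types with Sendable members conform implicitly; a class with
// mutable state does not.
struct Settings: Sendable { let retries: Int }
final class MutableCache { var store: [String: Int] = [:] }  // not Sendable

let s = Settings(retries: 3)
// Task.detached { print(s.retries) }      // ok: Settings is Sendable
// Capturing a MutableCache the same way is rejected under
// -strict-concurrency=complete, so it provably stays on this thread.
print(s.retries)
```

That static guarantee is what lets the optimizer treat non-Sendable values as thread-local when deciding whether atomic reference counting is needed.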
llm_nerd · 13h ago
Many (and increasingly most) Swift libraries are delivered largely as Swift modules with serialized SIL (Swift Intermediate Language). The compiler can indeed trace right into those calls and determine object lifetime and whether a value escapes. It is far more comprehensive than often presumed.
Though the vast majority of cases where ARC would come into play are of the trivial variety.
Didn’t Firefox build its UI in XAML long ago?
https://en.m.wikipedia.org/wiki/Extensible_Application_Marku...
https://news.ycombinator.com/item?id=8730903
damn
How is it different from Lynx? React Native? (probably is, besides the xml like syntax, again state management?)
Quite interesting !
As far as the differentiator: backend. If you're sold on client-side development then I don't think our solution is for you. If however you value SSR and want a balnance between front end and backend that's our market. So for a Hotwire app you could have a Rails app deployed that can accept a "ACCEPT application/swiftui" and we can send the proper template to the client. Just like the browser we parse and build the DOM and insantiate the Views in the native client. There are already countless examples of SSR native apps in the AppStore. As long as we aren't shipping code it's OK, which we're not. Just markup that represents UI state. The state would be managed on the server.
Another areas we differ is that we target the native UI framework, we don't have a unified UI framework. So you will need to know HTML - web, SwiftUI - iOS, Jetpack Compose - Android. This is necessary to establish the primitives that we can hopefully get to the point to build on top of to create a unified UI framework (or maybe someone solves that for us?)
With our WASM compilation, we may even be able to compile React itself and have it emit native templates. No idea if that would work or not. The limits come when the JS library itself enforces HTML constraints that we don't observe, like the case sensitivity of tag names and attributes.
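To make the tag-name issue concrete: HTML-oriented toolchains tend to lowercase tag names, while native view names like `Text` or `VStack` are case-sensitive. A hypothetical registry could tolerate both by keeping a case-folded index alongside the canonical one (names and structure here are my own illustration, not the project's API):

```python
# Sketch: a view registry that tolerates lowercased tag names coming out
# of an HTML-oriented JS toolchain. Purely illustrative.

class ViewRegistry:
    def __init__(self):
        self._by_canonical = {}   # "Text" -> view factory
        self._by_folded = {}      # "text" -> "Text"

    def register(self, name, factory):
        self._by_canonical[name] = factory
        self._by_folded[name.lower()] = name

    def resolve(self, tag):
        # Exact match first (native templates preserve case) ...
        if tag in self._by_canonical:
            return self._by_canonical[tag]
        # ... then fall back to a case-insensitive lookup for tags that a
        # JS library has already lowercased.
        canonical = self._by_folded.get(tag.lower())
        return self._by_canonical.get(canonical)
```

The trade-off is that two native views differing only in case could no longer coexist, which is why observing HTML's case rules is a real constraint rather than a cosmetic one.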
What about offline mode? Well, for use cases that don't require it you're all set. We have lifecycle templates that ship on device for different app states, like being offline. If you want offline support, we have a concept that we haven't implemented yet: for Elixir, we can ship a version of the LV server on device that works locally and then does a data sync.
And yes you can hot load code to modify the application. As long as you don't alter the purpose or scope of features under review. There is a specific callout as well that you can dynamically load in "casual games" from a community of contributing creators.
You're repeating outdated nonsense from over a decade ago! Understanding current App Store guidelines can be key to finding a competitive edge when there are so many like yourself who scare devs off doing things that Apple now allows.
* our vDOM: https://github.com/liveview-native/gen_dom
* selector parsing: https://github.com/liveview-native/selector
* compile Elixir to iOS: https://github.com/otp-interop/elixir_pack
Years ago I worked at Xamarin, and our C# compiler compiled C# to native iOS code but there were some features that we could not support on iOS due to Apple's restrictions. Just curious if Apple still has those restrictions or if you're doing something different?
We compile without the JIT so we can satisfy the App Store requirements.
Can you elaborate on this?
It's pretty well-established at this time that cross-platform development frameworks are hard for pretty much any team to accomplish... Is work winding down on the LiveView Native project, or do you expect to see an increase in development?
What is changing is how the client libraries are built. I mentioned in another comment that we're building a headless web browser; if you haven't read it I'd recommend it, as it gives a lot of detail on what we're attempting to do. Right now we've more or less validated every part with the exception of the overall render performance. This effort replaces LVN Core, which was built in Rust. The Rust effort used UniFFI to message-pass to the SwiftUI client, and boot time was almost instant. With the Elixir browser we will have more overhead: boot time is slower, and I believe disterl could carry more overhead than UniFFI bindings. However, the question will come down to whether that overhead is significant or not. I know it will be slower, but if the overall render time is still performant then we're good.
The other issue we ran into was when we started implementing more complex LiveView features like Live Components. While LVN Core has worked very well, I believe its implementation was incorrect. It had passed through four developers and was originally only intended to be a template parser. It grew as we figured out what the best path forward should be, and sometimes that path meant backing up and ditching tech we'd built that turned out to be a dead end for us. I felt refactoring LVN Core into a browser was going to take more time than doing it in Elixir. I built the first implementation in about a week, but the past few months have been spent building GenDOM. That may still take over a year, but we're prioritizing the DOM API that LiveView, Hotwire, and Livewire will require. Then the other 99% of the DOM API will be a grind.
But to your original point, going the route of the browser implementation means we are no longer locked into LiveView; we should be able to support any web client that does similar server/client-side interactivity. This means our focus will no longer be on LiveView Native individually, but on ensuring that the browser itself is stable and exposes the API that any JS-built client needs to run.
I don't think we'd get to 100% compatibility with LiveView itself without doing this.
> Even though Erlang’s asynchronous message-passing model allows it to handle network latency effectively, a process does not need to wait for a response after sending a message, allowing it to continue executing other tasks. It is still discouraged to use Erlang distribution in a geographically distributed system. The Erlang distribution was designed for communication within a data center or preferably within the same rack in a data center. For geographically distributed systems other asynchronous communication patterns are suggested.
Not clear why they make this claim, but I think it refers to how Erlang/OTP handles distribution out of the box. Tools like Partisan seem to provide better defaults: https://github.com/lasp-lang/partisan
It's pretty clear, IMHO, that dist was designed for local networking scenarios. Mnesia in particular was designed for a cluster of two nodes that live in the same chassis. The use case was a telephone switch that could recover from failures and have its software updated while in use.
That said, although OTP was designed for a small use case, it still works in use cases way outside of that. I've run dist clusters with thousands of nodes, spread across the US, with nodes on east coast, west coast and Texas. I've had net_adm:ping() response times measured in minutes ... not because the underlying latency was that high, but because there was congestion between data centers and the mnesia replication backlog was very long (but not beyond the dist and socket buffers) ... everything still worked, but it was pretty weird.
Re Partisan, I don't know that I'd trust a tool that says things like this in their README:
> Due to this heartbeating and other issues in the way Erlang handles certain internal data structures, Erlang systems present a limit to the number of connected nodes that depending on the application goes between 60 and 200 nodes.
The amount of traffic used by heartbeats is small. If managing connections and heartbeats for connections to 200 other nodes is not small for your nodes, your nodes must be very small ... you might ease your operations burden by running fewer but larger nodes.
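A rough back-of-envelope supports this. Erlang's default net_ticktime is 60 seconds, and the runtime sends a tick to each peer roughly every quarter of that interval. Assuming something like 64 bytes on the wire per tick (an illustrative figure, including TCP/IP overhead), a node in a 200-node full mesh spends well under 1 KB/s on heartbeats:

```python
# Back-of-envelope: heartbeat bandwidth per node in a full-mesh dist cluster.
# 64 bytes/tick is an assumed on-the-wire size; net_ticktime defaults to 60 s,
# and ticks go out at roughly net_ticktime / 4 intervals per peer.

def heartbeat_bytes_per_sec(nodes, tick_bytes=64, net_ticktime=60):
    peers = nodes - 1
    tick_interval = net_ticktime / 4      # ~15 s between ticks per peer
    return peers * tick_bytes / tick_interval

rate = heartbeat_bytes_per_sec(200)
print(f"{rate:.0f} bytes/s per node")   # well under 1 KB/s
```

Even if the real per-tick cost were several times larger, the total would still be negligible next to any actual application traffic.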
I had thought I favorited a comment, but I can't find it again; someone had linked to a presentation from WhatsApp after I left, and they have some absurd number of nodes in clusters now. I want to say on the order of hundreds of thousands. While I was at WhatsApp, we were having issues with things like pg2 that used the global module to do cluster wide locking. If those locks weren't acquired very carefully, it was easy to get into livelock when you had a large cluster startup and every node was racing to take the same lock to do something. That sort of thing is dangerous, but after you hit it once, if you hit it again, you know what to hammer on, and it doesn't take too long to fix it.
Either way, someone who says you can't run a 200 node dist cluster is parroting old wives tales, and I don't trust them to tell you about scalability. Head of line blocking can be an issue in dist, but one has to be very careful to avoid breaking causality if you process messages out of order. Personally, I would focus on making your TCP networking rock solid, and then you don't have to worry about head of line blocking very often.
That said, to answer this from earlier in the thread
> I have read the erlang/OTP doesn’t work well in high latency environments (for example on a mobile device), is that true? Are there special considerations for running OTP across a WAN?
OTP dist is built upon the expectation that a TCP connection between two nodes can be maintained as long as both nodes are running. If that expectation isn't realistic for your network, you'll probably need to use something else, whether that's a custom dist transport, or some other application protocol.
For mobile ... I've seen TCP connections from mobile devices stay connected upwards of 60 days, but it's not very common, iOS and Android aren't built for it. But that's not really an issue, because the bigger issue is Dist has no security barriers. If someone is on your dist, they control all of the nodes in your cluster. There is no way that's a good idea for a phone to be connected into, especially if it's a phone you don't control, that's running an app you wrote to connect to your service --- there's no way to prevent someone from taking your app, injecting dist messages and spawning whatever they want on your server... that's what you're inviting if you use dist.
This application is running dist between BEAM on the phone and Swift on the phone, so lack of a security barrier is not a big issue, and there shouldn't be any connectivity issues between the two sides (other than if it's hard to arrange for dist to run on a unix socket or something)
That said, I think Erlang is great, and if you wanted to run OTP on your phone, it could make sense. You'd need to tune runtime/startup, and you'd need to figure out some way to do UX, and you'd need to be OK with figuring out everything yourself, because I don't think there's a lot of people with experience running BEAM on Android. And you'd need to be ok with hiring people and training them on your stack.
The top four areas where I've seen Elixir and Erlang outshine Go are concurrent workloads, memory management, fault-tolerance, and distributed systems.
Does this somehow allow sidestepping that? Since the data is all thread-local, it should be possible to use non-atomic counters?
Swift introduced a bunch of ownership keywords to help you use value types for most needs, sidestepping reference counting and minimizing copying.
Of course, to my understanding, an "actor" in Swift is a "class"-like object, so it will be reference-counted. But I fail to see how that is different from other systems (as the actor itself has to be mutable, and hence a reference object anyway).
Example here: https://forums.swift.org/t/noncopyable-generics-in-swift-a-c...
An added plus is that the Swift compiler seems to stack-promote a lot more often, compared to class/ManagedBuffer implementations.
See https://dl.acm.org/doi/10.1145/3243176.3243195:
“BRC is based on the observation that most objects are only accessed by a single thread, which allows most RC operations to be performed non-atomically. BRC leverages this by biasing each object towards a specific thread, and keeping two counters for each object --- one updated by the owner thread and another updated by the other threads. This allows the owner thread to perform RC operations non-atomically, while the other threads update the second counter atomically.“
(I don’t know whether Swift uses this at the moment)
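To make the quoted scheme concrete, here is a toy model of biased reference counting: the owner thread bumps a plain counter with no synchronization, while non-owner threads go through a lock (standing in for the atomic path in the paper). This is purely an illustration of the idea from Choi et al., not how swiftc actually emits ARC operations:

```python
import threading

class BiasedRC:
    """Toy biased reference count: fast unsynchronized path for the owner
    thread, slow synchronized path for everyone else."""

    def __init__(self):
        self.owner = threading.get_ident()  # bias toward the creating thread
        self.biased = 1                     # owner-only counter, no sync
        self.shared = 0                     # counter for all other threads
        self._lock = threading.Lock()       # stands in for atomic ops

    def retain(self):
        if threading.get_ident() == self.owner:
            self.biased += 1                # non-atomic: only the owner writes
        else:
            with self._lock:
                self.shared += 1

    def release(self):
        if threading.get_ident() == self.owner:
            self.biased -= 1
        else:
            with self._lock:
                self.shared -= 1

    def total(self):
        with self._lock:
            return self.biased + self.shared
```

Since most objects never escape their creating thread, the cheap non-atomic path dominates, which is exactly the observation the paper leverages.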
https://stackoverflow.com/questions/25542416/swift-with-no-a...
https://g.co/gemini/share/51670084cd0f - lame, but it references core concepts.
In Swift 6 this is only true if the value’s type is Sendable.
Though the vast majority of cases where ARC would come into play are of the trivial variety.