I really like Jan, especially the organization's principles: https://jan.ai/
The main deal breaker for me when I tried it was that I couldn't talk to multiple models at once, even if they were remote models on OpenRouter. If I ask a question in one chat, then switch to another chat and ask a question, it blocks until the first one is done.
Also Tauri apps feel pretty clunky on Linux for me.
diggan · 21h ago
> Also Tauri apps feel pretty clunky on Linux for me.
All of them, or this one specifically? I've developed a bunch of tiny apps for my own use (on Linux) with Tauri (the largest is maybe 5-6K LoC) and they've always felt snappy to me, mostly doing all the data processing in Rust and then the UI part with ClojureScript+Reagent.
_the_inflator · 20h ago
Yep. I really see them as an architecture blueprint with a reference implementation, and not so much as a one-size-fits-all app.
I stumbled upon Jan.ai a couple of months ago when I was considering a similar app approach. I was curious because Jan.ai went way beyond what I had considered to be limitations.
I haven't tried Jan.ai yet; I see it as an implementation, not a solution.
signbcc · 18h ago
> especially the organization's principles
I met the team late last year. They're based out of Singapore and Vietnam. They ghosted me after promising two follow-up meetings, and were unresponsive to any emails, like they just dropped dead.
Principles and manifestos are a dime a dozen. It matters if you live by them or just have them as PR pieces. These folks are the latter.
dcreater · 16h ago
With a name like Menlo Research, I assumed they were based in Menlo Park. They probably intended that.
inkyoto · 7h ago
> Main deal breaker for me when I tried it was I couldn't talk to multiple models at once […]
… which seems particularly strange considering that the cloned GitHub repository is 1.8 GiB, swelling to 4.8 GiB after running «make build» – I tried to build it locally (which failed anyway).
It is startling that a relatively simple UI frontend can add 3 GiB+ of build artefacts alone – that is the scale of a Linux kernel build.
c-hendricks · 21h ago
Yeah, webkit2gtk is a bit of a drag
roscas · 1d ago
Tried to run Jan but it does not start the llama server. It also tries to allocate 30 GB, which is the size of the model, but my VRAM is only 10 GB and my machine has 32 GB, so it does not make sense. Ollama works perfectly with 30B models.
Another thing that is not good is that it makes constant connections to GitHub and other sites.
hoppp · 21h ago
It probably loads the entire model into RAM at once, while Ollama does not; Ollama has a better loading strategy.
blooalien · 15h ago
Yeah, if I remember correctly, Ollama loads models in "layers" and is capable of putting some layers in GPU RAM and the rest in regular system RAM.
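If it helps, here's the same idea sketched with llama-cpp-python (the file name and layer count are made up; n_gpu_layers is the knob that controls the split):

    from llama_cpp import Llama

    # Offload 20 transformer layers to VRAM; the rest stay in system RAM.
    llm = Llama(
        model_path="./my-30b-model.gguf",  # hypothetical local GGUF file
        n_gpu_layers=20,
    )
    out = llm("Q: What is 2+2? A:", max_tokens=8)
    print(out["choices"][0]["text"])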
SilverRubicon · 22h ago
Did you see the feature list? It does not deny that it makes connections to other sites.
- Cloud Integration: Connect to OpenAI, Anthropic, Mistral, Groq, and others
- Privacy First: Everything runs locally when you want it to
jwildeboer · 18h ago
My name is Jan and I am not an AI thingy. Just FTR. :)
underlines · 18h ago
Jan here too, and I work with LLMs full time and speak about these topics. It's annoying how many times people ask me if Jan.ai is me, lol.
dsp_person · 18h ago
We need a steve.ai
ithkuil · 13h ago
I want a Robert Duck AI
tough · 8h ago
We're the AI's Robert's Ducks
mathfailure · 22h ago
Is this an alternative to OpenWebUI?
apitman · 21h ago
Not exactly. OWUI is a server with a web app frontend. Jan is a desktop app you install. But it does have the ability to run a server for other apps like OWUI to talk to.
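For example, other apps can talk to it with any OpenAI-compatible client. A minimal sketch with the OpenAI Python package, assuming Jan's local API server is enabled on its default port (1337, if I remember right) and a model is loaded:

    from openai import OpenAI

    # Jan's local server speaks the OpenAI wire format. The model id below is
    # hypothetical; use whatever Jan's server lists under /v1/models.
    client = OpenAI(base_url="http://localhost:1337/v1", api_key="unused")
    resp = client.chat.completions.create(
        model="qwen3-4b",
        messages=[{"role": "user", "content": "Hello from another app"}],
    )
    print(resp.choices[0].message.content)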
ekianjo · 20h ago
Open WebUI does not include a server.
cristoperb · 16h ago
It starts a webserver to serve its UI, which is what the parent comment meant. It doesn't provide its own OpenAI-style API, which I guess is what you meant.
apitman · 20h ago
I was referring to Jan.
PeterStuer · 17h ago
More an alternative to LM Studio, I think, from the description.
apitman · 15h ago
Jan also supports connecting to remote APIs (like OpenRouter), which I don't think LM Studio does.
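To be concrete about what "remote API" means here: OpenRouter speaks the same OpenAI-style protocol, so a client (or an app like Jan) just swaps the base URL and key. A sketch, with a placeholder key and an example model id:

    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key="sk-or-...",  # your OpenRouter key (placeholder)
    )
    resp = client.chat.completions.create(
        model="openai/gpt-4o-mini",  # example OpenRouter model id
        messages=[{"role": "user", "content": "hi"}],
    )
    print(resp.choices[0].message.content)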
klausa · 19h ago
So this is how women named Siri felt in 2011.
lagniappe · 18h ago
Hello Jan ;)
reader9274 · 20h ago
Tried to run gpt-oss:20b in Ollama (runs perfectly) and tried to connect Ollama to Jan, but it didn't work.
accrual · 16h ago
I got Jan working with Ollama today. Jan reported it couldn't connect to my Ollama instance on the same host despite it working fine for other apps.
I captured loopback traffic and noticed Ollama returning an HTTP 403 Forbidden response to Jan.
The solution was to set these environment variables:

    OLLAMA_HOST=0.0.0.0
    OLLAMA_ORIGINS=*
Here are the rest of the steps:
- Jan > Settings > Model Providers
- Add new provider called "Ollama"
- Set API key to "ollama" and point to http://localhost:11434/v1
- Ensure variables above are set
- Click "Refresh" and the models should load
Note: Even though an API key is not required for local Ollama, Jan apparently doesn't consider it a valid endpoint unless a key is provided. I set mine to "ollama" and then it allowed me to start a chat.
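If anyone wants to verify the same failure without a packet capture, here's a quick sanity check of Ollama's OpenAI-compatible endpoint from Python (stock port assumed):

    import urllib.request

    # Lists models via Ollama's OpenAI-compatible API. A 403 here is the same
    # symptom Jan hit for me, and usually means OLLAMA_ORIGINS needs setting.
    req = urllib.request.Request(
        "http://localhost:11434/v1/models",
        headers={"Authorization": "Bearer ollama"},  # any non-empty key works
    )
    print(urllib.request.urlopen(req).read().decode())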
Huh, yeah it looks like the GUI component is closed source. Their GitHub version only has the CLI.
diggan · 21h ago
I think at this point it's fair to say that most of the stuff Ollama does is closed source. AFAIK, only the CLI is open source; everything else isn't.
conradev · 19h ago
Yeah, and they’re also on a forked llama.cpp
kanestreet · 10h ago
Yup. They have not even acknowledged the fact that it's closed, despite a ton of questions. People are downloading it assuming it's open source, only to get a nasty surprise. No mention of it in their blog post announcing the GUI. Plus no new license for it. And no privacy policy. Feels deceptive.
accrual · 16h ago
I have been using the Ollama GUI on Windows since release and appreciated its simplicity. It recently received an update that puts a large "Turbo" button in the message box that links to a sign-in page.
I'm trying Jan now and am really liking it - it feels friendlier than the Ollama GUI.
dcreater · 16h ago
And Ollama's founder was on here posting that they are still focused on local inference... I don't see Ollama as anything more than a funnel for their subscription now.
numpad0 · 5h ago
I truly don't understand why this is supposed to be the end of the world. They need to monetize eventually, and at the same time their userbase desires good inference. It looks like a complete win-win to me. Anyone can fork it in case they actually turn evil, should that ever happen.
I mean, it's not like people enjoy the lovely smell of cash burning and bias their opinions heavily towards it, or is it like that?
semessier · 21h ago
Still looking for vLLM to support Mac ARM Metal GPUs.
baggiponte · 20h ago
Yeah. The docs tell you that you should build it yourself, but…
tough · 8h ago
But unlike CUDA, there are no custom kernels for inference in the vLLM repo...
I think
bogdart · 22h ago
I tried Jan last year, but the UI was quite buggy. Maybe they've fixed it since.
diggan · 21h ago
Please do try it out again: if things used to be broken but no longer are, that's a good signal that they're gaining stability :) And if it's still broken, that's an even better signal that they're not addressing bugs, which would be worse.
esafak · 20h ago
So you're saying bugs are good?!
diggan · 20h ago
No, but their shared opinion will be a lot more insightful if they provide a comparison between then and now, instead of leaving it at "it was like that before, now I don't know".
venkyvb · 20h ago
How does this compare to LM Studio?
rmonvfer · 19h ago
I use both, and Jan is basically the OSS version of LM Studio with some added features (e.g., you can use remote providers).
I first used Jan some time ago and didn't really like it, but it has improved a lot, so I encourage everyone to try it. It's a great project.
angelmm · 19h ago
For me, the main difference is that the LM Studio main app is not OSS. But they are similar in terms of features, although I didn't use LM Studio that much.
Can't make it work with the Ollama endpoint.
This seems to be the problem, but they're not focusing on it: https://github.com/menloresearch/jan/issues/5474#issuecommen...