The BitNet docker image has been updated to support both llama-server and llama-cli in Microsoft's inference framework.
An earlier update supported only llama-server, but it turns out the conversational (cnv/instruct) mode isn't supported by the server, only by the CLI, so CLI support has been reintroduced. This lets you chat with many BitNet processes in parallel using the improved conversational mode (whereas server responses were less coherent).
TL;DR: The updated extension simplifies fetching and running the FastAPI-BitNet docker container, which lets you initialize and then chat with many local llama BitNet processes (conversational CLI and non-conversational server) from within the VSCode copilot chat panel, for free.
I was able to run about 100 BitNet CLI processes before the additional processes started spilling over into the SSD swap file instead of staying in RAM. How many do you think you could run on your computer?
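For anyone curious what the extension automates, pulling and running the container by hand would look roughly like the sketch below. The image name, tag, and port mapping are assumptions for illustration only; the actual values are documented in the FastAPI-BitNet repo.

  # Hypothetical image name/tag and port; see the FastAPI-BitNet README for the real ones.
  docker pull ghcr.io/grctest/fastapi-bitnet:latest
  docker run --rm -d -p 8080:8080 ghcr.io/grctest/fastapi-bitnet:latest

Once the container is up, the extension talks to the FastAPI service inside it, which handles spawning the individual BitNet CLI/server processes.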
Links:
https://marketplace.visualstudio.com/items?itemName=nftea-ga...
https://github.com/grctest/BitNet-VSCode-Extension
https://github.com/grctest/FastAPI-BitNet