Queries and chats can also include uploaded images with the images
argument.
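For example, a query that includes an image might look like this (a minimal sketch: the file path is a placeholder, and you'll need a vision-capable model pulled locally, such as gemma3:4b):

library(rollama)

# The image path is a placeholder; point it at a real file on your machine
query("What do you see in this picture?",
      model = "gemma3:4b",
      images = "path/to/image.png")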
ollamar
The ollamar package starts up similarly, with a test_connection() function to check that R can connect to a running Ollama server, and pull("the_model_name") to download a model, such as pull("gemma3:4b") or pull("gemma3:12b").
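For example:

library(ollamar)

test_connection()   # check that R can reach the running Ollama server
pull("gemma3:4b")   # download the model if it isn't already available locally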
The generate() function generates one completion from an LLM and returns an httr2_response object, which can then be processed by the resp_process() function.
library(ollamar)

resp <- generate("gemma2", "What is ggplot2?")  # returns an httr2_response
resp_text <- resp_process(resp)                 # extract the text of the reply
Or, you can request a text response directly with syntax such as resp <- generate("gemma2", "What is ggplot2?", output = "text"). There is an option to stream the text with stream = TRUE:
resp <- generate("gemma2", "Tell me about the data.table R package", output = "text", stream = TRUE)
ollamar has other functionality, including generating text embeddings, defining and calling tools, and requesting formatted JSON output. See details on GitHub.
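For instance, generating embeddings looks something like this (a minimal sketch, assuming you've pulled an embedding model such as nomic-embed-text):

library(ollamar)

# embed() returns a matrix with one column of embeddings per input text
embeddings <- embed("nomic-embed-text",
                    c("What is ggplot2?", "What is data.table?"))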
rollama was created by Johannes B. Gruber; ollamar by Hause Lin.
Roll your own
If all you want is a basic chatbot interface for Ollama, one easy option is combining ellmer, shiny, and the shinychat package to make a simple Shiny app. Once those are installed, assuming you also have Ollama installed and running, you can run a basic script like this one:
library(shiny)
library(shinychat)

ui <- bslib::page_fluid(
  chat_ui("chat")
)

server <- function(input, output, session) {
  # The model is hardcoded here; swap in any model you've pulled locally
  chat <- ellmer::chat_ollama(system_prompt = "You are a helpful assistant", model = "phi4")
  observeEvent(input$chat_user_input, {
    stream <- chat$stream_async(input$chat_user_input)
    chat_append("chat", stream)
  })
}

shinyApp(ui, server)
That should open an extremely basic chat interface with the model hardcoded. If you don't specify a model, the app won't run; instead, you'll get an error message instructing you to specify one, along with a list of the models you've already installed locally.
I’ve built a slightly more robust version of this, including dropdown model selection and a button to download the chat. You can see that code here.
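As a minimal sketch of the model-selection part (assuming ollamar's list_models(), which returns a data frame of locally installed models with a name column):

library(shiny)
library(shinychat)

ui <- bslib::page_fluid(
  # Populate the dropdown with locally installed models
  selectInput("model", "Model", choices = ollamar::list_models()$name),
  chat_ui("chat")
)

server <- function(input, output, session) {
  # The chat object is cached and only re-created when the model changes
  chat <- reactive(
    ellmer::chat_ollama(system_prompt = "You are a helpful assistant",
                        model = input$model)
  )
  observeEvent(input$chat_user_input, {
    chat_append("chat", chat()$stream_async(input$chat_user_input))
  })
}

shinyApp(ui, server)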
Conclusion
There are a growing number of options for using large language models with R, whether you want to add functionality to your scripts and apps, get help with your code, or run LLMs locally with Ollama. It's worth trying a couple of options for your use case to find one that best fits both your needs and preferences.