Changelog

Oct 21, 2024

v1.8.2

Fixed an issue on Windows where the sanctum.api process would not terminate properly after the app was closed.

Enhanced logging to improve debugging of file processing.

Oct 17, 2024

v1.8.1

Added Llama 3.2 models to the Featured section.

Added "Share to Local Network" option for the Local Server.

Updated the local server examples to work with the latest version of the openai libraries (see the sketch at the end of this entry).

Updated llama.cpp to the b3933 release.

Resolved an issue that prevented the File Manager page from displaying more than 10 files.
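
For reference, here is what a minimal chat request to the Local Server looks like with the openai Python package (v1.x). The base URL, port, and model name are placeholders for your own server settings:

    from openai import OpenAI

    # Local servers typically do not validate the key, but the client requires one.
    client = OpenAI(base_url="http://localhost:8080/v1", api_key="sanctum")

    response = client.chat.completions.create(
        model="local-model",  # placeholder; many local servers ignore this field
        messages=[{"role": "user", "content": "Hello from the local server!"}],
    )
    print(response.choices[0].message.content)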

Oct 15, 2024

v1.8

We are excited to announce the release of Sanctum v1.8, featuring new tools and improvements!

Toolbox in Models Section

Explore our newly added Toolbox section under Models. It supports auxiliary models that work alongside your LLMs.

  • Currently available: Files embedding.
  • Coming soon: Image to text extraction, Image object detection, and Audio transcription.

In-Chat Search

You can now search directly within your chats using the new in-chat search feature. Use Cmd+F on macOS or Ctrl+F on Windows.

Frontend Refactoring

We’ve migrated the application’s frontend to a new framework, making the user interface significantly faster and more responsive.

Bug Fixes

Multiple minor bugs have been addressed to improve stability and user experience.

Sep 4, 2024

v1.7

We are pleased to announce the release of Sanctum v1.7, featuring several technical improvements and new RAG capabilities.

File summarization (beta)

Summaries can now be generated for most supported file types. To use this feature, attach a single file to your request and either leave the prompt empty or write a summarization prompt (e.g., “Summarize this file”).

Postgres migration

We have transitioned from the FAISS vector database to Postgres, which offers greater flexibility and supports hybrid search, improving result quality for file embeddings.
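
For a sense of what hybrid search adds: one query can blend vector similarity with Postgres full-text ranking. A minimal sketch, assuming the pgvector extension and an illustrative chunks(content, embedding) table with hypothetical 0.7/0.3 weights, not Sanctum's actual schema:

    import psycopg  # psycopg 3; assumes a reachable Postgres with the pgvector extension

    # Blend vector similarity (pgvector's <=> cosine-distance operator) with
    # full-text rank. Table, columns, and the 0.7/0.3 weights are illustrative.
    HYBRID_SEARCH = """
        SELECT id, content
        FROM chunks
        ORDER BY 0.7 * (1 - (embedding <=> %(qvec)s::vector))
               + 0.3 * ts_rank(to_tsvector('english', content),
                               plainto_tsquery('english', %(qtext)s)) DESC
        LIMIT 5;
    """

    query_embedding = [0.0] * 384  # stand-in; use your embedding model's output
    qvec = "[" + ",".join(map(str, query_embedding)) + "]"  # pgvector literal format

    with psycopg.connect("dbname=sanctum") as conn:
        rows = conn.execute(HYBRID_SEARCH, {"qvec": qvec, "qtext": "project deadline"}).fetchall()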

RAG framework migration

We’ve migrated from LangChain to LlamaIndex, which provides improved data indexing, search, and retrieval performance.
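
For context, the kind of pipeline LlamaIndex provides looks like the library's quick-start pattern below; this illustrates the framework, not Sanctum's internal code:

    from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

    # Quick-start pattern from the library docs; by default LlamaIndex calls
    # OpenAI models, so configure llama_index.core.Settings to use local ones.
    documents = SimpleDirectoryReader("./files").load_data()  # read a folder of files
    index = VectorStoreIndex.from_documents(documents)        # chunk, embed, index
    print(index.as_query_engine().query("What are these files about?"))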

Other updates
  • Upgraded llama.cpp to the b3652 release.

  • Upgraded Python to v3.12.

Aug 5, 2024

v1.6.1

Today we're excited to release Sanctum v1.6.1 with a new UI and a set of new features for the dev community.

New UI

With simplified navigation and an updated interface, Sanctum is easier to use and makes room for even more features to come.

Local API server

Powered by llama.cpp, our UI lets you run a local server with any GGUF model in seconds. Customize ports and server parameters, and monitor resources with ease.
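
As an illustration, once a server is running you can query llama.cpp's native completion endpoint directly; the host and port below are placeholders for whatever you configured:

    import requests

    # llama.cpp's native completion endpoint; adjust host/port to your server settings.
    resp = requests.post(
        "http://127.0.0.1:8080/completion",
        json={"prompt": "The capital of France is", "n_predict": 16},
    )
    print(resp.json()["content"])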

Advanced model settings

Tweak local LLMs with custom prompt templates, hardware optimization, and inference parameters.
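
As a rough sketch of what a custom prompt template controls, here is the well-known Llama 2 chat format rendered in Python; Sanctum's actual presets may differ:

    # Llama 2's chat format, shown as an example of what a template controls;
    # other model families substitute their own wrapping tokens.
    TEMPLATE = "<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{prompt} [/INST]"

    print(TEMPLATE.format(
        system="You are a helpful assistant.",
        prompt="Explain GGUF in one sentence.",
    ))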

Bug fixes

Resolved an issue where some models’ recommendation tags loaded indefinitely; an “unknown” tag is now shown instead.

Jun 14, 2024

v1.5.1

Resolved an issue that prevented Sanctum from running properly on Intel-based Macs.

Jun 10, 2024

v1.5

We are excited to share a new release update with new features, improvements, and bug fixes! Here’s what’s new:

File manager (beta)

This is the first step toward our comprehensive RAG solution. View all imported files in one place, use it to organize your workspace, and start chatting with files in a single click.

Message editing

Want to make edits to your prompt? Use the edit icon and get a new answer from the model.

App Improvements
  • Increased model context size in the model settings sidebar (e.g., Llama 3 8B now has a maximum context size of 8192 tokens enabled by default).

  • Implemented caching and performance improvements to allow for much faster interactions and search on the Models page.

  • Added support for IBM Granite Code model preset & improved Mistral model preset.

Bug fixes
  • Fixed a bug with missing line breaks after submitting a message.

  • Fixed a bug where long chat names overlapped with the model dropdown on smaller resolutions.

  • Fixed an issue with generating chat titles.

  • Tweaked the UI to avoid duplicating the left sidebar icon.

May 20, 2024

v1.4.5

Added support for deep links, so Sanctum can be opened directly from HuggingFace.

May 14, 2024

v1.4.4

Updated llama.cpp to support pre-tokenizers in new models.

Resolved a bug that caused system prompt words (e.g., <context>) to occasionally appear in AI responses.

May 7, 2024

v1.4.3

Fixed a bug where a chat failed to process subsequent PDF attachments after an initial OCR failure.

Added detailed logging to assist with troubleshooting application runtime issues.

Apr 30, 2024

v1.4.2

Fixed a bug that caused the app to crash on Windows 10.

Apr 29, 2024

v1.4.1

Added support for Microsoft's Phi-3-Mini model.

Added Account ID to Settings page.

Fixed bugs and made improvements.

Apr 23, 2024

v1.4

HuggingFace integration

Access any open-source .GGUF LLM on HuggingFace directly from the Sanctum app, whatever your specific use case.
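
Under the hood, fetching a GGUF file from HuggingFace boils down to something like the sketch below, shown with the huggingface_hub package; the repo and file are examples, not a Sanctum default:

    from huggingface_hub import hf_hub_download

    # Download a specific .GGUF quantization from a HuggingFace repo.
    path = hf_hub_download(
        repo_id="TheBloke/Llama-2-7B-Chat-GGUF",
        filename="llama-2-7b-chat.Q4_K_M.gguf",
    )
    print(path)  # local cache path of the model file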

Llama 3 support

Download and run the latest Llama 3 8B model from Meta.

Hardware compatibility check

Quickly check whether your system's video memory meets the requirements to run specific AI models efficiently.

Improved “New Chat” interface

The new chat view allows for immediate adjustments to chat settings. Switch between model presets, modify hardware settings, and set the context length, all from one place.

Enhancements and bug fixes

Mar 5, 2024

v1.3.2

Bug fixes and improvements:
  • Improved the CUDA installation process on Windows - no need to restart the app after installation.

  • Added an available GPU memory indicator on macOS & Windows.

  • Fixed an issue with running Sanctum on Intel-based Macs.

  • Improved error handling on Windows & macOS.

  • Minor bug fixes & improvements.

Feb 22, 2024

v1.3.1

Bug fixes and improvements:
  • Added support for Google's Gemma 7B model.

  • Tuned the default GPU settings for Windows and macOS.

  • Updated the color scheme of the chat scrollbar.

Feb 20, 2024

v1.3.0

Sanctum v1.3.0 now supports Windows. Here’s what’s new:

Windows ready

Download and run Sanctum on Windows using CPU or GPU. Supports Nvidia CUDA 11 and 12 (support for AMD GPUs coming next).

PDF chat viewer

Seamlessly view PDFs in the Sanctum UI and see which highlighted passages your answers are drawn from.

Expanded file support

Added support for a variety of formats like .docx, .pptx, .js, .html, .css, and more!

Editable chat names

Rename chats directly from the sidebar or chat header.

In-chat file management

Easily unlink files from the chat to refine the model's response sources.

Sidebar flexibility

Toggle the sidebar for an uncluttered, focused chat view.

Jan 10, 2024

v1.2.1

In this release, we have made the following updates and improvements:

File drag & drop

Enhanced file interaction: Seamlessly drag and drop files directly into Sanctum for a more intuitive user experience.

Device usage information
  • Optimized layout: Relocated Memory Usage information from the left sidebar to beneath the input area for better visibility.

  • New addition: Introduced a CPU Load metric for real-time performance monitoring.

Model additions

Added support for Microsoft's Phi-2 model.

Bug fixes & general improvements

Various bug fixes and improvements to enhance the stability and performance of Sanctum.

Dec 21, 2023

v1.2.0

Sanctum leaps forward with a major upgrade 🚀

Introducing Sanctum Pro, a powerful enhancement to your Sanctum experience, packed with new and exciting features:

  • Secure, 100% Private PDF Chatting: Now chat, ask questions, and summarize PDF files in a secure and completely private environment.
  • Advanced Sanctum Vault Search: Effortlessly search through your encrypted chat history.
  • Early Access to New Features: Get exclusive early access to upcoming features and model updates.

Everyone on the Base plan will continue to enjoy all the essentials needed for private chatting.

We have also implemented additional updates and improvements, including a vital new feature:

Account Recovery: Secure your account with a unique 24-word recovery phrase. Use it to regain access if you forget your password or migrate to a new device.
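
For the curious: 24-word phrases are commonly generated BIP-39-style from 256 bits of entropy. A minimal sketch with the Python mnemonic package, illustrating the general scheme rather than Sanctum's exact implementation:

    from mnemonic import Mnemonic  # pip install mnemonic

    m = Mnemonic("english")
    phrase = m.generate(strength=256)  # 256 bits of entropy -> 24 words
    seed = m.to_seed(phrase)           # key material derived from the phrase
    assert len(phrase.split()) == 24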

Dec 15, 2023

v1.1.2

Added support for Mixtral-8x7B (32GB+ memory recommended):

  • Mixtral-8x7B-Low-Specs (Q3_K_M)
  • Mixtral-8x7B-Low-Specs (Q4_K_M)
  • Mixtral-8x7B-Low-Specs (Q6_K)

Nov 20, 2023

v1.1.1

This update improves app performance by up to 5x and introduces new features and minor tweaks.

  • Speed Boost: Our switch from the .ggml to the .gguf model format supercharges the app and expands model compatibility.
  • Model Additions: Welcoming Mistral-7B, TinyLlama & Llama-2-13B.
  • Bookmarks: Save important chat messages, ensuring they're always within reach.
  • Enhanced Navigation: Seamless transitions with our new “Go back” and “Go forward” buttons.
  • Chat Grouping: Added “Recent” and “Older” groups for your chat history.
  • Model Icons: Easily identify models with distinct icons in the "New chat" dropdown and chat headers.

Oct 10, 2023

v1.0.0

Initial release.

Notable features include:

  • Chat with AI without the internet.
  • Launch Llama-2 7B LLM locally.
  • Maintain control of all your data.
  • Lock your data in Sanctum Vault, an encrypted local database.