llama.cpp webserver-based API + RPBot

Uses llama.cpp's embedded webserver API for easier portability.
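
For context, llama.cpp's embedded server exposes an HTTP completion endpoint (POST /completion) that accepts a JSON body. A minimal request, with illustrative values, looks roughly like this (see the llama.cpp server documentation for the full parameter list):

{
	"prompt": "Transcript of a group chat, ...",
	"n_predict": 128,
	"temperature": 0.8,
	"stop": ["\n"]
}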

LlamaCppWeb config options

  • LlamaCppWeb, server
    • type: string
    • host and port of the llama.cpp server to connect to
    • default: localhost:8080
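
For example, to point the bot at a llama.cpp server on another machine (the address below is a placeholder):

{
	"LlamaCppWeb": {
		"server": "192.168.1.50:8080"
	}
}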

RPBot config options

TODO: move RPBot to its own repo

  • RPBot, system_prompt(, opt contact ID)

    • type: string
    • System prompt that is prefixed to the chat transcript
    • can contain the following formatters (see the expansion example after this list):
      • {self_name}: the bot's username for the specified chat
      • {online_users}: comma-separated list of online users
    • default: Transcript of a group chat, where {self_name} talks to online strangers.\n{self_name} is creative and curious. {self_name} is writing with precision, but also with occasional typos.\n
  • RPBot, min_messages(, opt contact ID)

    • type: int
    • TODO: implement per ID
    • default: 4
  • RPBot, max_cont_messages(, opt contact ID)

    • type: int
    • TODO: implement per ID
    • default: 4
  • RPBot, max_interactive_delay(, opt contact ID)

    • type: float
    • TODO: implement per ID
    • default: 4.0
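
As an illustration of the formatters (names below are placeholders), with {self_name} set to "Bob" the default system prompt expands to:

Transcript of a group chat, where Bob talks to online strangers.
Bob is creative and curious. Bob is writing with precision, but also with occasional typos.

A prompt containing {online_users} would likewise have that formatter replaced with a comma-separated list such as "Alice, Eve".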

example config

{
	"LlamaCppWeb": {
		"server": "localhost:8080"
	},
	"RPBot": {
		"min_messages": 1,
		"system_prompt": "A group chat with an artificial intelligence assistant called {self_name}. The assistant gives helpful and detailed answers to the chat participants' questions. The assistant has its own opinions. {self_name} only ever answers when explicitly asked.\n"
	}
}
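
Here min_messages overrides its default of 4, and system_prompt replaces the default roleplay-style prompt with an assistant-style one; the remaining RPBot options keep their defaults.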