Limitations of Owlracle

  • Owlracle, like many retrieval-augmented LLMs, will not perform as well as an LLM that has been pre-trained on the underlying data. One known phenomenon is that as the number of databases grows, retrieval quality can deteriorate. We address this with targeted prompt engineering and by limiting the number of databases to five (Courses, Events, Clubs, Faculties, FAQ); see the first sketch after this list.
  • Similarly, Owlracle’s reasoning capabilities are bottlenecked by retrieval context length and retrieval quality (the second sketch after this list shows the token-budget trimming involved). Hypothetically, if the retrieved data were internalized by the LLM during pre-training, Owlracle could answer questions that involve complex planning by drawing on that knowledge directly. We are working on this problem by fine-tuning Llama-2 (join us here), an open-source LLM that is equally capable for many of our tasks.
  • Owlracle is powered by GPT-3.5-turbo and has no access to the internet, so strictly speaking it does not know about the latest resources or news. We alleviate this by involving a second, internet-connected LLM service, Perplexity AI: every night it gathers the latest information and adds it to the vector DB (see the third sketch after this list).
  • An LLM alone will not recommend resources unless the user explicitly asks for them. To support resource discovery, however, Owlracle needs to proactively recommend resources to students in need, which means identifying what the user needs and which resources are relevant. Our current recommendation system is the simplest possible one: randomized. We are working on a recommender based on each user's profile and interests, so that useful resources are actually seen (see the last sketch after this list).
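
First sketch: one way to keep retrieval quality manageable with only five databases is to route each question to a single collection before searching it. The sketch below is illustrative, not our production code: it assumes the OpenAI Python SDK (v1), and `vector_db.search` stands in for whatever vector-store client is actually used.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The five databases named above.
COLLECTIONS = ["Courses", "Events", "Clubs", "Faculties", "FAQ"]

ROUTER_PROMPT = (
    "You route student questions to exactly one knowledge base.\n"
    f"Choose one of: {', '.join(COLLECTIONS)}.\n"
    "Answer with the name only."
)

def route(question: str) -> str:
    """Ask the LLM which of the five databases is most relevant to the question."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": ROUTER_PROMPT},
            {"role": "user", "content": question},
        ],
        temperature=0,
    )
    choice = resp.choices[0].message.content.strip()
    # Fall back to FAQ if the model answers with something unexpected.
    return choice if choice in COLLECTIONS else "FAQ"

# Retrieval then only searches the chosen collection (placeholder call):
# chunks = vector_db.search(collection=route(question), query=question, k=4)
```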
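Second sketch: the context-length bottleneck is, in practice, a token-budget problem: retrieved chunks must be trimmed to fit alongside the system prompt and the user's question. Below is a minimal illustration of that packing step using `tiktoken` for token counting; the budget number is illustrative, not the value Owlracle actually uses.

```python
import tiktoken

ENC = tiktoken.encoding_for_model("gpt-3.5-turbo")

def pack_context(chunks: list[str], budget_tokens: int = 3000) -> str:
    """Keep retrieved chunks (already sorted by relevance) until the token budget is spent."""
    kept, used = [], 0
    for chunk in chunks:
        n = len(ENC.encode(chunk))
        if used + n > budget_tokens:
            break  # everything past the budget is dropped -- this is the bottleneck in question
        kept.append(chunk)
        used += n
    return "\n\n".join(kept)
```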
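Third sketch: the nightly refresh is conceptually a scheduled job (e.g. cron) that asks Perplexity AI for recent campus-related updates and upserts the results into the vector DB. Everything here is a placeholder: `fetch_latest_updates` stands in for the actual Perplexity query, and `embed` / `vector_db.upsert` for whatever embedding model and vector-store client are used.

```python
import hashlib
from datetime import date

def nightly_refresh(vector_db, embed, fetch_latest_updates):
    """Hypothetical nightly job: pull fresh items via Perplexity AI and upsert them.

    `fetch_latest_updates` wraps the Perplexity query, `embed` is the embedding
    model, and `vector_db` is the vector-store client -- all three are stand-ins
    for the real integrations.
    """
    for item in fetch_latest_updates(since=date.today()):
        # Stable ID derived from the source URL, so reruns overwrite instead of duplicating.
        doc_id = hashlib.sha1(item["url"].encode()).hexdigest()
        vector_db.upsert(
            collection="Events",  # or whichever of the five databases the item belongs to
            id=doc_id,
            vector=embed(item["text"]),
            metadata={"source": item["url"], "fetched": str(date.today())},
        )
```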
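Last sketch: the current "simplest" recommender is literally a uniform random sample from the resource catalog; the planned upgrade ranks resources by overlap with the user's interests. The resource schema (`name`, `tags`) is illustrative only.

```python
import random

def recommend(resources: list[dict], user_interests: set[str] | None = None, k: int = 3) -> list[dict]:
    """Current behavior: random sample. Planned: rank by overlap with the user's interests.

    `resources` items are assumed to look like {"name": ..., "tags": {...}}; the schema is illustrative.
    """
    if not user_interests:
        # What Owlracle does today: pick k resources uniformly at random.
        return random.sample(resources, k=min(k, len(resources)))
    # Planned upgrade: score each resource by how many interest tags it shares with the user.
    scored = sorted(resources, key=lambda r: len(r.get("tags", set()) & user_interests), reverse=True)
    return scored[:k]
```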