As the world gradually uncovers the potential of artificial intelligence, language models have become a central focus of scientific research. While models such as GPT, Claude, and Mistral can perform a wide variety of tasks, they are not without flaws, and many studies continue to explore ways to enhance their capabilities.
In a recent study, the SEMIC team explored domain adaptation, investigating whether retraining Large Language Models (LLMs) on public sector data could improve AI performance on public sector-related tasks.