Nairobi News


UlizaLlama: Soon AI will converse with you in Kiswahili


When the Nigerian government announced plans in April to develop a multilingual AI tool to boost digital inclusion across the West African nation, 28-year-old computer science student Lwasinam Lenham Dilli was thrilled.

Dilli had struggled to scrape datasets from the internet to build a large language model (LLM), used to power AI chatbots, in his native Hausa language as part of his final-year project at university. “(Creating local language LLMs) is a way to ensure that our local dialects and languages will not be forgotten or left out of the AI ecosystem,” Dilli told the Thomson Reuters Foundation.

The world has been swept up in a whirlwind of AI mania, with tools such as OpenAI’s ChatGPT, Meta’s Llama 2, and models from Mistral AI captivating millions globally with their ability to generate human-like text.

But for many tech-savvy Africans, the excitement has been tempered by a frustrating reality: when languages like Hausa, Amharic, or Kinyarwanda are entered into the chat, many of these advanced systems falter, often producing nonsensical responses. Technology experts warn that the lack of LLMs in African languages will exclude millions of people on the continent, widening both the digital and the economic divide.

The Nigerian government-led initiative to develop a multilingual LLM aims to level the playing field. “The LLM will be trained on five low-resource languages and accented English to ensure stronger language representation … for development of artificial intelligence solutions,” said Nigeria’s Digital Economy Minister Bosun Tijani in April.

The government will partner with Nigerian AI startups, and local data will be collected by volunteers who are fluent in any of the five target Nigerian languages: Yoruba, Hausa, Igbo, Ibibio, and Pidgin, the West African lingua franca. To build the model, the project will also draw on the expertise of more than 7,000 fellows from Nigeria’s tech talent programme.

Africa is home to more than 2,000 languages spoken across 54 countries, according to the United Nations Educational, Scientific and Cultural Organisation. However, the majority of African languages remain underrepresented on the internet.

English dominates the digital space, accounting for around 50 percent of all websites, followed by Spanish, German, Japanese, and French.

Alongside the Nigerian government initiative, a small but growing number of African startups are rising to the challenge of developing AI tools in languages like Kiswahili, Amharic, Zulu and Sesotho. In Kenya, for instance, health tech firm Jacaranda Health has pioneered the first LLM operating in Kiswahili to improve maternal healthcare in East Africa.

Built on Meta’s Llama 3 system, UlizaLlama (AskLlama) aims to refine Jacaranda Health’s SMS service for low-income Kiswahili-speaking expectant mothers who have queries ranging from dietary concerns and foetal movement to exercise during pregnancy.

The platform currently provides pre-written automated responses, but once UlizaLlama is integrated by the end of June, it will tailor responses to individual needs, offering more detailed pregnancy guidance and emergency support. “A lot of these expectant moms can’t do a Google search. UlizaLlama’s goal is to make sure that we get them the accurate answers in the fastest possible time,” Jay Patel, Jacaranda Health’s director of technology, told the Thomson Reuters Foundation.

In South Africa, the Masakhane initiative is using open-source machine learning to translate African languages. Lelapa AI, a South African AI research lab, has pioneered VulaVula, a for-profit language-processing tool that translates, transcribes and analyses English, Afrikaans, Zulu and Sesotho.

But AI experts say building LLMs in African languages poses significant challenges, ranging from availability of data to ethical concerns over consent, compensation and copyright. Many African languages are low-resource languages, meaning there is a scarcity of data to train these models effectively — unlike high-resource languages such as English or French.

Michael Michie, co-founder of Everse Technology Africa, an AI startup building intelligence into data protection and privacy, said collecting the data needed to train LLMs also raised ethical questions.

In many African communities, oral tradition predominates, and some communities may not wish to share their language to train LLMs, a choice that should be respected, he said. Open-source licensing initiatives like Creative Commons, which allow creators to legally share their work under specified conditions such as attribution or non-commercial use, are also not a perfect solution, some AI experts said.

“There’s this push of saying everything should just be under Creative Commons,” said Vukosi Marivate, associate professor of computer science at the University of Pretoria and co-founder of Lelapa AI. But if everything is open source, it may be harder to properly reimburse and acknowledge the original contributors to these language models, he said. “A lot of people are working on LLMs now because of the prestige, that’s where the money is, but we need to make sure that our languages are actually being taken care of.”
