
How to Build an AI Readiness Analyzer Agent on Pawa AI — A Step-by-Step Guide for the ITU Hackathon

Michael Mollel (PhD)

May 14, 2026


A step-by-step guide to building a no-code RAG agent that analyzes finance AI use cases against ITU-T Y.3172 and the ITU AI Readiness Framework using Pawa AI.

Build a no-code RAG-powered agent that analyzes AI use cases against ITU standards — in under 30 minutes.


The International Telecommunication Union (ITU) is running its first AI Readiness Hackathon in Nairobi, Kenya on May 21, 2026. The challenge is clear: build a RAG solution that can analyze AI use cases against the ITU-T Y.3172 machine learning pipeline and assess AI Readiness using the ITU framework, grounded in real policy documents.

This guide walks you through building exactly that — using Pawa AI, with zero coding required.

By the end of this tutorial, you will have a working AI agent that accepts finance-sector AI use cases, analyzes them against international standards, scores readiness across six ITU factors, and cites real East African policy documents as evidence.

Let's build it.


What You Will Build

An AI agent called AI Readiness Analyzer for Finance — East Africa that performs two core tasks:

Task 1 — Y.3172 Pipeline Analysis. When a user submits an AI use case, the agent evaluates it against the five stages of the ITU-T Y.3172 ML pipeline: Data Collection, Data Handling, ML Model Building, ML Model Deployment, and Continuous Monitoring. Each stage gets a status of Addressed, Partial, or Missing.

Task 2 — AI Readiness Assessment. The agent scores the use case against six ITU AI Readiness factors: Availability of Open Data, Access to Research, Deployment Capability and Infrastructure, Standards-enabled Stakeholder Buy-in, Developer Ecosystem and Open Source, and Sandbox and Pilot Experimentation. Each factor is scored High, Medium, or Low, with citations from real policy documents.

The output includes a structured score out of 100 with transparent calculations, actionable recommendations, and document-grounded evidence.

What You Will Need

  • A Pawa AI account (free plan works)
  • 10–15 policy documents in PDF or TXT format (we will tell you exactly which ones)
  • About 30 minutes

Step 1: Log In to Pawa Chat

Go to pawa-ai.com and click Login in the top right corner. If you do not have an account yet, click Free AI Assistant to create one.

Once logged in, you will land on the Pawa Chat interface with your conversations on the left sidebar.

Step 2: Navigate to Agents

On the left sidebar, click Agents. This opens the My Agents page where you can see any agents you have already created, along with a library of agent templates.

Click the Create an Agent button in the top right corner. This opens the Agent Configuration page.

Step 3: Fill in Basic Information

You will see three fields to fill in. Here is exactly what to enter:

Agent Name

AI Readiness Analyzer for Finance — East Africa

Brief Description

A multilingual RAG-powered agent that analyzes AI use cases in the finance domain against the ITU-T Y.3172 ML pipeline standard and assesses AI Readiness using the ITU AI Readiness Framework (v1.0 & v2.0). Grounded in East African national AI policies, fintech regulations, and financial inclusion frameworks from Tanzania, Kenya, Rwanda, and Uganda.

Agent Purpose

This is the most important field. It controls how the agent thinks, responds, and structures its output. We have engineered a comprehensive system prompt specifically for this hackathon. It includes language detection rules, structured output templates in both English and Swahili, citation enforcement, country-specific document search rules, scoring formulas, analytical guardrails, and a self-check mechanism.

Copy and paste the full Agent Purpose prompt from the companion document linked in the Resources section. The prompt is approximately 4,000 words and covers every aspect of the agent's behavior.

The key sections of the prompt are:

  • Language Rules — The agent automatically detects whether the user writes in English or Swahili and responds entirely in that language, never mixing the two.
  • Y.3172 Pipeline Analysis — Structured evaluation of five ML pipeline stages with status indicators.
  • AI Readiness Assessment — Scoring against six ITU factors using High (15 points), Medium (10 points), and Low (5 points).
  • Citation Rules — The agent must cite specific documents from the knowledge base by full title for every claim. It is explicitly forbidden from writing generic statements like "no information available" without first searching the knowledge base.
  • Country-Specific Search Rules — The agent knows which documents to look for based on whether the use case targets Tanzania, Kenya, Rwanda, or Uganda.
  • Analytical Guardrails — Rules preventing common mistakes such as confusing private data with open data, treating pilot testing as regulatory sandbox readiness, or using API deployment as evidence of data preprocessing.
  • Score Calculation — A transparent three-step formula that produces a total score out of 100.
  • Self-Check Mechanism — A 19-point checklist the agent runs internally before delivering every response.
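To make the scoring transparent, here is an illustrative sketch of how the three-step calculation might work. The point weights (High = 15, Medium = 10, Low = 5) come from the bullet above; the normalization from the 90-point maximum (6 factors × 15 points) to a score out of 100 is an assumption for illustration, not the exact formula from the companion prompt.

```python
# Illustrative sketch of the AI Readiness score calculation.
# Point weights come from the prompt description above; the normalization
# step (raw sum out of 90, scaled to 100) is an assumption, not the
# verbatim formula from the companion document.

FACTOR_POINTS = {"High": 15, "Medium": 10, "Low": 5}

def readiness_score(ratings: dict) -> float:
    """Three-step calculation: points per factor, raw sum, normalized total."""
    if len(ratings) != 6:
        raise ValueError("Expected ratings for all six ITU readiness factors")
    # Step 1: convert each factor's High/Medium/Low rating to points
    points = {factor: FACTOR_POINTS[r] for factor, r in ratings.items()}
    # Step 2: sum the raw points (maximum 6 * 15 = 90)
    raw = sum(points.values())
    # Step 3: normalize to a score out of 100
    return round(raw / 90 * 100, 1)

ratings = {
    "Availability of Open Data": "Medium",
    "Access to Research": "High",
    "Deployment Capability and Infrastructure": "Medium",
    "Standards-enabled Stakeholder Buy-in": "Low",
    "Developer Ecosystem and Open Source": "Medium",
    "Sandbox and Pilot Experimentation": "High",
}
print(readiness_score(ratings))  # 65 raw points -> 72.2
```

Whatever formula your prompt uses, showing the step-by-step arithmetic in Section C is what makes the score defensible in front of judges.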

Step 4: Set the Personality

Scroll down to the Personality and Behavior section. Click Personality to switch from Default mode.

Select these four traits:

  • Objective — The agent delivers evidence-based assessments, not opinions.
  • Direct — Gaps and issues are stated clearly without hedging.
  • Detailed — The output provides thorough breakdowns across all factors and stages.
  • Pragmatic — Recommendations are practical and actionable.

Do not select Empathetic, Gentle, Poetic, or Concise. This is a standards-compliance analysis tool, not a conversational assistant.

Step 5: Set the Starter Questions

Replace the default starter questions with these three:

Chambua use case hii ya AI katika sekta ya fedha kwa mfumo wa Y.3172

Tathmini AI Readiness ya Tanzania katika fintech

Analyze this finance AI use case for ITU standards alignment

The first two are in Swahili and the third is in English. This immediately demonstrates the bilingual capability of the agent when anyone opens it.
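For intuition, language detection of this kind can be approximated with a simple keyword heuristic. This is a hypothetical sketch only: the real agent detects language through its system prompt instructions, not through code, and the marker word lists below are illustrative.

```python
# Hypothetical heuristic, for intuition only: the actual agent handles
# language detection via its system prompt. This sketch flags a message as
# Swahili when it contains more common Swahili words than English ones.

SWAHILI_MARKERS = {"ya", "wa", "na", "kwa", "hii", "mfumo", "katika",
                   "tathmini", "chambua"}
ENGLISH_MARKERS = {"the", "this", "and", "for", "analyze", "assess",
                   "using", "of"}

def detect_language(text: str) -> str:
    words = set(text.lower().split())
    swahili_hits = len(words & SWAHILI_MARKERS)
    english_hits = len(words & ENGLISH_MARKERS)
    return "Swahili" if swahili_hits > english_hits else "English"

print(detect_language("Tathmini AI Readiness ya Tanzania katika fintech"))
# Swahili
print(detect_language("Analyze this finance AI use case for ITU standards"))
# English
```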

Step 6: Build the Knowledge Base

This is the most critical step. Without documents in the knowledge base, the agent cannot cite real evidence and will produce generic responses.

Click on the knowledge base section and upload your policy documents. You can upload PDF and TXT files, up to 15 documents.

Here are the priority documents to collect and upload, organized by country:

ITU Reference Documents (Required)


  1. ITU AI Readiness Report v1.0 (2024) — This defines the six key readiness factors. Download from the ITU publications page.
  2. ITU-T Y.3172 context materials — The AI for Good article explaining the Y.3172 ML pipeline framework.

Tanzania

  1. Tanzania Digital Economy Strategic Framework 2024–2034 — Available from the ICTC Tanzania website.
  2. Bank of Tanzania Fintech Regulatory Sandbox Regulations 2024 — Available from the Bank of Tanzania publications page.
  3. Tanzania National Financial Inclusion Framework 2023–2028 — Available from the Bank of Tanzania website.
  4. Tanzania AI Ethical Use Guidelines (MICIT 2025) — Available from the Ministry of Communication and Information Technology website.

Kenya

  1. Kenya AI Strategy 2025–2030 — Available from the Ministry of ICT website (ict.go.ke).
  2. Kenya Digital Economy Blueprint — Available from Smart Africa.
  3. CBK Discussion Paper on Central Bank Digital Currency — Available from the Central Bank of Kenya website.

Rwanda

  1. Rwanda National AI Policy 2023 — Available from the Ministry of ICT and Innovation (MINICT) website.

Uganda

  1. Uganda National 4IR Strategy — Executive summary available from the National Information Technology Authority (NITA-U) website.

Regional

  1. AU Digital Transformation Strategy 2020–2030 — Available from the African Union website.

After uploading, wait for the knowledge base to finish indexing before testing.

Step 7: Create the Agent

Click the Create button. Your agent is now live.

Step 8: Test Your Agent

Start a conversation with your agent using one of these test use cases:

Test 1 — Tanzania Fraud Detection (English)

We want to build an AI system that uses mobile money transaction data from M-Pesa and Tigo Pesa to detect fraudulent transactions in Tanzanian banks. The system will use machine learning to analyze user behavior patterns and provide real-time alerts. Analyze this use case using the Y.3172 ML pipeline framework and assess Tanzania's AI Readiness for this project.

Test 2 — Kenya Credit Scoring (English)

We are planning to build an AI credit scoring system for unbanked populations in Kenya. The system will use alternative data from M-Pesa transaction history, KPLC electricity bills, and Safaricom mobile usage data to assess a person's creditworthiness. The goal is to enable microfinance institutions to issue small loans to small entrepreneurs without requiring traditional collateral. Analyze this use case using the Y.3172 framework and assess Kenya's AI Readiness for this project.

Test 3 — Tanzania Agricultural Insurance (Swahili)

Tunapanga kujenga mfumo wa AI wa bima ya kilimo kwa wakulima wadogo nchini Tanzania. Mfumo huu utatumia data ya hali ya hewa kutoka TMA, data ya mazao kutoka Wizara ya Kilimo, na data ya malipo ya M-Pesa na Tigo Pesa kuwezesha bima ya kiotomatiki. Mkulima atajiandikisha kupitia USSD, na mfumo utatumia ML kutabiri hatari ya ukame au mafuriko, na kisha kulipa fidia moja kwa moja kwenye akaunti ya mobile money. Mradi unalenga mikoa ya Dodoma, Singida, na Morogoro. Chambua use case hii kwa mfumo wa Y.3172 na tathmini AI Readiness ya Tanzania.

Test 4 — Rwanda Credit Risk (English)

Rwanda is planning to build an AI credit risk management system for small banks and financial institutions. The system will use transaction data from MTN Mobile Money and Airtel Money, along with FinScope Rwanda data and Rwanda Credit Reference Bureau records to assess loan repayment likelihood. The system will use gradient boosted trees and neural networks, deployed via API into core banking systems with monthly monitoring for model drift and bias. Analyze this use case using the Y.3172 framework and assess Rwanda's AI Readiness for this project.

What to Check in Each Response

When you receive a response, verify the following:

  • Section A appears with all five Y.3172 stages, each with a status indicator
  • Section B appears with all six AI Readiness factors in a table, each with a score and a cited document
  • Section C appears with step-by-step score calculation that adds up correctly
  • The agent cites documents by their full title, not generic statements
  • The agent uses documents from the correct country
  • The entire response is in one language matching your input
  • The total score is different for different use cases (not always the same number)

If citations are missing or the agent produces generic responses, review your knowledge base uploads and ensure the documents have been properly indexed.
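The manual checks above can be partially automated when you are running many test cases. A minimal sketch follows; the section header strings and score pattern are assumptions based on the output structure this guide describes, so adjust them to match your agent's actual template.

```python
import re

# Minimal sketch of automated response checks. The header names
# ("Section A", etc.) and the "NN/100" score pattern are assumptions
# based on the output template described in this guide.

def check_response(text: str) -> dict:
    return {
        "has_section_a": "Section A" in text,
        "has_section_b": "Section B" in text,
        "has_section_c": "Section C" in text,
        "has_score": bool(re.search(r"\b\d{1,3}\s*/\s*100\b", text)),
        "no_generic_fallback": "no information available" not in text.lower(),
    }

sample = "Section A ... Section B ... Section C ... Total Score: 72/100"
results = check_response(sample)
print(all(results.values()))  # True
```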

Step 9: Iterate and Improve

Based on your test results, you may need to:

  • Add more documents to the knowledge base if citations are weak for a specific country.
  • Adjust the Agent Purpose if the output structure does not match expectations.
  • Test in both languages to confirm the language detection works correctly.
  • Try edge cases such as non-finance use cases (the agent should politely decline) or use cases with very detailed descriptions (the score should be higher).

Preparing for the Hackathon Showcase

On May 21 at KICC Nairobi, you will demo your agent live. Here is how to prepare:

Prepare a 3–5 minute demo script. Start with a brief introduction of your agent and the problem it solves. Then run one use case live, walking judges through Section A, Section B, and Section C of the output. Highlight the document citations as evidence of RAG retrieval. End with the readiness score and recommendations.

Prepare your knowledge base contribution. The hackathon requires you to submit a curated list of all policy documents you used, with reference links. Format this as a clean table with document title, source URL, country, and domain tags.
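If you track your documents in a spreadsheet or script, generating the required table is straightforward. A sketch under assumptions: the column set follows the hackathon requirement described above, while the example rows reuse Step 6 documents with placeholder URLs (substitute the real source links you collected).

```python
# Sketch: emitting the knowledge base contribution as a Markdown table.
# Columns follow the hackathon requirement (title, source URL, country,
# domain tags); the URLs below are placeholders, not real download links.

docs = [
    ("Tanzania Digital Economy Strategic Framework 2024-2034",
     "https://example.org/tz-digital-economy", "Tanzania", "digital economy; AI"),
    ("Kenya AI Strategy 2025-2030",
     "https://example.org/ke-ai-strategy", "Kenya", "AI policy; fintech"),
]

rows = ["| Document Title | Source URL | Country | Domain Tags |",
        "|---|---|---|---|"]
for title, url, country, tags in docs:
    rows.append(f"| {title} | {url} | {country} | {tags} |")
print("\n".join(rows))
```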

Know the evaluation criteria. Judges score on four dimensions: clear description of the use case with respect to Y.3172, clear mapping to AI Readiness factors, contribution to the knowledge base, and input to AI strategies and policies.

Why Pawa AI for This Hackathon

This entire agent was built without writing a single line of code. The agent configuration, knowledge base, structured output, bilingual support, citation enforcement, and scoring system are all achieved through the Pawa AI no-code agent builder.

This matters because the hackathon is about AI Readiness — and demonstrating that African-built, accessible tools can deliver standards-compliant analysis is itself a powerful statement about the readiness of the East African AI ecosystem.

The agent was built by PAWA-AI, an African AI and software engineering company, using Pawa models, a no-code AI platform. Both the tool and the builder are African — proving that the infrastructure for AI deployment in the region already exists.

Resources

  • Full Agent Purpose Prompt — Available in the companion document. Copy and paste the entire text into the Agent Purpose field.
  • Document Collection Guide — A complete list of 46 policy documents with download links, organized by country and priority order.
  • ITU AI Readiness Hackathon Page — aiforgood.itu.int
  • Pawa AI — pawa-ai.com
  • Agent — AI Readiness Analyzer for Finance

Built with Pawa AI
