This project explores how the public understands, uses, and thinks about artificial intelligence, with the aim of improving AI education and public engagement.
Through a survey of the general public, the project examines attitudes towards current AI technologies, everyday usage patterns, and perceived gaps in existing approaches to AI education. It also captures public views on what effective AI education should look like in the future.
Insights from this research are used to inform the development of accessible educational resources that respond directly to public needs. These resources are designed to support engagement with a wide range of audiences, including citizens, educators, policymakers, and industry stakeholders, and to sit alongside related initiatives within the wider programme.
The project contributes to a broader effort to strengthen AI literacy and enable more informed public discussion around the role of artificial intelligence in society.
The UK Government has committed itself to an ambitious national AI strategy, rightly acknowledging the potential benefits of AI to citizens and to society as a whole. To support this transition, FractalAI is working with partner universities on a project to build AI literacy amongst the UK public.
For the UK to remain competitive in the AI space, we have to treat AI literacy like standard literacy. Our work investigates how parents can influence their children’s development from the earliest stages of life: by providing parents with the resources they need to teach their children AI literacy, we are helping to build an AI-literate citizen body of both children and adults. We are committed to improving these resources over time through active engagement with parents and academics, all with the aim of supporting the UK’s AI transition.
The outcome of this work will be a published paper and a resource website on FractalAI.org.uk containing tailored resources and links.
FractalAI is committed to enabling citizen literacy in the digital age.
As an output of our citizen literacy project, we are curating the set of resources below for those looking to learn the basics of AI. Whether you are a student, a parent or a young person, you will find a resource to suit you. If you have any requests, please email us and we will add an appropriate resource.
We will shortly have a set of free online courses to complement this offering.
We have regular webinars which you can register for from our homepage.
This resource bank has been compiled by Maryam Alka as part of the Citizen Literacy Project, following discussions with Murray Webster and in line with the project requirements.
These are listed first, in line with the direction to prioritise UK and Irish government and public-sector material.
No. | Resource | Audience | Description | Why it is suitable | Link |
1 | Generative artificial intelligence (AI) in education | Schools, colleges, parents, policy audiences | Department for Education policy paper on how generative AI may be used in education, with legal, safeguarding, copyright and assessment considerations. | Strong starting point because it is the core current England policy document and explicitly notes schools may need guidance for students, pupils and parents. | |
2 | Using technology in education | School leaders, teachers | GOV.UK collection bringing together technology and AI guidance for education settings. | Useful hub page because it lets website users browse official guidance without needing to know document titles in advance. | https://www.gov.uk/government/collections/using-technology-in-education |
3 | Using AI in education settings: support materials | School and college staff | DfE support-materials collection for schools and colleges using AI safely and effectively. | Suitable because it translates high-level policy into practical implementation materials. | https://www.gov.uk/government/collections/using-ai-in-education-settings-support-materials |
4 | Interacting with generative AI in education: module 2 | School and college staff | DfE training module on practical interaction with generative AI in education settings. | Helpful for a website because it gives usable professional-development content, not only policy statements. | https://www.gov.uk/government/publications/interacting-with-generative-ai-in-education-module-2 |
5 | Safe use of generative AI in education: module 3 | School and college staff | DfE module focused on risk, mitigation, data protection and safe deployment of AI tools. | Important because safety and responsible use are central concerns for parents and educators. | https://www.gov.uk/government/publications/safe-use-of-generative-ai-in-education-module-3 |
6 | AI in schools and colleges: what you need to know | Parents, teachers, general public | Plain-language government explainer about what AI use in schools looks like and what rules schools can set. | Suitable for a public-facing resource page because it is easier to read than policy papers. | |
7 | A Safe, Informed Digital Nation | Parents, educators, youth workers, policy audiences | UK government media-literacy policy paper covering children, online safety and the impact of new technologies including AI. | Directly relevant to citizen literacy because it links AI change with media literacy and safer digital participation. | |
8 | Growing up in the online world: a national conversation | Parents, carers, teachers, youth-sector audiences | 2026 UK consultation on children’s digital lives, online experiences and support for families. | Useful because it shows the current policy conversation around digital childhood, family support and online harms. | |
9 | Draft Statement of Strategic Priorities for Online Safety | Policy audiences, educators, general public | UK government statement setting priorities around online safety, resilience and informed participation online. | Suitable because citizen AI literacy overlaps strongly with misinformation, resilience and online-harms literacy. | |
10 | Digital Inclusion Action Plan: First Steps | General public, community educators, policy audiences | UK digital-inclusion plan stating that essential participation skills increasingly include AI, media and data literacy. | Important because it explicitly frames AI literacy as part of broad civic participation, not just specialist computing. | |
11 | Principles of AI use in marking | Assessment leads, teachers, policy audiences | Ofqual working paper exploring principles and implications of AI in high-stakes marking. | Useful for the website because public concern often focuses on fairness, transparency and accountability in educational AI. | |
12 | Generative artificial intelligence in education – Hwb | Schools and settings in Wales | Welsh government/Hwb guidance on opportunities, risks and compliance issues in AI use for schools. | Important devolved-nation resource so the project does not look England-only. | |
13 | Generative AI: keeping learners safe online – Hwb | Schools, safeguarding leads, parents | Wales-focused safeguarding guidance on generative AI and learner safety online. | Strong fit with parental concerns about online harms and safe use. | https://hwb.gov.wales/keeping-safe-online/generative-ai/generative-ai-keeping-learners-safe-online/ |
14 | Artificial intelligence: practical recommendations for education settings in Wales | Education settings in Wales | Practical recommendations adapted for schools on safe, ethical and purposeful AI use. | Useful because it converts general principles into concrete school actions. | |
15 | New generative AI policy template for schools – Hwb article | School leaders and governors | Welsh update pointing schools to a policy template for generative AI. | Suitable because many users need templates they can adopt, not just concept papers. | https://hwb.gov.wales/news/articles/47638187-de18-4d4f-be19-6c99ec7837a3 |
16 | Digital in education – Hwb | Teachers and school leaders in Wales | Hwb digital-education area covering AI in schools and wider digital-learning resources. | Useful as a broad Welsh hub page that can anchor several linked resources. | https://hwb.gov.wales/school-improvement-and-leadership/digital-in-education/ |
17 | AI and Education Advice Paper – February 2025 | Irish education and policy audiences | Irish AI Advisory Council paper focused on AI and education, including access, language and implementation concerns. | Key Irish policy-facing resource and directly relevant to Murray Webster’s request to start with UK and Irish government materials. | |
18 | Minister welcomes new national guidance on the use of AI in schools | Irish schools, parents, public audiences | Irish Department of Education announcement on national guidance for AI in schools. | Useful concise entry point for users who want the policy update before reading longer documents. | |
19 | Digital Strategy for Schools to 2027 | Irish schools, educators, policy audiences | Ireland’s digital strategy for schools, signposting AI in Schools Hub, Scoilnet and digital-learning supports. | Important because it situates AI education within a broader national digital-learning strategy. | https://www.gov.ie/en/department-of-education/publications/digital-strategy-for-schools-to-2027/ |
20 | Guidance on Artificial Intelligence in Schools | Irish teachers and school leaders | Oide guidance page signposting national guidance and implementation resources for schools. | Suitable because Oide is a practical support body used directly by schools. | |
21 | AI for Schools – New online course | Irish teachers | Oide course introducing AI, its benefits, risks and ethics in school contexts. | Good practical professional-learning item for teachers who want a structured starting point. | https://oide.ie/digital-tech-news/ai-for-schools-new-online-course/ |
22 | Artificial Intelligence – 5 Considerations for Teacher Use | Teachers | Irish school resource prompting teachers to check evidence of impact, accuracy, bias, data use and school policy fit before using AI. | Highly suitable because it is concise, practical and easy to turn into a website summary card. | https://oide.ie/wp-content/uploads/2025/01/AI_5-Considerations-for-Teacher-Use_EV.pdf |
This section has been checked against the uploaded draft article so that the resource bank reflects the manuscript faithfully. Where a claim depends on a source that is currently easiest to access as a preprint, repository record or online-first version, this is stated openly in the table rather than hidden.
No. | Cited source | Original claim in draft | Why the match is accurate | Link / version checked |
23 | What Is AI Literacy? Competencies and Design Considerations (Long & Magerko, 2020) | Used in the draft to define AI literacy as a competency set and to justify educational design principles. | Matches the draft exactly: the paper defines AI literacy competencies and includes design considerations, including gradual disclosure and developmental fit. | |
24 | Family as a Third Space for AI Literacies (Druga et al., 2022) | Used in the draft to support parent-child AI learning and the claim that prior parental AI expertise is not required for meaningful joint learning. | Matches the manuscript claim about family learning space and parent roles in children’s AI literacy. | |
25 | How New Technology Influences Parent-child Interaction (Korat & Or, 2010) | Used in the draft to support the argument that educational e-books can stimulate productive parent-child interaction. | Matches the draft claim that educational e-books elicited more discussion than commercial e-books and informs the argument for carefully designed educational resources. | |
26 | Artificial Intelligence (AI) in early childhood education: Curriculum design and future directions (Su & Zhong, 2022) | Used in the draft to support the three-competency framework for early-childhood AI curriculum: knowledge, skill and attitude. | Matches the draft’s summary of the curriculum-design paper and its recommendation structure. | |
27 | Artificial Intelligence (AI) Literacy in Early Childhood Education: The Challenges and Opportunities (Su et al., 2023) | Used in the draft to support claims about teacher confidence, curriculum-design challenges and the need for clearer guidance in AI literacy. | Matches the draft wording about lack of AI-literate teaching staff, curriculum design challenges and a lack of teaching guidelines. | https://www.sciencedirect.com/science/article/pii/S2666920X23000036 |
28 | Artificial intelligence education for young children (Yang, 2022) | Used in the draft to frame AI literacy around digital equity, age-appropriateness and relevance to children’s real-world experience. | Matches the manuscript’s summary of Yang’s argument about why, what and how to teach AI to young children. | https://www.sciencedirect.com/science/article/pii/S2666920X22000169 |
29 | Exploring Parent-Child Perceptions on Safety in Generative AI (Yu et al., accessible preprint) | Used in the draft to support claims that parents often lack awareness of children’s AI uses and that parents and children perceive risks differently. | The accessible preprint aligns closely with the manuscript’s summary. The draft labels this citation as Yu et al. (2025); the currently accessible public version found online is a 2024 preprint. | |
30 | Introducing AI as a subject in Swedish education (Velander et al., record) | Used in the draft to support the example of Sweden introducing AI as a discrete upper-secondary subject and moving beyond narrow functional literacy. | Matches the draft’s Sweden example, although the accessible web record should be checked against the exact year/version cited in the manuscript. | https://umu.diva-portal.org/smash/record.jsf?pid=diva2%3A1893857 |
31 | Analytical modelling and UK Government policy (Oldfield, 2022) | Used in the draft to support the claim that subject benchmark statements and AI-related curriculum content have not kept pace with current needs. | Matches the manuscript’s argument about gaps in benchmark statements, modelling and ethics education. | https://link.springer.com/article/10.1007/s43681-021-00078-9 |
32 | Technical challenges and perception: does AI have a PR issue? (Oldfield, 2024 online-first / 2023 citation trail) | Cited in the draft materials and relevant to claims about public understanding, ethics and the need for better communication around AI. | This is a directly cited article in the uploaded draft package and aligns with the manuscript’s broader concern about public perception and communication. | https://link.springer.com/article/10.1007/s43681-023-00316-2 |
33 | A Comprehensive Review of AI Myths and Misconceptions | Used in the draft to support the claim that the public still holds misconceptions about what AI is and how it works. | Matches the draft’s use of the source as a misconceptions review, while noting it is presented online as a preprint. | |
34 | The future of AI and education: Some cautionary notes | Used in the draft to support the point that future AI education must foreground limitations, harms and caution rather than simple optimism. | Matches the draft’s cautionary framing around harms and limitations in educational AI. | |
35 | K-12 AI Guidelines: What Teachers Need to Know | Used in the draft to support the claim that educators face time and implementation pressures when trying to add AI education. | Matches the draft’s point about practical barriers for teachers adopting AI-related teaching. | https://ojs.aaai.org/index.php/aimagazine/article/download/9857/9716 |
36 | Ethics and Information Technology article on learning objectives for K-12 AI education | Used in the draft to support the claim that there is a lack of proper resources and concrete learning objectives in AI education. | Matches the manuscript’s use of the source for learning-objective and curriculum-design concerns. | https://link.springer.com/content/pdf/10.1007/s10676-023-09733-7.pdf |
37 | Funding challenges mean UK risks falling behind AI education | Used in the draft to support the claim that there is little consistency and standardisation in tertiary AI education offerings. | Matches the draft’s use of this source as a current-commentary indicator of uneven university provision. | |
38 | Generation Ready: Scaling safe, high-quality AI in England’s schools | Used in the draft to support the claim that there is a divide between state and private schools in AI understanding and adoption. | Matches the draft’s explicit use of the Tony Blair Institute report for educational inequality around AI adoption. | |
39 | AI literacy has been defined in a broad range of ways (systematic review source cited in draft) | Used in the draft to support the claim that AI literacy is defined in multiple different ways across the literature. | Matches the draft’s statement introducing several alternative definitions and scales of AI literacy. | |
40 | Submission Guidelines / methodology note cited in draft package | Included because it appears in the uploaded draft package as part of the article preparation materials. | This is not a content source for the website itself, but it is genuinely cited in the uploaded draft package and is preserved here for completeness. |
These entries directly address family confidence, home support, safe use, and the need for trustworthy guidance rather than random search results.
No. | Resource | Audience | Description | Why it is suitable | Link |
41 | Internet Matters Parent Guide to AI Tools | Parents; carers | Interactive guide explaining AI tools, benefits, risks, and an AI dictionary. | Suitable because it translates AI jargon into everyday language for home use. | https://www.internetmatters.org/resources/parent-guide-to-artificial-intelligence-ai-tools/ |
42 | Internet Matters AI Safety Advice Hub | Parents; carers | Hub with practical tips for helping children use AI safely and appropriately. | Useful as a general starting point for families who want actionable advice rather than theory. | https://www.internetmatters.org/advice/by-activity/using-artificial-intelligence/ |
43 | What Is AI and Is It Safe for Children? | Parents; carers | Explains the main types of AI children might encounter and the main risks. | Suitable because many parents need a basic orientation before moving to specific tools. | |
44 | How to Use AI Safely with Children | Parents; carers | Practical safety steps for mainstream AI tools that were not designed specifically for children. | Strong fit because it turns concern into manageable home actions. | |
45 | Using Artificial Intelligence Safely | Parents; carers; professionals | Expert advice on AI and children’s digital wellbeing. | Useful because it frames AI literacy as wellbeing and safety, not just technical skill. | https://www.internetmatters.org/tech-and-kids-digital-futures/using-artificial-intelligence-safely/ |
46 | How to Decide if an AI Tool Is Safe for Your Child | Parents; carers | Checklist-style guidance on judging whether an AI tool is appropriate and helpful. | Suitable because families often need evaluation criteria rather than brand-specific advice. | |
47 | AI Chatbots and Virtual Friends | Parents; carers | Guide to companion chatbots, emotional reliance, and safe family responses. | Important because relationship-style AI tools create risks beyond homework and productivity. | |
48 | Talking About AI – A Guide for Families | Parents; carers; schools | PDF with age-appropriate conversation starters and activities for home discussions. | Highly suitable for a website resource list because it is concrete, downloadable, and ready to use. | |
49 | Me, Myself and AI – Chatbot Research | Parents; policymakers; schools | Research summary on children, parents, and chatbot use. | Useful because evidence-based resources strengthen the credibility of the project website. | https://www.internetmatters.org/hub/research/me-myself-and-ai-chatbot-research/ |
50 | Generative AI in Education: Kids and Parents’ Views | Parents; educators; policymakers | Research report on how children use generative AI for schoolwork and how families view it. | Suitable because it connects family concerns directly to school contexts. | https://www.internetmatters.org/hub/research/generative-ai-in-education-report/ |
51 | Childnet Guide to ChatGPT for Parents and Carers | Parents; carers | Accessible explainer of what ChatGPT is and what adults should watch for. | Useful as a tool-specific page because ChatGPT is one of the best-known AI products. | https://www.childnet.com/blog/what-do-i-need-to-know-about-chatgpt-a-guide-for-parents-and-carers/ |
52 | Childnet on Social Media Algorithms | Parents; carers; young people | Plain-language guide to how algorithms shape feeds and recommendations. | Suitable because AI literacy should include recommendation systems, not only chatbots. | |
53 | Childnet on Snapchat’s My AI | Parents; carers; teenagers | Guide to one of the AI tools young people may encounter inside familiar apps. | Useful because it brings AI literacy into platforms children already use. | https://www.childnet.com/blog/snapchats-new-ai-chatbot-and-its-impact-on-young-people/ |
54 | Be Internet Awesome Pledge | Families | Family-facing pledge and digital citizenship prompts for shared expectations at home. | Suitable because families often need simple shared rules before they need detailed theory. | |
55 | Common Sense/Day of AI Family Toolkit News Page | Families; educators | Announcement page describing a toolkit for talking to children about AI, privacy, fairness, and responsibility. | Useful as a signpost to family-facing AI literacy materials built specifically for parents. |
These resources support classroom practice, school implementation, curriculum design and professional development.
No. | Resource | Audience | Description | Why it is suitable | Link |
56 | Code.org AI Education Hub | Teachers; students; schools | Central landing page for Code.org’s AI curriculum, tools, professional learning, and videos. | A strong core resource because it is free, structured, and easy for schools to adopt. | |
57 | How AI Makes Decisions | Grades 3-5; teachers | Elementary unit on prediction, categorisation, and simple data models. | Suitable for younger learners because it introduces decision-making in plain language and hands-on formats. | https://studio.code.org/courses/k5-ai-data-2024/units/1?viewAs=Instructor |
58 | AI for Oceans | Grades 3-8; teachers; clubs | Interactive activity where learners train a model to distinguish sea creatures from trash. | Useful because it combines environmental relevance with an accessible first model-training experience. | https://studio.code.org/courses/oceans/units/1?viewAs=Instructor |
59 | Dance Party: AI Edition | Grades 3-8; teachers | Creative activity that introduces AI concepts through music and movement. | Suitable for engagement because it lowers intimidation and reaches learners who enjoy expressive activities. | https://studio.code.org/courses/dance-ai-2023/units/1?viewAs=Instructor |
60 | Exploring Generative AI | Grades 8-12; teachers | Curriculum on generative AI, projects, and ethical use. | A strong fit because generative AI is the tool set many families and pupils are currently encountering. | https://studio.code.org/courses/exploring-gen-ai-2025?viewAs=Instructor |
61 | Generative AI for Humanities | Grades 6-12; humanities teachers | Two lessons on writing and research with AI in humanities contexts. | Useful because AI literacy should not sit only in computing departments. | https://studio.code.org/courses/gen-ai-humanities/units/1?viewAs=Instructor |
62 | Coding with AI | Grades 7-12; computing teachers | Unit on using LLMs for coding while retaining ethics and human judgement. | Suitable for responsible use conversations where efficiency and deep learning need to be balanced. | https://studio.code.org/courses/coding-with-ai/units/1?viewAs=Instructor |
63 | Societal Impact of Generative AI | Grades 7-12; PSHE/citizenship | Lesson on stakeholder perspectives and social impact of generative AI. | Useful for civic reasoning and discussion-based literacy rather than purely technical understanding. | https://studio.code.org/courses/ai-ethics-2023/units/1/lessons/2 |
64 | Computer Vision | Grades 6-12; teachers | Unit on how AI interprets image and video data. | Suitable because vision-based AI is highly visible in daily life, from phones to surveillance. | https://studio.code.org/courses/computer-vision/units/1?viewAs=Instructor |
65 | How AI Works | Grades 6-12; teachers | Lesson set aligned to short explainer videos on key AI concepts. | Useful for blended learning, homework, and flipped-classroom delivery. | https://studio.code.org/courses/how-ai-works-2023/units/1?viewAs=Instructor |
66 | AI and Machine Learning | Grades 6-12; teachers | Longer unit on how computers learn from data and how students can build relevant projects. | Suitable for learners ready to move from awareness to deeper applied understanding. | https://studio.code.org/courses/aiml-2025/units/1?viewAs=Instructor |
67 | Our AI Code of Ethics / AI Ethics Unit | Grades 7-12; teachers | Curriculum on beneficial and harmful impacts of AI and how communities can set ethical expectations. | Good fit for the project’s emphasis on safe, thoughtful, values-led AI use. | https://studio.code.org/courses/ai-ethics-2023/units/1?viewAs=Instructor |
68 | AI 101 for Teachers | Teachers; CPD leads | Free foundational series introducing AI, bias, responsible use, and classroom applications. | Suitable because many adults need confidence-building before they can support children. | https://code.org/en-US/professional-learning/artificial-intelligence-101 |
69 | Self-Paced Professional Learning Catalog | Teachers | Catalog of self-paced professional learning options, including AI-related courses. | Useful for flexible staff development where time and budgets are limited. | |
70 | AI Teaching Assistant | Teachers | AI-supported tool for navigation, lesson support, and teacher workload reduction. | Suitable as a practical example of how AI can assist educators without replacing judgement. | https://code.org/en-US/artificial-intelligence/teaching-assistant |
These entries were selected because they make AI concepts more concrete, age-appropriate and discussion-based.
No. | Resource | Audience | Description | Why it is suitable | Link |
71 | AI Foundations for Early Childhood | Ages 5-7; teachers; families | Play-based introduction to what AI is, how machines sense the world, and how human and machine intelligence differ. | Ideal for early years because it is age-appropriate, low-tech, and helps adults start AI conversations without assuming prior knowledge. | https://dayofai.org/units/ai-foundations-grades-k-2-ages-5-7 |
72 | How We Teach Machines | Ages 5-7; KS1 teachers | Unplugged unit on data, patterns, predictions, datasets, and algorithms for very young learners. | Suitable because it avoids heavy coding and makes abstract AI ideas concrete through sorting and classroom routines. | |
73 | What Is Artificial Intelligence? | Ages 8-18; teachers | Foundational lessons covering AI examples, machine learning, algorithms, bias, and the Five Big Ideas of AI. | Strong anchor resource for the project because it gives a clear starting point for children, parents, and educators. | |
74 | How Do Machines Learn? | Ages 8-18; teachers | Hands-on unit using classroom activities and simple model training to explain supervised learning and bias. | Useful for demystifying AI and showing that model performance depends on data quality and human choices. | |
75 | AI Literacy in 15 Minutes or Less | Ages 8-18; tutors; libraries | Short, no-prep activities for quick AI discussions, scavenger hunts, and classroom warm-ups. | Suitable for busy schools and community settings that need light-touch resources rather than full schemes of work. | |
76 | Ethical Use of AI Exploration | Ages 11-18; teachers | Single-lesson exploration of privacy, misinformation, fairness, plagiarism, and class-generated AI rules. | Fits the literacy project well because it helps learners move from awareness to responsible decision-making. | |
77 | AI Ethics Debate | Ages 11-18; secondary teachers | Structured debate activity on bias, privacy, facial recognition, and stakeholder perspectives. | Useful for developing critical thinking, discussion skills, and balanced judgement rather than passive tool use. | |
78 | AI, Effort, and Your Brain | Ages 11-18; teachers | Lesson on cognitive effort, cognitive debt, and when AI helps or weakens learning. | Highly relevant for schools because it tackles over-reliance on AI and supports healthy study habits. | |
79 | Work in the Age of AI | Ages 11-18; careers leads | Lessons on how AI changes jobs, tasks, responsibility, and human skills across sectors. | Suitable because citizen literacy should include future work, employability, and ethical use in real contexts. | |
80 | AI and the Creative Arts | Ages 8-18; arts and humanities | Explores AI-generated art, creativity, copyright, attribution, identity, and bias. | Good for cross-curricular delivery because it connects AI to subjects beyond computing and invites reflection. | |
81 | Making Sense of Our Surroundings | Ages 8-13; primary/KS2-3 | Data and AI literacy unit linked to weather, observation, and environmental patterns. | Suitable because it links AI to everyday evidence gathering and supports numeracy and science integration. | |
82 | Telling Climate Stories with Data | Ages 11-18; teachers | Project-based unit using real climate data to build stories, interpretation, and communication skills. | Useful because it teaches data literacy alongside AI literacy and shows how evidence supports public understanding. | |
83 | The Impact of AI on the Environment | Ages 11-18; eco-clubs; secondary | Two-part unit on AI’s carbon footprint, sustainability, and responsible technology use. | Suitable because it broadens AI literacy beyond tools to include environmental consequences. | https://dayofai.org/units/the-impact-of-ai-on-the-environment |
84 | How Are We Quantified by AI? | Ages 13-18; secondary teachers | Multi-lesson unit on data collection, identity, consent, visualisation, and data activism. | Strong fit for civic literacy because it shows how AI systems classify people and why representation matters. | |
85 | Generative AI with MIT App Inventor | Ages 14-18; computing teachers | Builds simple chatbot and image-generator apps while discussing prompts, limitations, and ethics. | Suitable for older learners who need applied, creation-based experiences rather than theory alone. | https://dayofai.org/units/generative-ai-with-mit-app-inventor |
These resources support broader public understanding, online safety, media literacy and low-barrier experimentation.
No. | Resource | Audience | Description | Why it is suitable | Link |
86 | Be Internet Awesome Educators Hub | Teachers; schools | Main hub for Google digital safety and AI literacy classroom resources. | Suitable because online safety and AI literacy overlap strongly for children and families. | |
87 | Be Internet Awesome AI Literacy Guide | Grades 2-8; teachers | Supplementary AI literacy curriculum on machine learning, data, and algorithms. | Useful because it is short, accessible, and built for the age range many parents ask about. | |
88 | Be Internet Awesome Curriculum | Teachers; primary schools | Free digital citizenship curriculum covering five core online safety themes. | Suitable because AI literacy should sit alongside broader digital citizenship, not apart from it. | |
89 | Interland | Children; families | Game-based online safety experience that makes core concepts memorable through play. | Useful for engaging younger learners who respond well to interactive practice. | |
90 | Be Internet Awesome Wellbeing Packet | Teachers; pastoral staff | Downloadable guide to digital wellbeing in classroom settings. | Suitable because healthy technology habits support responsible AI use. | |
91 | Be Internet Awesome Media Literacy Packet | Teachers; librarians | Foundational tools and concepts for evaluating media environments. | Useful because media literacy and AI literacy both depend on questioning and verification. | |
92 | Be Internet Awesome Printable Activities | Primary teachers; families | Offline classroom activities aligned to the Be Internet Awesome themes. | Suitable for schools and community groups that want low-tech delivery. | |
93 | Be Internet Awesome Tips | Children; families | Quick, memorable tips for safe, smart, and positive online behaviour. | Useful for website users who prefer short, skimmable guidance. | |
94 | Teachable Machine | Students; teachers; families | Browser-based tool for training simple image, sound, and pose models without coding. | Suitable because it makes machine learning tangible in minutes. | |
95 | Quick, Draw! | Children; families; teachers | Game where a neural network tries to guess doodles as the user draws. | Useful as a playful introduction to pattern recognition and machine learning. | |
96 | Quick, Draw! Data | Older students; teachers | Open drawing dataset showing how human-generated examples can support ML research. | Suitable for teaching that datasets are built from human inputs and can be inspected. | |
97 | MIT App Inventor | Teachers; older students | Block-based environment for building apps, including AI-related projects. | Useful because it supports active creation and computational thinking. | |
98 | MIT App Inventor Platform Overview | Teachers; facilitators | Accessible overview of App Inventor from MIT RAISE. | Suitable for educators who need a plain-language introduction before adopting the tool. | |
99 | Common Sense and Day of AI: Using AI Wisely for School Success | Families; educators | Toolkit announcement focused on school success, learning habits, and wise AI use. | Useful because it directly addresses AI use in homework and study contexts. | |
100 | Common Sense Media Research: Families’ Attitudes on AI | Parents; policymakers; educators | Recent research on how children and parents view AI, safety, and guardrails. | Suitable because the website should include evidence on both needs and attitudes. | https://www.commonsensemedia.org/research/a-new-look-at-families-attitudes-on-ai |
These resources were suggested by Dr Marie.
No. | Resource | Audience | Description | Why it is suitable | Link |
101 | QAA Advice and Resources on Generative AI (including AI and academic integrity) | Students; teachers; HE staff; policymakers | QAA hub page collating guidance on generative AI in higher education, including advice on maintaining academic integrity, reconsidering assessment in the AI era, and supporting students to use AI ethically without breaching plagiarism policies. | Directly addresses AI and plagiarism for students and is produced by the UK’s quality assurance body for higher education, making it an authoritative source for the project website. | https://www.qaa.ac.uk/sector-resources/generative-artificial-intelligence/qaa-advice-and-resources |
102 | UNICRI AI Literacy for Children: Skills for a Changing World | Parents; educators; policymakers | UNICRI topic hub on AI literacy for children, including an AI Literacy Guide for Parents and the published article on parents’ perceptions of generative AI use by adolescents. Developed with international partners and grounded in research. | Suitable because it directly addresses parental awareness and children’s AI literacy from an international research perspective, closely aligned with the project’s focus on citizen literacy and family support. | https://unicri.org/topics/AI-Literacy-for-Children-skills-changing-world |
103 | UNICRI/INTERPOL Toolkit for Responsible AI Innovation in Law Enforcement (PDF) | Policy audiences; educators; general public | UNICRI/INTERPOL toolkit on responsible AI adoption in law enforcement, with a Workbook (March 2025) designed as a lifecycle companion for agencies introducing AI ethically. Note: the primary site (ai-lawenforcement.org) has a lapsed SSL certificate; the UNICRI PDF link is the stable access point. | Useful for broadening the resource bank beyond education into public-sector AI governance, showing citizens and educators how responsible AI adoption is approached across institutions. | https://unicri.org/topics/Toolkit-Responsible-AI-for-Law-Enforcement-INTERPOL%20and%20UNICRI |
104 | FAQ on the Use of AI in Research – SPRITE+ Hub | Postgraduate researchers; academics; educators | FAQ developed by an interdisciplinary team of ESRC Doctoral Training Partnership academics, covering ethical and practical questions about using AI in research, including authorship, integrity, and responsible use. | Suitable because it addresses AI literacy in academic and research contexts, bridging the gap between general public guidance and specialist advice for students and researchers using AI professionally. | |