AI4Gov – Trusted AI for Transparent Public Governance fostering Democratic Values
https://ai4gov-project.eu (last updated: Wed, 13 Mar 2024)

Recordings from the AI, Big Data And Democracy – AI UK Fringe Event
Mon, 11 Mar 2024 (https://ai4gov-project.eu/2024/03/11/recordings-from-the-ai-big-data-and-democracy-ai-uk-fringe-event/)

The recordings of the deep-dive workshop organized by the AI, Big Data, and Democracy taskforce (AI4Gov, KT4D, Ithaca and ORBIS projects) as part of the AI UK Fringe event on March 7th are now available on YouTube.

AI4Gov was represented by:

  • Fabiana Fournier from IBM Research, with the keynote speech: “Trustworthy and Explainable AI in Policy Making: How can we enhance trustworthiness, fairness and explainability of the modern AI by enabling humans to reason about the outcomes of AI-based models.”
  • Andreas Karabetian, from University of Piraeus, with the live demonstration: “A Self-Explained Visualization Workbench: a Bias Detection Toolkit that empowers policymakers, researchers, and practitioners with understanding tailored for detecting biases in AI systems.”

The post Recordings from the AI, Big Data And Democracy – AI UK Fringe Event first appeared on AI4Gov.

Unleashing the potential of AI: EU AI Act and its policy implications
Mon, 11 Mar 2024 (https://ai4gov-project.eu/2024/03/11/unleashing-the-potential-of-ai-eu-ai-act-and-its-policy-implications/)

As part of its digital strategy, the European Commission proposed the first regulatory framework for artificial intelligence (AI) in April 2021. Following extensive deliberations, on 9 December 2023 the European Parliament and the Council reached a provisional agreement on the first EU regulation on artificial intelligence, officially known as the EU AI Act. However, the finalized text awaits formal adoption to attain legal status within the EU. Designed to improve the environment for the advancement and application of AI technology, this comprehensive legislation aims to set the stage for responsible AI development, deployment, and governance, signaling a paradigm shift in the way policymakers approach this transformative technology.

In the dynamic landscape of governance and societal progress, the integration of AI has emerged as a powerful tool to drive evidence-based innovations, policies, and policy recommendations. Applied across the public sphere, political power, and economic influence, this novel technology is paving the way for a more democratic and inclusive future:

  • In the realm of evidence-based innovations, AI plays a pivotal role in sifting through vast amounts of data to extract meaningful insights. Policymakers can leverage AI-driven analytics to identify patterns, trends, and correlations within datasets, empowering them to make informed decisions. This data-driven approach ensures that policies are not only well-informed but also adaptable to the ever-changing needs of society.
  • AI contributes to the democratization of information and access to the public sphere. Through advanced natural language processing and sentiment analysis, AI systems can gauge public opinions, concerns, and sentiments across various platforms. This real-time feedback loop enables policymakers to engage with citizens on a more personal level, fostering a sense of inclusivity and responsiveness in the democratic process.
  • Political power, when wielded responsibly, can be a force for positive change. AI facilitates the development of policies that are not only responsive to public needs but also reflective of diverse perspectives. By analyzing demographic data and understanding the unique challenges faced by different communities, policymakers can tailor their strategies to ensure equitable distribution of resources and opportunities.
  • Economic power, often a driving force in shaping societal structures, can be harnessed through AI to promote inclusive growth. AI-driven economic analyses can identify areas of potential development, allocate resources efficiently, and mitigate disparities. This data-driven economic approach ensures that policies are not only effective in theory but also have tangible, positive impacts on communities at all levels.
  • Policy recommendations, when grounded in robust data and analysis, gain credibility and effectiveness. AI acts as a facilitator in generating well-informed policy recommendations by identifying the most viable and impactful strategies. This ensures that policymakers are equipped with the tools to address complex challenges, from economic inequalities to environmental sustainability, with precision and foresight.

However, the integration of AI in democratic processes comes with its own set of challenges, as great power inherently carries great responsibility. Striking a balance between innovation and regulation is crucial to ensuring that AI serves the democratic ideals of fairness, justice, and equality. Therefore, policymakers must prioritize transparency, accountability, and ethical considerations to build public trust in AI systems. In this respect, the EU is taking proactive steps to ensure that AI is harnessed for the greater good, as one of the key implications of the EU AI Act lies in its impact on policy-making activities. The legislation provides a set of guidelines that promote fairness, privacy, and non-discrimination in AI systems. This not only safeguards individual rights but also fosters trust in the technology, a crucial aspect for its widespread acceptance.

Moreover, the EU AI Act introduces a risk-based approach, categorizing AI systems into different levels of risk. Policymakers must adapt their strategies to navigate this risk-based framework, ensuring that regulatory measures align with the level of potential harm associated with specific AI applications. To ensure effective implementation, policymakers should adhere to a set of guidelines, including:

  1. Promoting Stakeholder Engagement: Policymakers should actively engage stakeholders, including industry representatives, civil society organizations, and academia, throughout the standardization process. This collaborative approach not only ensures a dynamic and competitive AI landscape, but also allows policymakers to stay informed about the latest advancements, enabling them to make informed decisions that align with the rapidly evolving nature of AI technologies.
  2. Investing in Research and Innovation: Governments and the private sector should allocate resources to support research and innovation in AI standardization. Funding initiatives aimed at developing cutting-edge methodologies, tools, and best practices for assessing AI systems’ compliance can drive technological advancements and maintain Europe’s competitiveness in the global AI landscape.
  3. Capacity Building and Awareness: Enhancing awareness and understanding of AI standards among businesses, policymakers, and end-users is essential. Capacity-building initiatives, such as training programs, workshops, and educational campaigns, can empower stakeholders to navigate the complexities of AI regulation and compliance, fostering a culture of responsible AI adoption.
  4. Facilitating Access to Standards: Ensuring accessibility and affordability of AI standards is crucial, particularly for small and medium-sized enterprises (SMEs) and startups. Governments can support initiatives that provide guidance, tools, and resources for SMEs and startups to navigate compliance requirements effectively. Additionally, promoting open access to standards and facilitating their dissemination can foster innovation and competition in the AI market.
  5. Monitoring and Evaluation: Establishing mechanisms for continuous monitoring and evaluation of AI standards’ effectiveness is essential. Regular assessments enable policymakers to identify emerging challenges, gaps, and opportunities for improvement, facilitating iterative refinement of the regulatory framework. Collaboration with independent experts and research institutions can provide valuable insights into the real-world impact of AI standards on various stakeholders.
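
The Act's risk-based categorization can be sketched in a few lines of code. The four tier names (unacceptable, high, limited, minimal) follow the Act; the example systems mapped to each tier and the one-line obligation summaries are simplified illustrations for this sketch, not legal classifications.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk levels defined by the EU AI Act."""
    UNACCEPTABLE = 4   # prohibited outright (e.g., social scoring by governments)
    HIGH = 3           # strict obligations before market entry
    LIMITED = 2        # transparency obligations (e.g., disclosing chatbots)
    MINIMAL = 1        # no additional obligations

# Illustrative mapping of example AI applications to tiers -- simplified
# assumptions for demonstration only, not legal advice.
EXAMPLE_CLASSIFICATION = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening_for_hiring": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> str:
    """Return a one-line summary of the regulatory consequence for a tier."""
    return {
        RiskTier.UNACCEPTABLE: "prohibited",
        RiskTier.HIGH: "conformity assessment, documentation, human oversight",
        RiskTier.LIMITED: "transparency obligations",
        RiskTier.MINIMAL: "no specific obligations",
    }[tier]

for system, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{system}: {tier.name} -> {obligations(tier)}")
```

The point of the tiered structure is that regulatory effort scales with potential harm: the same organization may deploy a minimal-risk spam filter with no extra obligations while its hiring tool faces the full high-risk regime.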

In conclusion, Europe’s new AI Act heralds a new era of responsible AI governance. Policymakers must now adapt to a landscape where ethical considerations, risk assessments, and collaboration are at the forefront. By embracing the power of AI within a transparent and ethical framework, policymakers can navigate the complexities of our rapidly evolving world while upholding democratic values. As Europe leads the way, the global community watches, recognizing that the responsible integration of AI is not just a technological milestone, but a pivotal step towards a future where innovation and democracy go hand in hand.

The post Unleashing the potential of AI: EU AI Act and its policy implications first appeared on AI4Gov.

AI, Big Data and Democracy – AI UK Fringe event
Mon, 04 Mar 2024 (https://ai4gov-project.eu/2024/03/04/ai-big-data-and-democracy-ai-uk-fringe-event/)

Are you intrigued by the intersection of Artificial Intelligence and Deliberative Democracy? Join us for an in-depth online workshop as part of the AI UK Fringe 2024.

📢 March 7, 12:00 – 16:00 CET

Online and Free! (Registration required: https://forms.office.com/e/X4bwv2ZzpJ)

What to Expect

  • Expert Keynotes: Hear from leading researchers on AI’s impact on democracy, online deliberation, trustworthy AI, and the skills needed to navigate this landscape
  • Interactive Demos: Get hands-on experience with cutting-edge tools designed to support healthy democratic discourse using AI-powered technologies
  • Thought-Provoking Discussions: A lively panel and opportunities to shape the direction of responsible AI in democratic processes

The post AI, Big Data and Democracy – AI UK Fringe event first appeared on AI4Gov.

WOULD YOU TRUST A LARGE LANGUAGE MODEL TO HELP EXPLAIN INSTITUTIONAL PROCESSES? (IBM)
Thu, 01 Feb 2024 (https://ai4gov-project.eu/2024/02/01/would-you-trust-a-large-language-model-to-help-explain-institutional-processes-ibm/)

You are surprised to see that your tax refund application has been stuck “in review” for nearly a month, while your father-in-law received his refund within two weeks of his submission. An option to chat with a government representative could potentially address the delay by identifying any minor issues or missing documents. However, is it possible to rely on such an automated service to effectively determine and explain the true reasons for such an unexpected delay? Can organizations rely on recent advances in large language models (LLMs) to drive the deployment of such a service? Would you trust an automatic explanation given to you regarding the delayed response to your tax return?

LLMs are a type of artificial intelligence (AI) program that can recognize and generate text, among other tasks. They are trained on a vast amount of text to interpret and generate human-like textual content. While the adoption and usage of LLMs by organizations to automate many aspects of their operations is growing rapidly, this is also accompanied by a certain degree of doubt related to the tendency of LLMs to produce what we call “hallucinations,” or incorrect information, due to a lack of inherent capacity to reason with understanding. In a recent paper, the authors tackle this issue and try to answer the question “how well can LLMs explain business processes?”.

Alongside the rapid development of AI-based models there is an inherent trust issue, due to the “black box” nature of these models, that hinders their wide adoption. This lack of trust has, over the past years, given rise to Explainable AI (XAI) methods developed to explain decisions and actions powered by AI models. Such methods aim to give users a better understanding of the inner workings of AI, to ensure that the model is making its decisions adequately and reliably, with little to no inherent bias, and to give users assurance that the AI model will not fail upon encountering unforeseen circumstances in the future.

When applied to business processes, XAI techniques aim to explain the different factors affecting a certain condition in a business process. In our earlier tax-refund example, an XAI explanation could point to reasons for the delay in the tax refund application, such as missing information in the application, the tax credit claimed, or the tax amount.
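
As a toy illustration of the kind of attribution such an XAI technique might produce, consider a hypothetical additive "delay risk" model over a few application features. The feature names and weights below are invented for this sketch; real XAI methods derive attributions from a trained model rather than fixed weights.

```python
# Hypothetical per-feature weights of a toy linear delay-risk model.
# All names and values are invented for illustration.
WEIGHTS = {
    "missing_information": 0.50,
    "claimed_tax_credit": 0.30,
    "refund_amount_high": 0.20,
}

def explain_delay(application: dict) -> list[tuple[str, float]]:
    """Rank each feature's contribution to the predicted delay risk,
    highest first; the top-ranked features form the explanation."""
    contributions = {f: WEIGHTS[f] * application.get(f, 0.0) for f in WEIGHTS}
    return sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)

# An application with missing documents and a claimed tax credit:
ranking = explain_delay({"missing_information": 1.0, "claimed_tax_credit": 1.0})
for feature, score in ranking:
    print(f"{feature}: {score:.2f}")
```

Here the explanation would name the missing documents as the dominant factor. The shortcomings discussed next are exactly about what such flat feature rankings miss: process constraints, context, and causal order.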

However, contemporary XAI techniques are not adequate for producing faithful and correct explanations when applied to business processes: they generally fail to express the constraints of the business process model; they usually omit the rich contextual information about the situations that affect process outcomes; they do not reflect the true causal execution dependencies among the activities in the business process; and they are rarely expressed in a human-interpretable form that eases understanding.

To this end, IBM Research, a partner in the EU AI4GOV project, introduces Situation-aware eXplainability (SAX), a framework for generating explanations for business processes that is meant to address these shortcomings. More specifically, an explanation generated with the framework, a “SAX explanation” for short, is a causally sound explanation that takes into account the process context in which a condition occurred. Causally sound means that the explanation provides an account of why the condition occurred as a faithful, logical entailment reflecting the genuine chain of business process executions that yielded the condition. The context includes knowledge elements that were originally implicit, or part of the surrounding system, yet affected the choices made during process execution. The SAX framework provides a set of services that aid the automatic derivation of SAX explanations by leveraging existing LLMs. Using these services, process-specific knowledge can be prompted as LLM input preceding the interaction with it. The library is expected to be released as an open-source tool at the end of the AI4GOV project.
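
The library's API has not yet been published, but the core idea of prepending process-specific knowledge to the LLM prompt can be sketched roughly as follows. The function name, section layout, and example process are hypothetical illustrations, not the actual SAX interface.

```python
def build_sax_prompt(process_model: str, causal_dependencies: str,
                     context_facts: str, question: str) -> str:
    """Assemble an LLM prompt that injects process knowledge ahead of the
    user's question, in the spirit of the SAX framework. The section
    layout here is a hypothetical sketch, not the library's format."""
    return "\n\n".join([
        "You are explaining conditions in a business process.",
        f"Process model (activities and constraints):\n{process_model}",
        f"Causal execution dependencies:\n{causal_dependencies}",
        f"Contextual facts for this case:\n{context_facts}",
        f"Question:\n{question}",
    ])

# Invented example process for the tax-refund scenario:
prompt = build_sax_prompt(
    process_model="submit -> review -> (request_documents | approve) -> refund",
    causal_dependencies="request_documents delays refund; review requires a complete file",
    context_facts="application flagged for manual review; one document missing",
    question="Why is this tax refund application still in review?",
)
print(prompt)
```

Because the model, the causal dependencies, and the case context precede the question, the LLM's answer is constrained by the genuine process structure rather than generated from its training priors alone.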

To assess the perceived quality of LLM-generated explanations about different conditions in business processes, IBM Research rigorously developed a designated scale and conducted a corresponding user study. In the study, users were asked to rate, on a Likert scale, different quantitative measures of the quality of a variety of LLM explanations. These explanations were derived by the LLM after different combinations of knowledge inputs were introduced to the LLM prompt before the interaction. Our findings show that the input presented to the LLM helped guardrail its performance, yielding SAX explanations with better perceived fidelity. This improvement is moderated by the perception of trust and curiosity. At the same time, it comes at the cost of the perceived interpretability of the explanation.

The overall approach of the generation of explanations by an LLM, and the resulting user perception of the output, is depicted in the figure below.

The post WOULD YOU TRUST A LARGE LANGUAGE MODEL TO HELP EXPLAIN INSTITUTIONAL PROCESSES? (IBM) first appeared on AI4Gov.

Deliverable D4.3: Policies Visualization Services V1
Fri, 12 Jan 2024 (https://ai4gov-project.eu/2024/01/12/deliverable-d4-3-policies-visualization-services-v1/)

D4.3 Policies Visualization Services V1 introduces the Visualization Workbench, a key component of the AI4Gov project. It serves as a centralized platform for innovative solutions at the intersection of artificial intelligence and governance. Positioned as the project’s front end, it hosts seven distinct use cases across three domains: Water Management, Sustainable Development, and Tourism. Each use case is showcased as a dynamic interface within the context of the deliverable, highlighting the innovative fusion of cutting-edge technology and domain-specific knowledge.

Emphasizing ethical AI deployment, the Workbench integrates tools such as the Bias Detector Toolkit and the Policy Recommendation Toolkit, showcasing the pivotal role of visualization services in AI policy formulation and bias detection. These services provide tools for comprehensive understanding and informed decision-making, fostering transparency and accountability. In bias detection, visualization services play a crucial role in surfacing latent biases within AI algorithms and enabling a proactive approach to identifying, understanding, and mitigating them. This underscores the importance of ethical AI practices and their potential to shape policies aligned with fairness, equity, and societal well-being.
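
As a minimal illustration of the kind of statistic a bias-detection toolkit might compute and surface in a visualization, the sketch below calculates the demographic parity difference between two groups. The sample data and the choice of metric are illustrative assumptions, not the Bias Detector Toolkit's actual implementation.

```python
def demographic_parity_difference(outcomes: list[tuple[str, int]]) -> float:
    """Absolute difference in positive-outcome rates between two groups.
    `outcomes` is a list of (group, decision) pairs, decision in {0, 1}."""
    groups: dict[str, list[int]] = {}
    for group, decision in outcomes:
        groups.setdefault(group, []).append(decision)
    rates = {g: sum(d) / len(d) for g, d in groups.items()}
    (_, r1), (_, r2) = sorted(rates.items())
    return abs(r1 - r2)

# Invented sample: group A approved 3 of 4 times, group B only 1 of 4.
sample = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
          ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
print(demographic_parity_difference(sample))  # 0.5 -- a large gap worth investigating
```

A dashboard that plots this number per decision system, per citizen group, is one concrete way visualization turns a latent bias into something a policymaker can see and act on.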

The post Deliverable D4.3: Policies Visualization Services V1 first appeared on AI4Gov.

Workshop Presentations: “Trustworthy Artificial Intelligence for Transparent Public Governance”
Wed, 03 Jan 2024 (https://ai4gov-project.eu/2024/01/03/parousiaseis-imeridas/)

Last November, AI4Gov organized a one-day event with two panel discussions and an educational workshop on the use of artificial intelligence in public governance, exclusively for a Greek audience.

The hybrid event was attended by more than 160 people, including members of municipal councils and the academic community, experts, and citizens’ associations, and was opened with welcoming remarks by representatives of the Ministry of Digital Governance as well as of regional and local government.

The presentations from the event are available on our YouTube channel, and educational material is being prepared based on participants’ responses in the training workshop.

The post Workshop Presentations: “Trustworthy Artificial Intelligence for Transparent Public Governance” first appeared on AI4Gov.

Holistic Regulatory Framework: AI4Gov’s tool for ethical and democratic AI (VIL)
Thu, 14 Dec 2023 (https://ai4gov-project.eu/2023/12/14/ai4govs-tool-for-ethical-and-democratic-ai/)

Trust. A complex concept that affects all human relationships. But what happens when you have to trust an AI tool? Who makes the rules, and how can these rules ensure that the tool will respect your (human) rights and avoid biases? To develop a responsible and ethical AI tool, it is vital to explain both how it works and how it is regulated. In the case of AI4Gov, this is the job of the Holistic Regulatory Framework (HRF).

The relevance of bias to human rights is profound. Human rights are inherent to all individuals, regardless of their background, identity, or characteristics. However, bias can undermine these rights, impeding equal access to opportunities, resources, and fair treatment. It can perpetuate inequality, discrimination, and marginalisation, particularly for underrepresented groups, and ultimately result in the denial of opportunities or of equal access to resources and services.

The HRF refers to a comprehensive structure and a set of guidelines intended to govern the use of AI and Big Data in the context of democracy and EU values. This framework aims to ensure that AI technologies, especially when applied to governmental processes or services, adhere to fundamental rights and values, do not perpetuate bias or discrimination, and respect existing laws, regulations, and ethical standards. It seeks to address these concerns, ensuring that the platform is just, equitable, and compliant with prevailing standards and laws.

To build a comprehensive understanding of the challenges and opportunities in integrating AI and Big Data in governance and policy-making, a systematic procedure was initiated to construct the AI-based Democracy HRF. At its core, the procedure applied the Delphi method to undertake a SWOT (Strengths, Weaknesses, Opportunities, and Threats) analysis for the HRF. A panel of experts in AI governance and policy-making was assembled for this purpose. Leveraging the insights from the SWOT analysis, preliminary guidelines and recommendations for the HRF were formulated, with a focus on ensuring that the deployment of AI and Big Data in policy management remains democratic, transparent, and aligned with ethical standards.

Currently, the main aspects of AI4Gov’s HRF are:  

Defining Bias and Discrimination

The HRF provides a holistic definition of bias, discrimination, unfairness, and non-inclusiveness. This involves a) aligning technical definitions (from the AI/tech side) with social science definitions and b) understanding how these two aspects interact when AI and Big Data technologies are developed or deployed.

Ensuring Compliance with EU Values and Regulations

The framework seeks to evaluate the AI4Gov Platform’s alignment with current EU regulations concerning fundamental rights and values. This ensures that the technologies developed and deployed respect the rights of citizens and adhere to important regulations like the General Data Protection Regulation (GDPR). 

Qualitative Analysis of Rights and Values

To ensure that the HRF takes into consideration all potential forms of bias, a qualitative analysis is being conducted, focusing on understanding how traditional (non-AI) biases might currently affect the rights and values of certain citizen groups, such as ethnic minorities, migrants, religious groups, and persons with disabilities. This part of the HRF aims to uncover areas where discrimination might be overlooked, especially in relation to existing EU regulations on human rights protection.

Specification and Design of the Framework

The HRF provides a mapping of the existing processes to a policy management lifecycle and highlights enhancements through proposed AI solutions. The HRF will be based on qualitative analyses of fundamental rights, EU values, legal activities, and ethical protocols. It will ensure that citizens are protected from potential abuses resulting from AI and Big Data use and that the framework adheres to existing laws, protocols, and ethical recommendations.  

Reference Architecture

The HRF acts as the ethical and regulatory compass, ensuring that the reference architecture developed is not only technologically robust but also ethically sound, legally compliant, and fundamentally aligned with the overarching goals and values of the AI4Gov project. The HRF shapes how the different components of the architecture interact with each other, both in terms of data flow and functional hierarchy, ensuring that these interactions remain compliant with the ethical, legal, and functional standards the framework sets forth.

In the realm of AI and Big Data, the flow and processing of information are of paramount importance. The HRF provides guidelines on how data should be collected, processed, stored, and shared, ensuring that privacy, fairness, and security are maintained throughout. Moreover, the HRF is not just a set of guidelines or a checklist for the project: it serves as a foundational blueprint for the entirety of the AI4Gov architecture. By aligning with the HRF, the project ensures that the architecture, and by extension all its subsequent developments and deployments, operates within a framework that respects human rights, EU values, and ethical considerations.

In essence, the HRF in AI4Gov provides a thorough, multi-faceted approach to ensuring that AI and Big Data technologies are developed and used responsibly, ethically, and in line with the fundamental rights and values of the European Union (EU). 

The post Holistic Regulatory Framework: AI4Gov’s tool for ethical and democratic AI (VIL) first appeared on AI4Gov.

AI4Gov educational workshop on the role and impact of Artificial Intelligence in local government and governance
Tue, 14 Nov 2023 (https://ai4gov-project.eu/2023/11/14/ai4gov-educational-workshop-on-the-role-and-impact-of-artificial-intelligence-in-local-government-and-governance/)

Next Tuesday, November 21, AI4Gov is organizing its first educational workshop on the role and impact of Artificial Intelligence in local government and governance. The event will be hosted by the Aristotle University of Thessaloniki, and it will also be possible to attend online.

The program includes two discussion panels, with participants from the Local Government:

  1. Artificial Intelligence in Local Governance: Perspectives and Challenges, moderated by the Ministry of Tourism
  2. Accessible and Inclusive Artificial Intelligence for Citizens, moderated by ViLabs

Distinguished guests will address the workshop, most prominently the Minister of Digital Governance, Mr. Dimitris Papastergiou.

You can find the detailed program here and the participation form here. Please note that the event will be held in Greek.

The post AI4Gov educational workshop on the role and impact of Artificial Intelligence in local government and governance first appeared on AI4Gov.

]]>
808
[AI4Gov] Deliverable 3.1: Decentralized Data Governance, Provenance and Reliability V1 https://ai4gov-project.eu/2023/11/14/ai4gov-d3-1/?utm_source=rss&utm_medium=rss&utm_campaign=ai4gov-d3-1 Tue, 14 Nov 2023 16:06:45 +0000 https://ai4gov-project.eu/?p=805 D3.1 Decentralized Data Governance, Provenance and Reliability V1 is the first iteration of two deliverables, which aim to define and implement the framework that realises the data governance of AI4Gov both at the level of specifications and at the technical level. Since AI4Gov aims at transparency in data sources and, consequently, in AI-generated bias reports, it […]

The post [AI4Gov] Deliverable 3.1: Decentralized Data Governance, Provenance and Reliability V1 first appeared on AI4Gov.

]]>
D3.1 Decentralized Data Governance, Provenance and Reliability V1 is the first of two deliverable iterations that define and implement the framework realising AI4Gov's data governance, both at the specification level and at the technical level. Since AI4Gov aims at transparency in data sources and, consequently, in AI-generated bias reports, it is imperative that: a) data points and justifications can be traced back to their original sources, and b) business logic common to all members of a consortium can be executed in a way that every member can validate and approve.

Blockchain technology is a key enabler for storing data in a decentralised manner and executing mutually endorsed business logic in the form of smart contracts. D3.1 identifies the key enablers for implementing such a governance model both off-chain and on-chain using the Hyperledger Fabric framework. It describes the fundamental mechanisms for anchoring data on the blockchain and for defining and modifying blockchain governance policies, covering both data manipulation and smart contract definition and execution. Furthermore, it establishes GDPR compliance by demonstrating that the data governance framework respects all tenets of the GDPR, with special emphasis on the “right to be forgotten” and how this right can be enforced despite the immutable nature of the blockchain.
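The anchoring pattern described above can be sketched as follows: only a content digest goes on the (immutable) chain, while the payload lives in an erasable off-chain store, which is also how the “right to be forgotten” can coexist with immutability. The class names and the in-memory “ledger” below are illustrative assumptions, not AI4Gov's actual Fabric chaincode:

```python
import hashlib
import json

class OffChainStore:
    """Mutable storage holding the actual (possibly personal) data."""
    def __init__(self):
        self._records = {}

    def put(self, key: str, payload: dict) -> str:
        self._records[key] = payload
        # Canonical serialization so the digest is reproducible.
        blob = json.dumps(payload, sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()

    def erase(self, key: str) -> None:
        # "Right to be forgotten": the payload itself is deleted.
        del self._records[key]

    def get(self, key: str):
        return self._records.get(key)

class Ledger:
    """Append-only anchor log standing in for the blockchain."""
    def __init__(self):
        self.anchors = []  # immutable in a real chain

    def anchor(self, key: str, digest: str) -> None:
        self.anchors.append({"key": key, "sha256": digest})

store, ledger = OffChainStore(), Ledger()
digest = store.put("report-42", {"subject": "alice", "bias_score": 0.12})
ledger.anchor("report-42", digest)

# Erasing the off-chain payload satisfies an erasure request, while the
# on-chain anchor (a hash, not personal data) remains intact.
store.erase("report-42")
```

Any later copy of the record can still be checked against the anchor by recomputing its digest, but the personal data itself is gone.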

The post [AI4Gov] Deliverable 3.1: Decentralized Data Governance, Provenance and Reliability V1 first appeared on AI4Gov.

]]>
805
Leveraging Self-Sovereign Identity and European Blockchain Services Infrastructure for Transparency in AI (UBI) https://ai4gov-project.eu/2023/11/14/leveraging-self-sovereign-identity-and-european-blockchain-services-infrastructure-for-transparency-in-ai-ubi/?utm_source=rss&utm_medium=rss&utm_campaign=leveraging-self-sovereign-identity-and-european-blockchain-services-infrastructure-for-transparency-in-ai-ubi Tue, 14 Nov 2023 16:02:25 +0000 https://ai4gov-project.eu/?p=799 As blockchain technology finds applications in use cases involving Self-Sovereign Identities (SSI), citizens and public organisations can establish trust in a transparent and direct way without the need for intermediaries. This direct channel of verifying identity and transparency can also extend to use cases involving AI. In the era of generative AI, it is important […]

The post Leveraging Self-Sovereign Identity and European Blockchain Services Infrastructure for Transparency in AI (UBI) first appeared on AI4Gov.

]]>
As blockchain technology finds applications in use cases involving Self-Sovereign Identity (SSI), citizens and public organisations can establish trust in a transparent and direct way, without the need for intermediaries. This direct channel for verifying identity and establishing transparency can also extend to use cases involving AI. In the era of generative AI, it is important to be able to trace AI-generated content back to its original creator. The benefit is twofold: a) biased content and deepfakes can be detected more easily, and b) creators are protected from copyright infringement. Leveraging SSI to attach digital signatures to AI-generated content, and requiring such content to be digitally signed, achieves both objectives and provides an extra layer of protection for end users and organisations.

Self-Sovereign Identity in AI

Self-Sovereign Identity (SSI) allows individuals to establish and manage their own identity without the need for identity providers. Under the SSI model, identities, whether they belong to an individual or correspond to an organisation, are handled exclusively by the identity holder. Data ownership is proven via the SSI, and transactions that involve presenting proofs, validating credentials, or even transferring asset ownership do not require a central authority to act as an arbiter between the transacting parties. As SSIs are, in practice, decentralized identities, blockchain technology is a major enabler for establishing them.
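As a concrete illustration of holder-managed identity, the sketch below builds a minimal W3C-style DID document, the self-published record that lists the holder's public keys; the `did:example` method and the key value are placeholders, not part of AI4Gov's implementation:

```python
def make_did_document(did: str, public_key_multibase: str) -> dict:
    # Minimal W3C DID Core-style document: the holder publishes it
    # themselves, so no identity provider vouches for key ownership.
    key_id = f"{did}#key-1"
    return {
        "@context": "https://www.w3.org/ns/did/v1",
        "id": did,
        "verificationMethod": [{
            "id": key_id,
            "type": "Ed25519VerificationKey2020",
            "controller": did,
            # Placeholder value; a real wallet would put its actual key here.
            "publicKeyMultibase": public_key_multibase,
        }],
        "authentication": [key_id],
    }

doc = make_did_document("did:example:alice", "z6MkPLACEHOLDER")
```

Anyone resolving `did:example:alice` obtains this document and can verify signatures against the listed key, with the holder remaining the sole controller.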

Apart from data ownership and control, which are among the main benefits of SSI, its application is readily apparent in the field of AI, where data subjects and content creators can use it for better control over their data.

First of all, when third parties handle identity provision and verification, data subjects may have their identity details stored in numerous repositories, which can be difficult to track. While, in theory, they can invoke the right to be forgotten to remove the relevant entries, their data may in the meantime have been used by a data processor and embedded into an AI model employed, for example, for advertising purposes. It is extremely difficult to retain control over this information, or even to track the various knowledge bases into which the subject's data has leaked.

Moreover, from a creator's point of view, generative AI is increasingly used to create content. Copyright over content can be proved and controlled more easily via an SSI scheme in which users sign documents with their own SSI keys, rather than delegating this functionality to third parties, which can make copyright disputes hard to resolve.

In a more drastic step, SSI can be applied to the AI models themselves: an AI-generated report can be signed with an SSI issued to the AI itself. In this way, bias reports can be traced back to the AI that generated them, thereby increasing transparency. In a similar vein, AI-based deepfakes could be detected more efficiently by requiring content to be signed and tracing the SSI back to confirm the creator's identity, whether that creator is a real person or an AI.
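The sign-then-trace flow can be sketched with a deliberately tiny, textbook RSA keypair. This is for illustration only: the toy modulus offers no security, and real SSI wallets use keys such as Ed25519 or secp256k1:

```python
import hashlib

# Toy RSA parameters (textbook example; NOT production cryptography).
p, q = 61, 53
n = p * q                 # 3233 — the public modulus
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent
d = pow(e, -1, phi)       # private exponent via modular inverse (Python 3.8+)

def digest_int(content: bytes) -> int:
    # Hash the content, reduced modulo n so it fits the toy modulus.
    return int.from_bytes(hashlib.sha256(content).digest(), "big") % n

def sign(content: bytes) -> int:
    # The AI (or creator) signs the digest with its private key.
    return pow(digest_int(content), d, n)

def verify(content: bytes, signature: int) -> bool:
    # Anyone holding the public key (e, n) can check provenance.
    return pow(signature, e, n) == digest_int(content)

report = b"bias report generated by model X"
sig = sign(report)
# verify(b"tampered report", sig) would fail: the digest no longer matches.
```

In an SSI setting the public key would be resolved from the signer's DID document, so a valid signature ties the report to that identity, whether it belongs to a person or to an AI model.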

As AI4Gov will investigate the development of AI models for bias detection and policy-making, it is very important that these models can easily be linked both to the identities of their creators and to those who invoke instances of these models for decision-making. This ability establishes both ownership and accountability, two of the main goals of SSI.

Decentralized Applications (dApps) under SSI

AI4Gov will offer organisations the ability to run smart contracts that execute mutually endorsed business logic to process data and generate bias reports and policy recommendations. These smart contracts will be accessed via decentralized applications (dApps). To ensure that these dApps are installed and run in a manner the user fully controls, they will be implemented using SSI; such dApps are often referred to as SSApps and are a key concept of the OpenDSU framework, which AI4Gov will utilise.

An SSApp allows users to control not only their data but the entire execution environment. In an SSApp, both the running code and the referenced data are anchored under the user's identity, giving the user complete control over what is stored as well as over exactly what is installed and run on their device.
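A minimal sketch of this dual anchoring, with a hypothetical in-memory registry standing in for OpenDSU's anchoring service: both the code bundle's digest and the data digests are recorded under the same DID, so the holder alone decides what runs and what is stored:

```python
import hashlib

class SSAppAnchorRegistry:
    """Hypothetical anchor registry: maps a DID to the digests of the
    code bundle and data it has endorsed. In OpenDSU terms these would
    live in anchored DSUs; here a plain dict stands in for the ledger."""
    def __init__(self):
        self._anchors = {}

    def anchor(self, did: str, kind: str, blob: bytes) -> str:
        digest = hashlib.sha256(blob).hexdigest()
        self._anchors.setdefault(did, []).append((kind, digest))
        return digest

    def anchored(self, did: str):
        return self._anchors.get(did, [])

reg = SSAppAnchorRegistry()
user = "did:example:holder"
reg.anchor(user, "code", b"ssapp-bundle-v1.0")   # what runs on the device
reg.anchor(user, "data", b"user-dataset-2023")   # what the app stores

kinds = [k for k, _ in reg.anchored(user)]
```

Before launching the SSApp, the device would recompute the bundle's digest and refuse to run anything the holder's identity has not anchored.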

EBSI and Self-Sovereign Identity

Recognizing the transformative potential of blockchain technology, the European Union (EU) has taken significant strides in establishing the European Blockchain Services Infrastructure (EBSI). Created in 2018, EBSI is the EU's “official” blockchain infrastructure: it operates nodes across EU countries with the goal of offering its services to organisations and citizens throughout Europe. Its fundamental objective is to foster trust, transparency, and security in digital transactions while promoting interoperability across sectors, and it stands as a testament to Europe's commitment to innovation and digitalization.

Although the main use cases demonstrated so far fall under the credential-verification category, EBSI is also expected to be adopted for use cases involving supply chains, trusted environments, and more. One initiative that seeks to integrate SSI with EBSI is the European Self-Sovereign Identity Framework (ESSIF), which aims to become one of the Connecting Europe Facility (CEF) building blocks. ESSIF's main objective is to implement SSI capability by leveraging EBSI. AI4Gov will implement its wallet application following the ESSIF and SSApp principles and will demonstrate that the wallet satisfies the EBSI conformance test requirements.

In this manner, AI4Gov aspires to realise one of the first use cases demonstrating that AI and SSI can be integrated in a way that is ready for adoption on the EBSI infrastructure.

The EBSI/ESSIF framework. (Figure by the European Commission)

The post Leveraging Self-Sovereign Identity and European Blockchain Services Infrastructure for Transparency in AI (UBI) first appeared on AI4Gov.

]]>
799