Trustworthy Artificial Intelligence
Artificial Intelligence (AI) is transforming sectors across the economy, driving innovation and enhancing efficiency. However, to harness its full potential, AI-based systems and techniques must be developed in a safe, secure, and responsible manner. This requires clearly identifying risks and implementing preventative measures, in line with regulations such as the EU AI Act. The development of AI systems should focus not only on technological advancement but also on ethical considerations, societal impacts, and the requirements set by funding bodies such as the European Innovation Council (EIC).
Depending on the type of research being proposed, from fundamental studies to pre-competitive development, AI-based systems should be aligned with several critical objectives:
- Technical Robustness: AI systems must be technically resilient, accurate, and reproducible. They should not only provide robust performance but also be equipped to handle potential failures, inaccuracies, and errors. These features should be proportionate to the assessed risks posed by the AI system or technique.
- Social Responsibility: It is vital that AI technologies consider the context and environment in which they operate, ensuring they align with societal norms and values. This aspect emphasizes the importance of cultural considerations and ethical frameworks in the deployment of AI solutions.
- Reliability: AI systems should operate as intended, minimizing unintentional harm and preventing unacceptable effects. They should provide suitable explanations for their decision-making processes, particularly where their decisions significantly affect people's lives.
All proposals related to the development, use, and deployment of AI-based systems must demonstrate that they uphold a variety of principles, including:
- Human Agency and Oversight: Ensuring that humans remain in control of AI systems.
- Fairness, Gender Neutrality, and Diversity: Actively promoting equality and preventing discrimination in AI applications.
- Transparency and Accountability: Maintaining clear communication of AI systems’ functions and decisions to stakeholders.
- Societal and Environmental Well-being: Focusing on sustainable and beneficial outcomes for society and the environment.
Technical robustness and ethical considerations are essential for securing EIC funding, which is vital for startups and SMEs working on AI innovations. The EIC Accelerator offers grant funding and, where needed, equity investment for startups engaged in deep-tech innovation, supporting their growth while ensuring adherence to responsible AI practices.
Technology Readiness Levels (TRLs)
Where the specific call/topic conditions require the use of Technology Readiness Levels (TRL), applicants must refer to the definitions outlined in the Glossary. TRLs measure the maturity of a technology, on a scale from basic research (TRL 1) to a system proven in operation (TRL 9), and the stated maturity of a proposal can significantly influence its evaluation in an EIC Accelerator application.
Ethics in AI Development
All projects aimed at developing AI systems must comply with ethical principles that reflect the highest standards of research integrity. This includes adhering to applicable EU, international, and national laws. As part of the application process for EIC funding, applicants must complete an ethics self-assessment.
Projects that present potential ethics issues undergo a rigorous ethics review before funding can be authorized. This review ensures that any ethical implications are appropriately addressed, in line with the EU's commitment to responsible innovation.
For more information, see How to complete your ethics self-assessment.
Applying for EIC Accelerator Funding
Applying for EIC Accelerator funding requires a deep understanding of the program's evaluation criteria, which include innovation potential, market opportunity, and the capacity of the team. Applicants should consider EIC Accelerator coaching services to strengthen their proposals and learn best practices for application writing. Aligning projects with the goals of the Horizon Europe EIC framework can also significantly improve the chances of securing a grant.
Startups should note the 2025 EIC Accelerator application deadlines to ensure timely submission. By leveraging resources such as case studies of successful EIC Accelerator startups and insights into the EIC Transition program, applicants can refine their approach and improve their proposals.
In conclusion, the responsible development of trustworthy AI is crucial for advancing technology while safeguarding societal values. By adhering to ethical standards and leveraging funding opportunities through the EIC, AI innovators can pave the way for a sustainable and equitable future.