Requirements for an AI Project
Successful AI projects require proper requirements management. This article identifies critical requirements throughout the AI project's lifecycle to ensure that the AI system is delivered in accordance with organisational goals and ethical standards.

Objectives vs. Requirements
Project objectives outline a project's expected outcomes or goals. They are broad statements that outline what the project wants to achieve while remaining consistent with the organisation's strategic goals. In contrast, project requirements describe the exact conditions or skills that must be met in order to attain these objectives. They are the actionable steps or criteria required to achieve the project's objectives. The link between objectives and requirements is hierarchical, with objectives providing the why and requirements detailing the what.
The SMART criteria, which stand for Specific, Measurable, Achievable, Relevant, and Time-bound, are commonly used to design objectives. Defining clear objectives is an important initial step in an AI project to ensure success and limit scope creep.
AI Impact Assessment
An impact assessment is crucial for managing AI system requirements, as it allows organisations to evaluate the potential effects of AI systems on individuals, society, and the environment before their deployment. This comprehensive process not only establishes robust requirements but also helps safeguard the benefits of AI technologies while avoiding or mitigating their risks, thus promoting the responsible development and use of AI. Important components are:
- Accountability: Assign an impact manager or ethics committee to oversee the evaluation of both positive and negative effects that AI systems might have. This includes identifying, evaluating, and implementing strategies to mitigate risks related to ethical considerations, such as privacy breaches, discrimination, and other societal harms.
- Robust risk management: Implement structured risk management frameworks specifically designed for AI technologies. These should address both the likelihood and severity of potential impacts, enabling proactive management of risks. Classifying AI risks, as required by regulations like the EU AI Act, is a fundamental aspect of this process.
- Iterative process: Approach impact assessments as an ongoing process rather than a one-time event. Regular updates and reviews can adapt to new information or changes in technology and societal expectations, maintaining relevance and effectiveness.
- Adoption of international standards: Align the impact assessment with international standards and guidelines for AI to ensure consistency and quality.
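The risk-management components above can be captured in a lightweight risk register. The sketch below is illustrative only: the RiskTier names echo the EU AI Act's risk categories, while RiskEntry, ImpactAssessment, the 1-5 scales, and the likelihood-times-severity score are hypothetical simplifications, not a prescribed framework.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    """Risk tiers loosely following the EU AI Act classification."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class RiskEntry:
    description: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    severity: int     # 1 (negligible) .. 5 (critical)
    mitigation: str = "TBD"

    @property
    def score(self) -> int:
        # Simple likelihood x severity score; real frameworks often
        # use calibrated scales rather than a plain product.
        return self.likelihood * self.severity


@dataclass
class ImpactAssessment:
    system_name: str
    tier: RiskTier
    risks: list[RiskEntry] = field(default_factory=list)

    def top_risks(self, n: int = 3) -> list[RiskEntry]:
        """Highest-scoring risks first, to prioritise mitigation work."""
        return sorted(self.risks, key=lambda r: r.score, reverse=True)[:n]


# Hypothetical assessment for a high-risk system.
assessment = ImpactAssessment("loan-scoring-model", RiskTier.HIGH)
assessment.risks.append(RiskEntry("Discriminatory outcomes", 3, 5, "bias audit"))
assessment.risks.append(RiskEntry("Privacy breach via logs", 2, 4, "log redaction"))
print(assessment.top_risks(1)[0].description)
```

Because the assessment is meant to be iterative, re-scoring the register at each review cycle keeps the ranking current as mitigations land.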
Requirements Throughout the Lifecycle
1. Project Team
Assembling a diverse team and defining clear roles and responsibilities for every team member are key steps in an AI project to avoid bias, drive transparency, and ensure compliance.
2. Data Requirements
Data requirements are the cornerstone of building effective and reliable AI systems. These directly affect the accuracy and reliability of AI applications, impacting everything from user trust to regulatory compliance. By rigorously addressing these requirements, developers can mitigate risks, enhance performance, and uphold ethical standards, ensuring that AI systems perform as intended in diverse and dynamic environments. Some important data-related requirements are:
- Data quality
- Accuracy: Ensure the data is precise, correct, and free from errors to establish a trustworthy foundation for AI models.
- Completeness: Address missing values and ensure the dataset is comprehensive, filling gaps that could impair model training.
- Consistency: Maintain consistency in data over time and across different sources to ensure stable and predictable model performance.
- Bias minimisation: Actively implement methods to reduce biases in the dataset, fostering fairness and enhancing the ethical stature of AI systems.
- Data quantity
- Volume: Gather a sufficient amount of data to effectively train models, ensuring they can handle varied and complex scenarios.
- Data minimisation: Collect only the data necessary for the project's objectives to maintain efficiency and respect user privacy.
- Diversity: Ensure the dataset reflects a variety of scenarios and cases the model will encounter, supporting the creation of versatile and adaptive AI solutions.
- Data management
- Data governance: Establish governance policies to manage data quality, access, storage, and compliance, ensuring organisational and regulatory standards are met.
- Data privacy and security: Implement robust security measures to protect data privacy, adhering to regulations such as GDPR.
- Consent management: Ensure that data is gathered with informed consent, and that the consent management process is robust and in accordance with relevant privacy regulations.
- Data storage and accessibility: Store data in a secure, yet accessible manner, balancing security with ease of use.
- Data preprocessing
- Cleaning: Remove noise and correct inconsistencies in the dataset to enhance the accuracy and reliability of the outputs.
- Normalising: Scale input variables to a standard range to ensure models operate under consistent conditions.
- Data augmentation: Enhance the dataset with artificially created data based on existing information to improve model robustness against unforeseen variables.
- Ethical considerations
- Fairness: Establish requirements for data collection procedures to be representative and devoid of biases that may affect fairness, ensuring that AI applications treat all users equitably.
- Transparency: Provide detailed documentation on data origins, processing methods, and algorithms used, making it possible to trace back the inputs and operations that lead to specific outputs, thereby upholding the integrity and explainability of AI systems.
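As one way to make the data-quality requirements above testable, here is a minimal sketch of automated checks for completeness and duplicate records. The completeness and duplicate_count helpers and the sample records are hypothetical; a real pipeline would typically lean on a dedicated data-validation library.

```python
def completeness(records, fields):
    """Fraction of non-missing values per field (None counts as missing)."""
    n = len(records)
    return {f: sum(r.get(f) is not None for r in records) / n for f in fields}


def duplicate_count(records, key_fields):
    """Number of records whose key repeats an earlier record's key."""
    seen = set()
    dupes = 0
    for r in records:
        key = tuple(r.get(f) for f in key_fields)
        if key in seen:
            dupes += 1
        seen.add(key)
    return dupes


# Hypothetical sample: one missing value and one duplicate identifier.
data = [
    {"id": 1, "age": 34, "income": 52000},
    {"id": 2, "age": None, "income": 61000},   # missing age
    {"id": 2, "age": 29, "income": 61000},     # duplicate id
]

report = completeness(data, ["age", "income"])
print(report["age"])                   # 2 of 3 ages present
print(duplicate_count(data, ["id"]))   # one repeated key
```

Checks like these can run as gates in the data pipeline, so incomplete or duplicated batches are rejected before they reach model training.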
3. Algorithm Selection
Algorithm selection involves a set of important requirements meant to align the training algorithms with the project's specific demands. By following these guidelines, project teams can ensure that the chosen model is technically capable while remaining consistent with broader corporate objectives, ethical standards, and operational constraints. The key requirements include:
- Understanding the problem and data: Project teams must identify the type of problem (e.g., classification, regression, clustering) and thoroughly analyse the characteristics of the data, including size, quality, dimensionality, and linearity, to ensure the chosen algorithm is suitable for its task.
- Specific algorithm assumptions: Algorithms selected must align with the inherent assumptions regarding the data they are intended to process, including assumptions about feature independence and distribution types, to optimise model accuracy and performance.
- Performance and complexity: Evaluate and document the balance between algorithm complexity and expected performance, considering the impact on computational resources and implementation feasibility.
- Computational efficiency: Assess and choose algorithms based on their training duration, prediction speed, and memory requirements, ensuring compatibility with existing computational resources to avoid operational bottlenecks.
- Efficiency and scalability: Select algorithms that demonstrate efficient scalability in handling increasing volumes and complexities of data, maintaining performance without significant degradation.
- Robustness: Prioritise the selection of algorithms that are robust and error-tolerant under varying operational conditions and data inputs.
- Explainability: Opt for algorithms that provide sufficient explainability of decisions, particularly in regulated industries, to fulfil transparency requirements and facilitate easier interpretation of model outcomes.
- Ethical and legal considerations: Ensure that all algorithms undergo a bias assessment and are chosen based on their ability to promote fairness and comply with applicable legal standards.
- Accountability: Maintain comprehensive documentation of the algorithm selection process, including justifications for decisions made, to support accountability and facilitate review processes.
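To ground the selection in evidence, one common approach is to compare candidate models by k-fold cross-validation accuracy. The sketch below is illustrative: the majority-class baseline and the one-rule threshold model are toy candidates, and the helpers (k_fold_indices, cv_score) are simplified stand-ins for library routines.

```python
import statistics


def k_fold_indices(n, k):
    """Yield (train, test) index lists for k-fold cross-validation."""
    fold = n // k
    idx = list(range(n))
    for i in range(k):
        test = idx[i * fold:(i + 1) * fold]
        train = idx[:i * fold] + idx[(i + 1) * fold:]
        yield train, test


def cv_score(model_fit, X, y, k=5):
    """Mean accuracy of a fit function across k folds."""
    scores = []
    for train, test in k_fold_indices(len(X), k):
        predict = model_fit([X[i] for i in train], [y[i] for i in train])
        correct = sum(predict(X[i]) == y[i] for i in test)
        scores.append(correct / len(test))
    return statistics.mean(scores)


def majority_fit(X, y):
    """Baseline: always predict the most frequent training label."""
    majority = max(set(y), key=y.count)
    return lambda x: majority


def threshold_fit(X, y):
    """One-rule model: best single threshold on feature 0."""
    best = max(
        (sum((x[0] > t) == lab for x, lab in zip(X, y)), t)
        for t in (x[0] for x in X)
    )[1]
    return lambda x: x[0] > best


# Hypothetical linearly separable data: class 1 iff feature >= 10.
X = [[i] for i in range(20)]
y = [0] * 10 + [1] * 10

candidates = {"majority": majority_fit, "threshold": threshold_fit}
scores = {name: cv_score(fit, X, y, k=5) for name, fit in candidates.items()}
best_name = max(scores, key=scores.get)
print(best_name)
```

Recording the per-candidate scores alongside the final choice also serves the accountability requirement: the selection is documented and reproducible.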
4. Computational Resources
When developing AI models, selecting the appropriate hardware and infrastructure is crucial not only to meet the computational demands but also to optimise performance and ensure data privacy.
- Hardware and infrastructure specifications: Evaluate and select between edge computing, cloud computing, or on-premise solutions based on the specific computational needs, budget constraints, and real-time processing requirements of the AI project. Ensure that the chosen infrastructure can efficiently support the expected load and scalability demands.
- Data privacy and security measures: Implement end-to-end encryption for all data in transit and at rest. Establish robust access control measures, including multi-factor authentication and role-based access controls, to ensure only authorised personnel can access sensitive data. Define and adhere to strict data handling and storage protocols to comply with relevant data protection regulations.
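Role-based access control, mentioned above, can be sketched as a deny-by-default permission check. The roles, permission strings, and PERMISSIONS mapping below are hypothetical; a production system would source roles and grants from an identity provider rather than hard-code them.

```python
from enum import Enum, auto


class Role(Enum):
    DATA_SCIENTIST = auto()
    ML_ENGINEER = auto()
    AUDITOR = auto()


# Illustrative role-to-permission mapping.
PERMISSIONS = {
    Role.DATA_SCIENTIST: {"read:training_data", "write:experiments"},
    Role.ML_ENGINEER: {"read:training_data", "deploy:model"},
    Role.AUDITOR: {"read:audit_logs"},
}


def is_allowed(role: Role, action: str) -> bool:
    """Deny by default: an action is allowed only if the role grants it."""
    return action in PERMISSIONS.get(role, set())


print(is_allowed(Role.AUDITOR, "read:audit_logs"))
print(is_allowed(Role.AUDITOR, "deploy:model"))
```

The deny-by-default design matters: an unknown role or an unlisted action yields False, so configuration gaps fail closed rather than open.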
5. Model Development and Testing
Developing an AI model requires a thorough approach to ensure it is effective, robust, reliable, and ethically compliant. Addressing these key requirements is essential for reducing risks in AI implementations and developing models for practical use.
- Model development
- Identify critical features: Systematically identify and engineer key features from the data that significantly enhance the model's predictive capabilities.
- Algorithm development:
- Customise algorithms: Develop or tailor algorithms specifically to meet the unique demands of the project.
- Optimise performance: Enhance algorithm efficiency to handle diverse datasets and maintain optimal performance across various scenarios.
- Privacy and oversight:
- Implement Privacy Enhancing Technologies: Incorporate technologies such as differential privacy, which adds random noise to datasets, to safeguard individual privacy.
- Ensure human oversight: Integrate continuous human monitoring during both the development and testing phases to validate the AI's decisions and correct errors or biases early in the process.
- Testing
- Robust testing framework: Establish a comprehensive testing strategy that includes stress testing and adversarial testing to assess the model's resilience and reliability under challenging conditions.
- Test for edge cases: Ensure the model is capable of handling unusual or unexpected inputs effectively.
- Verification and validation:
- Verification: Confirm that the model meets all initial specifications.
- Validation:
- Implement validation techniques: Use rigorous validation methods, such as k-fold cross-validation, to evaluate the model's performance on unseen data and ensure it does not overfit.
- Performance metrics: Apply various metrics, including accuracy, precision, recall, F1 score, and area under the ROC curve, to thoroughly evaluate the model's performance.
- Model generalisation: Test the model with a separate validation set to verify its effectiveness on new, unseen data.
- Ensure ethical and legal compliance: Regularly review the model against current ethical standards and legal regulations, such as GDPR, to prevent biases and guarantee compliance.
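The performance metrics listed above all derive from a binary confusion matrix. The sketch below computes accuracy, precision, recall, and F1 from scratch; the labels and predictions are made up for illustration, and in practice a library such as scikit-learn would supply these metrics.

```python
def confusion(y_true, y_pred):
    """Counts for binary labels: (tp, fp, fn, tn)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, fp, fn, tn


def metrics(y_true, y_pred):
    """Standard binary-classification metrics from the confusion counts."""
    tp, fp, fn, tn = confusion(y_true, y_pred)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}


# Hypothetical validation-set labels and model predictions.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]

m = metrics(y_true, y_pred)
print(m["precision"], m["recall"], m["f1"])
```

Reporting several metrics together matters on imbalanced data, where accuracy alone can look high while recall on the minority class is poor.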
6. System Integration and Deployment
System integration and deployment ensure that the developed models work seamlessly with existing systems and are successfully implemented in production contexts. This includes clearly communicating the model's decisions and behaviours to end users. This clarity is required for end users to properly interact with and trust the new model.
- Human oversight: The system design must include features for human intervention and oversight after deployment. This is crucial to address and mitigate any issues or errors with the AI system once it is live.
7. Monitoring, Reporting, and Maintenance
Effective monitoring, reporting, and maintenance are fundamental to maintaining the operational integrity and compliance of any AI system. These processes are not only essential for ensuring long-term reliability and effectiveness but are also critical for meeting regulatory requirements and managing risks. Key activities are:
- Performance monitoring and optimisation: Continuous evaluation and optimisation of the system based on predefined metrics and user feedback.
- Continuous human oversight: Assign qualified personnel to monitor the system continuously to ensure it operates within designated parameters and adheres to all ethical guidelines. Identify critical stages requiring enhanced human oversight.
- Documentation: Maintain comprehensive records of the system development process, including configurations and training data. Document all updates and modifications to the system, ensuring records are up-to-date and accurately reflect current operations.
- Reporting: Generate regular reports detailing system performance, decision-making processes, and any incidents of non-compliance. Ensure that all reports are accessible for audit purposes and comply with relevant legal and regulatory frameworks.
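Continuous performance monitoring with an escalation trigger might look like the following sketch: a rolling accuracy window that flags the system for human review when accuracy drops below a floor. The PerformanceMonitor class, window size, and threshold are illustrative assumptions, tuned in practice to traffic volume and risk tolerance.

```python
from collections import deque


class PerformanceMonitor:
    """Track a rolling accuracy window and flag drops below a floor."""

    def __init__(self, window: int = 100, threshold: float = 0.9):
        self.outcomes = deque(maxlen=window)
        self.threshold = threshold

    def record(self, correct: bool) -> None:
        """Log whether one prediction was judged correct."""
        self.outcomes.append(correct)

    @property
    def rolling_accuracy(self) -> float:
        if not self.outcomes:
            return 1.0
        return sum(self.outcomes) / len(self.outcomes)

    def needs_review(self) -> bool:
        """Escalate to human oversight once a full window dips under the floor."""
        return (len(self.outcomes) == self.outcomes.maxlen
                and self.rolling_accuracy < self.threshold)


monitor = PerformanceMonitor(window=10, threshold=0.8)
for outcome in [True] * 7 + [False] * 3:   # 70% accuracy over the window
    monitor.record(outcome)
print(monitor.needs_review())
```

Waiting for a full window before escalating avoids alerting on the first few predictions, at the cost of slower detection; shorter windows trade the other way.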
Key Challenges in Managing AI Project Requirements
Requirement management in AI projects presents unique challenges due to the complex nature of AI technologies, dynamic development environments, and specialised demands. These challenges include dealing with ambiguous and incomplete requirements, trade-offs between requirements, frequent scope changes, communication barriers across diverse teams, significant data-related issues, and difficulties integrating AI with existing systems. Additionally, ethical and legal considerations, the high technical complexity, over-reliance on AI capabilities, and resistance to organisational change further complicate the management process. These factors underscore the necessity for robust, adaptive, and clear requirement management strategies specifically designed for AI projects to ensure their successful implementation and adherence to ethical and regulatory standards.
Example Trade-Offs Between Requirements
In AI projects such as autonomous vehicles, there is a classic trade-off between accuracy and speed. Achieving high accuracy in object recognition is crucial for safety, but it can slow down processing. Product managers must find the right balance between the two to ensure timely decisions on the road. This illustrates a core challenge in AI development: improving one aspect can compromise another. Striking the balance requires understanding the application's needs, the AI model's capabilities, and the hardware's limitations.
Conclusion
By structuring requirements around the life cycle of an AI project, we gain a deeper understanding of what the project and the AI system must meet. This approach reduces risks while delivering effective, long-term AI solutions. In practice, this is an iterative process in which we frequently begin with a so-called baseline model.
Are you ready to unleash the full potential of artificial intelligence for your organisation? Look no further. Our consulting service is your personal guide through the complexity of AI project management and execution. With expertise spanning every phase, we're here to ensure your AI initiatives soar to new heights. Contact us today to transform your AI aspirations into tangible outcomes.