Last Updated: 19 January 2026
Our Commitment to Responsible AI
The Parkfield Collective (a trade name of Parkfield Commerce, operating under Vellum Technologies Ltd - Teicneolaíochtaí Vellum Teoranta and its subsidiaries) believes that artificial intelligence and machine learning technologies should be developed and deployed responsibly, ethically, and transparently. As we integrate AI into our services and operations, we commit to upholding the highest standards of ethical practice.
We are pursuing ISO 42001 certification (AI Management System) to demonstrate our commitment to responsible AI governance.
Governance Structure
AI governance and oversight are provided by:
- AI Governance Officer
- Data Protection Officer
- Executive Accountability: Alex Zachman, CEO of Vellum
- Security and Compliance Committee: provides oversight of all AI systems and practices
Our AI Principles
1. Transparency and Explainability
We believe people have the right to understand how AI systems work and affect them.
- We clearly disclose when AI is being used in our services or on our website
- We explain how AI systems make decisions that impact clients or users
- We provide documentation on AI model capabilities and limitations
- We avoid "black box" implementations where decisions cannot be explained
In Practice:
- When we use AI for automation, analytics, or recommendations, we document the logic and reasoning
- Clients receive clear explanations of AI-driven outputs
- We maintain human oversight for significant decisions
- We provide mechanisms to question or challenge AI-generated results
- Our website AI chatbot (powered by Mistral AI) clearly identifies itself as AI
2. Privacy and Data Protection
We protect personal data and respect individual privacy in all AI applications.
- AI systems are designed with privacy by default and by design
- We minimise data collection to what's necessary for specific purposes
- We never train AI models on client data without explicit consent
- We implement strong data governance and security measures
In Practice:
- We anonymise or pseudonymise data used for AI training where possible
- We maintain clear data lineage and usage logs
- We comply with GDPR, PIPEDA, CCPA, and other privacy regulations
- We conduct Data Protection Impact Assessments (DPIAs) for high-risk AI applications
- We provide opt-out mechanisms where feasible
- Chatbot conversations are retained for 90 days for quality improvement, then anonymised
- We leverage the Proton ecosystem for encrypted AI-related communications involving PII
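To make the pseudonymisation practice above concrete, here is a purely illustrative sketch (not our production code); the key value and field names are hypothetical. Keyed hashing replaces a direct identifier with a stable token so that records can still be linked for analysis without exposing the identifier itself:

```python
import hashlib
import hmac

# Hypothetical key for illustration only; in practice a secret like this
# would be generated securely and stored in a secrets manager.
PSEUDONYM_KEY = b"example-key-not-real"

def pseudonymise(value: str) -> str:
    """Replace a direct identifier with a stable, non-reversible token.

    HMAC-SHA256 keeps the mapping consistent across records (so joins
    still work) while preventing reversal without the key.
    """
    return hmac.new(PSEUDONYM_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

# Example: strip the direct identifier from a record before AI-related use.
record = {"email": "user@example.com", "page_views": 14}
record["email"] = pseudonymise(record["email"])
```

Note that pseudonymised data can still be personal data under the GDPR if re-identification remains possible, which is why this technique complements, rather than replaces, the consent and minimisation commitments above.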
3. Fairness and Non-Discrimination
We actively work to prevent AI systems from perpetuating bias or discrimination.
- We assess AI systems for potential bias across protected characteristics
- We test models with diverse datasets and scenarios
- We monitor AI outputs for discriminatory patterns
- We prioritise fairness alongside accuracy in model evaluation
In Practice:
- We conduct bias audits during development and deployment
- We use diverse, representative datasets for training
- We implement fairness constraints in model design
- We provide mechanisms to contest or appeal AI-driven decisions
- We regularly review AI outputs for unintended discriminatory effects
4. Human Agency and Oversight
Humans should remain in control of AI systems and their outcomes.
- AI augments human decision-making rather than replacing it entirely
- Critical decisions always involve meaningful human review
- Users maintain the ability to override or appeal AI outputs
- We design systems that empower people, not replace them
In Practice:
- We implement human-in-the-loop processes for high-stakes decisions
- We provide clear escalation paths from AI to human support
- We train client teams to understand and effectively use AI tools
- We maintain kill switches and override mechanisms
- We document when and how human oversight occurs
- Our AI chatbot clearly offers the option to connect with human support
5. Accountability and Governance
We take responsibility for the AI systems we build and deploy.
- We maintain clear ownership and accountability for AI systems
- We establish governance structures for AI oversight
- We regularly audit AI systems for compliance and ethical alignment
- We have incident response procedures for AI failures or harms
In Practice:
- We assign accountability for each AI system to specific individuals
- We conduct regular AI ethics reviews and audits
- We maintain comprehensive documentation and audit trails
- We have established procedures for addressing AI-related incidents
- We investigate and respond to concerns about AI systems
- The Security and Compliance Committee provides AI governance oversight
6. Safety and Security
AI systems must be robust, secure, and safe for their intended purposes.
- We test AI systems rigorously before deployment
- We implement security measures to prevent adversarial attacks
- We monitor systems continuously for anomalous behaviour
- We maintain fail-safe mechanisms for critical applications
In Practice:
- We conduct security assessments specifically for AI components
- We implement adversarial testing and red team exercises
- We maintain version control and rollback capabilities
- We monitor AI system performance and degradation
- We have incident response plans for AI failures
- We use encrypted communication (Proton ecosystem) for AI development and deployment
7. Environmental Responsibility
We consider the environmental impact of AI systems.
- We optimise AI models for computational efficiency
- We select energy-efficient infrastructure where possible
- We balance model performance with environmental cost
- We consider carbon footprint in AI deployment decisions
In Practice:
- We use model compression and optimisation techniques
- We leverage serverless and auto-scaling infrastructure
- We monitor and report on AI system energy consumption
- We choose data centres with renewable energy commitments
- We avoid unnecessarily large or complex models
How We Use AI
In Our Services
We use AI to enhance our digital transformation and commerce services:
AI-Powered Automation
- Customer service chatbots and virtual assistants
- Procurement workflow automation
- Marketing sequence optimisation
- Inventory and demand forecasting
- Order processing and routing
AI-Enhanced Analytics
- User behaviour analysis and session replay insights (PostHog)
- Conversion rate optimisation recommendations
- Predictive analytics for business intelligence
- Anomaly detection in systems and data
- Performance monitoring and alerting
AI-Assisted Development
- Code generation and review assistance
- Testing and quality assurance automation
- Documentation generation
- Accessibility compliance checking
- Security vulnerability detection
Transparency: We clearly identify which deliverables include AI components and document their capabilities and limitations.
On Our Website
We use AI in the following ways:
- PostHog Analytics: Session replay with AI-powered insights into user behaviour
- AI-Powered Chatbot: Virtual assistant for basic inquiries (powered by Mistral AI)
- Content Recommendations: AI-suggested resources based on browsing behaviour
- Accessibility Tools: AI-assisted accessibility checking and remediation
What We Don't Do With AI
- No Hidden AI: We don't use AI to make decisions about people without disclosure
- No Surveillance: We don't use AI for invasive monitoring or tracking
- No Manipulation: We don't use AI to manipulate user behaviour unethically
- No Unauthorised Training: We don't train AI models on client data without permission
- No Sale of AI Insights: We don't sell insights derived from AI analysis of user data
- No Automated Discrimination: We don't use AI to make decisions that discriminate based on protected characteristics
AI Governance Framework
We maintain an AI governance framework that includes:
Risk Assessment
- Classification of AI systems by risk level (low, medium, high)
- Impact assessments before deployment
- Ongoing monitoring and evaluation
- Regular audits and compliance checks
- Documentation of risks and mitigation strategies
Documentation Standards
- AI system specifications and capabilities
- Data sources and training methodologies
- Model performance metrics and limitations
- Testing and validation procedures
- Incident response protocols
- Version control and change logs
Ethics Review Process
- Ethics review for new AI implementations
- Stakeholder consultation for high-risk systems
- Regular ethics audits of existing systems
- Continuous improvement based on feedback
- Security and Compliance Committee review of all AI deployments
Compliance Monitoring
- Alignment with ISO 42001 standards
- GDPR, PIPEDA, and CCPA compliance for AI systems
- Industry-specific regulatory compliance
- Adherence to emerging AI regulations (EU AI Act, etc.)
- Regular compliance assessments and reporting
ISO 42001 Certification
We are pursuing ISO 42001 certification (AI Management System), which demonstrates our commitment to:
- Systematic AI governance and oversight
- Risk-based approach to AI deployment
- Continuous improvement in AI practices
- Stakeholder engagement and transparency
- Compliance with legal and ethical requirements
Current Status: Certification expected in 2028
Third-Party AI Tools
When we use third-party AI tools or services, we:
- Conduct due diligence on vendor AI ethics and practices
- Review vendor compliance with privacy and AI regulations
- Evaluate algorithmic transparency and explainability
- Assess vendor incident response and accountability
- Maintain contractual protections for data and ethical use
- Monitor vendor practices on an ongoing basis
AI Vendors We Use:
- Mistral AI: Chatbot functionality on our website
- PostHog: AI-powered analytics and session insights
- Additional vendors listed in our Subprocessor List
Your Rights Regarding AI
You have the right to:
- Know when AI is being used to make decisions that affect you
- Receive an explanation of AI-driven outputs or recommendations
- Contest or appeal decisions made by AI systems
- Opt out of AI-based processing where feasible
- Request human review of AI-generated decisions
- Access your data used in AI systems (subject to technical feasibility)
- Be informed of the logic, significance, and consequences of AI processing
- Escalate from AI chatbot to human support
To exercise these rights: Contact us at ai-ethics@parkfieldcollective.com
Continuous Learning and Improvement
We recognise that AI ethics is an evolving field. We commit to:
- Staying informed on emerging AI ethics research and best practices
- Participating in industry discussions on responsible AI
- Regular training for our team on AI ethics and governance
- Updating our practices as technology and standards evolve
- Learning from incidents and near-misses
- Engaging with stakeholders on AI concerns
- Publishing transparency reports on AI use
- Quarterly reviews of AI practices by the Security and Compliance Committee
Reporting AI Concerns
If you have concerns about our use of AI or encounter issues with AI systems:
AI Ethics Contact: ai-ethics@parkfieldcollective.com
What to include:
- Description of the concern or issue
- Where/when you encountered it (e.g., website chatbot, service delivery)
- Impact or potential harm
- Suggestions for improvement (if any)
We will:
- Acknowledge receipt within 2 business days
- Investigate concerns thoroughly
- Provide a substantive response within 10 business days
- Take corrective action where appropriate
- Report back on actions taken
Ultimate accountability for AI systems deployed by Parkfield rests with:
AI Governance Officer: (Position to be appointed)
Data Protection Officer: (Position to be appointed)
Executive Accountability: Alex Zachman, CEO of Vellum
Governance Oversight: Security and Compliance Committee
Transparency Reporting
We commit to publishing an annual AI Transparency Report that includes:
- Overview of AI systems deployed
- Aggregate metrics on AI performance and incidents
- Updates on ethical AI initiatives and improvements
- Progress toward ISO 42001 certification
- Stakeholder feedback and actions taken
- Changes to AI governance practices
Latest Report: To be published in 2027
Contact Information
AI Ethics Inquiries:
Email: ai-ethics@parkfieldcollective.com
General Contact:
The Parkfield Collective
c/o Vellum Technologies Ltd
Pollexfen House, Wine Street
Sligo, F91 A3FD
Ireland
This Ethical AI Statement reflects our current practices and commitments. It will be reviewed and updated at least annually or as practices evolve.