June 25, 2025

Why AI Ethics Matter for Impact Brands

Artificial intelligence (AI) is revolutionizing how we work, but for B Corps and sustainability-focused organizations, the stakes are higher, especially when developing an Ethical AI Policy that aligns with core values. When your mission is rooted in equity and transparency, your AI use must reflect that same integrity. The goal should not be to replace human labor with AI, but to give employees tools that make their contributions more equitable.

Here’s how an organization can build an Ethical AI Policy that meets the moment.

Step 1: Ground Your Ethical AI Policy in Purpose and People

Every AI policy should start with the question: What values are we protecting? Organizations should frame AI as a tool, not a replacement—one that supports mission-driven work without diminishing the human contribution.

Pro Tip: If your organization has a certification such as 1% For The Planet, B Corp, Women Owned Business, Fairtrade, etc., use the principles from that framework to shape your AI policy – focusing on equity, sustainability, and transparency.

Example: A sustainability-focused nonprofit might create a checklist to evaluate how any proposed AI use aligns with its community impact goals and mission values.

Step 2: Address Equity and Bias from the Outset

AI systems have a bias problem. For values-led companies, it’s essential to explicitly address and mitigate this in your AI use. Equity-seeking groups, Indigenous communities, and historically marginalized populations must be prioritized in your ethical assessments.

Social Impact AI Policy Tip: Examine how AI-generated outputs may impact representation in visuals, messaging, or data interpretation.

Example: A climate tech firm paused all AI-generated image use until it could confirm bias mitigation in demographic prompts.

Step 3: Prioritize Transparency and Attribution

An Ethical AI Policy should reinforce that responsible use of AI requires openness. Organizations should make clear, internally and externally, when AI has supported content creation or research. No AI-generated content should be used without human review and acknowledgment.

Example: A consulting firm includes AI contribution notes in its project files and ensures every public deliverable includes human validation (e.g., “This article was drafted with 80% human-led content and 20% AI-generated content.”).

Step 4: Ensure Human Oversight and Ethical Judgment

AI should augment – not replace – strategic and ethical judgment. Require at least two team members to review any content that integrates AI, particularly before it’s shared externally.

Example: A mission-driven health brand added a mandatory double-review system to screen AI-assisted communications for tone, accuracy, and community sensitivity.

Step 5: Safeguard Data, Privacy, and Proprietary Knowledge

Strong data privacy and protection measures must be enforced. Avoid inputting sensitive client or stakeholder data into public AI tools, and only use platforms that meet your privacy standards. For example, paid enterprise tools like Microsoft Azure OpenAI or private instances of Anthropic’s Claude offer data encryption and commitments not to train on customer inputs. In contrast, free versions of AI platforms like ChatGPT or Google Gemini may store prompts and are not suitable for any content containing sensitive or proprietary data.

Tip for B Corps: Align your AI data practices with your existing governance and transparency commitments.

Example: A social enterprise used placeholder terms in its prompt engineering and confined prompt history to a closed, internal environment.
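
To make that concrete, here is a minimal sketch of what a placeholder approach could look like. The term map, helper functions, and sample prompt below are hypothetical, not drawn from any specific organization or platform.

```python
# Minimal sketch: swap sensitive terms for neutral placeholders before a prompt
# leaves your environment, then restore them in the AI output locally.
# The term map and sample prompt are illustrative assumptions.

SENSITIVE_TERMS = {
    "Acme Community Fund": "CLIENT_A",
    "Riverside Housing Project": "PROJECT_1",
}

def redact(text: str) -> str:
    """Replace sensitive terms with placeholders before sending a prompt."""
    for term, placeholder in SENSITIVE_TERMS.items():
        text = text.replace(term, placeholder)
    return text

def restore(text: str) -> str:
    """Swap placeholders back after receiving the AI response."""
    for term, placeholder in SENSITIVE_TERMS.items():
        text = text.replace(placeholder, term)
    return text

prompt = redact("Summarize the Q2 update for Acme Community Fund.")
# prompt is now "Summarize the Q2 update for CLIENT_A."
# Send `prompt` to your approved AI platform, then run the reply through restore().
```

The key point is that sensitive names never leave your environment; only neutral placeholders appear in the prompt and in any stored history.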

Step 6: Track and Offset the Carbon Footprint of AI

AI’s environmental cost is real. Organizations committed to climate action should include AI-related energy use in their sustainability tracking and offset those emissions annually.

Carbon Impact of AI Tip: Add AI emissions to your environmental impact reporting and consider offsetting through certified partners.

Example: One impact consultancy estimated its AI-driven emissions based on usage hours and carbon intensity, then offset through a verified tree planting initiative.
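
For illustration, a usage-based estimate can be as simple as multiplying hours of AI use by an assumed power draw and your grid’s carbon intensity. The figures in this sketch are placeholder assumptions to show the arithmetic, not measured values for any particular tool.

```python
# Minimal sketch of a usage-based emissions estimate.
# All figures are placeholder assumptions for illustration only.

usage_hours_per_year = 500        # estimated team-wide AI usage (assumption)
assumed_power_kw = 0.4            # assumed average draw per active hour, in kW (assumption)
grid_intensity_kg_per_kwh = 0.4   # assumed grid carbon intensity, kg CO2e/kWh (assumption)

energy_kwh = usage_hours_per_year * assumed_power_kw
emissions_kg = energy_kwh * grid_intensity_kg_per_kwh

print(f"Estimated AI energy use: {energy_kwh:.0f} kWh/year")
print(f"Estimated emissions to offset: {emissions_kg:.0f} kg CO2e/year")
```

Even a rough figure like this gives your sustainability team a number to track year over year and to offset through certified partners.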

Step 7: Keep the Policy a Living Document

An Ethical AI Policy must evolve alongside the tools it governs. Build in regular review cycles and invite staff to share observations and ethical concerns about AI use.

Example: A foundation revised its AI policy quarterly, based on employee input and new legal standards around data use and copyright.

 

Curious about how ethical AI can align with your brand values? Let’s talk. You can also read Yulu’s own AI Policy to see how we approach equity, transparency, and climate impact in our use of technology.