POLICY 5: AI ETHICS & MODERATION POLICY

Transparency Version
5.1 Purpose
5.1.1 CRU uses artificial intelligence ("AI") and automated tools to support content moderation, abuse prevention, and platform integrity.
5.1.2 This Policy explains:
	•	(a) How AI is used
	•	(b) Where human oversight exists
	•	(c) What safeguards are implemented
	•	(d) How users may challenge automated decisions
5.1.3 This Policy aligns with:
	•	(a) Information Technology Act, 2000
	•	(b) Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021
	•	(c) Digital Personal Data Protection Act, 2023
5.2 Role of AI on CRU
5.2.1 AI systems are used to assist in:
	•	(a) Detecting policy violations
	•	(b) Identifying hate speech, incitement, or harmful Content
	•	(c) Providing contextual information and evidence-based discussion support
	•	(d) Detecting spam, bots, or coordinated manipulation (including reputation gaming)
	•	(e) Identifying political advertisements (where applicable)
	•	(f) Calculating Civic Credibility Scores and ranking signals
	•	(g) Prioritizing moderation review queues
5.2.2 AI assists moderation and supports civic discourse quality — it does not replace human judgment in complex or sensitive cases.
5.3 Automated Decision-Making
5.3.1 Certain automated actions may include:
	•	(a) Temporary content flagging
	•	(b) Reduced content visibility
	•	(c) Account risk scoring
	•	(d) Queue prioritization for review
5.3.2 Permanent account bans are not fully automated; human oversight is required before any permanent enforcement action.
5.4 Human Oversight
5.4.1 CRU maintains human review processes for:
	•	(a) Appeals
	•	(b) Complex political content
	•	(c) Context-sensitive decisions
	•	(d) Legal compliance matters
5.4.2 Users may request review by contacting: grievance@cruvels.com
5.5 Political Neutrality Commitment
5.5.1 CRU does not use AI to promote or suppress specific political ideologies.
5.5.2 AI systems are designed to:
	•	(a) Enforce behavioral rules
	•	(b) Detect harmful conduct
	•	(c) Identify unlawful material
5.5.3 Moderation decisions are based on rule violations, not political alignment.
5.6 Bias Mitigation
5.6.1 CRU implements the following safeguards:
	•	(a) Regular review of moderation outcomes
	•	(b) Manual override capability for AI decisions
	•	(c) Escalation for high-impact or sensitive decisions
	•	(d) Internal review of politically sensitive or controversial Content actions
	•	(e) Civic Credibility Score audits to ensure content-neutral operation
5.6.2 No system is entirely free from bias or error; CRU therefore maintains continuous evaluation, transparency, and improvement of its moderation systems.
5.7 Transparency in Enforcement
5.7.1 Users may:
	•	(a) Receive notice of content removal (where appropriate)
	•	(b) Appeal moderation decisions
	•	(c) Request clarification on enforcement
5.7.2 CRU may publish aggregate transparency reports summarizing moderation activity.
5.8 Limitations of AI Systems
5.8.1 AI systems may:
	•	(a) Misinterpret sarcasm
	•	(b) Misclassify context
	•	(c) Incorrectly flag lawful content
5.8.2 CRU encourages users to appeal if errors occur.
5.9 Data Usage in AI Systems
5.9.1 AI moderation tools may process:
	•	(a) Publicly posted content
	•	(b) Account metadata
	•	(c) Behavioral patterns
5.9.2 AI systems do not access private device data or unrelated personal information.
5.9.3 Data processing complies with the CRU Privacy Policy.
5.10 No Automated Political Manipulation
5.10.1 CRU does not:
	•	(a) Use AI to manipulate voter behavior or electoral outcomes
	•	(b) Use AI to micro-target Users based on inferred political ideology or beliefs
	•	(c) Sell political behavioral profiles or User data to political entities
	•	(d) Use Civic Credibility Scores to favor or suppress political viewpoints
5.10.2 Sponsored political content (where applicable) is labeled, disclosed, and regulated separately under the Political Advertising & Sponsored Content Policy.
5.11 Regulatory Cooperation
5.11.1 CRU may:
	•	(a) Provide information to lawful authorities
	•	(b) Adjust AI systems to comply with evolving regulations
	•	(c) Update policies as required by law
5.12 Continuous Improvement
5.12.1 CRU commits to:
	•	(a) Updating moderation systems responsibly
	•	(b) Monitoring emerging risks
	•	(c) Improving fairness and transparency
5.13 Policy Updates
5.13.1 This Policy may be updated periodically.
5.13.2 Continued use of CRU constitutes acceptance of revisions.