§ 01 Purpose
This policy sets out how LumynaX (Te Mārama) is permitted to be used. It applies to every customer, every deployment, and every user of the Service — alongside our Terms of Service and any enterprise agreement.
§ 02 Principles
- Dignity. Do not use the Service to demean, dehumanise, or systematically degrade any person or group.
- Accuracy. Do not pass off model outputs as fact without verification, especially in high-stakes contexts.
- Transparency. Tell people when they are interacting with an AI system, where the law or context requires it.
- Accountability. Keep a human in the loop for consequential decisions. Audit logs exist for a reason; use them.
§ 03 Prohibited uses
You must not use LumynaX, directly or indirectly, to:
- Develop, enhance, or direct weapons of mass destruction, autonomous lethal systems, or malware intended to disable critical infrastructure.
- Generate sexual content involving minors, non-consensual intimate imagery, or content intended to harass a specific individual.
- Plan or coordinate violence, terrorism, or the systematic suppression of political dissent.
- Produce political-campaign deepfakes or synthetic media intended to deceive voters.
- Operate covert influence campaigns or manufacture grassroots-appearing content at scale.
- Make fully automated decisions in domains where law or ethics require human judgement — including medical diagnosis, judicial sentencing, immigration determinations, and employment dismissal.
- Scrape personal data, reconstruct training corpora, or extract model weights.
§ 04 High-risk uses
The following domains require a signed high-risk deployment agreement and additional safety review before launch:
- Healthcare — diagnosis support, triage, patient communication.
- Legal — drafting that will be filed in court, regulatory submissions.
- Finance — credit decisions, anti-money-laundering, automated trading.
- Education — assessment of minors, admissions decisions.
- Government — benefits eligibility, border control, law enforcement.
§ 05 Disclosure
When you deploy LumynaX in a user-facing context, you must disclose the AI's role clearly — either at the start of the interaction or in a persistent surface that a reasonable user would find. "Powered by LumynaX" is welcome but not sufficient for jurisdictions that require express AI disclosure.
§ 06 Safety review
Enterprise deployments include a pre-launch safety review: threat modelling, red-team access, and a rollback plan. For consumer-facing products we may request evidence of safety testing, including how you handle jailbreak attempts, prompt injection, and minors who may bypass age gates.
§ 07 Enforcement
Violations may result in rate limiting, suspension, or termination of access. We report illegal activity to the relevant authorities. We publish an annual transparency report summarising enforcement actions — anonymised where appropriate, named where the public interest requires it.
§ 08 Report misuse
If you see LumynaX being used in a way that breaches this policy, tell us by email. We investigate every report and reply within 5 business days.
Web: abteex.com
We do not retaliate against good-faith reporters. Retaliation against a reporter is itself a breach of this policy.