Acceptable Use Policy
Last updated: 18 April 2026 · Effective: 18 April 2026
This Acceptable Use Policy (“AUP”) is part of our
Terms of Service and governs your use of Burooj. It exists because
(a) we care about not letting our platform be weaponised, and (b) our upstream AI providers
require us to enforce their usage policies against our users. If you would not want to
explain what you are about to generate to a journalist, a judge, or a child-safety
investigator, do not generate it on Burooj.
You must also comply with the usage policies of our upstream AI providers.
1. You Must Not Use Burooj To…
1.1 Harm Children
Generate, solicit, store, or disseminate child sexual abuse material
(“CSAM”), child sexual exploitation content, sexual content depicting minors
(whether real, fictional, or AI-generated), or any content that sexualises a minor.
Zero tolerance. Accounts are terminated immediately; fees are forfeited;
incidents are reported to the U.S. National Center for Missing & Exploited Children
(NCMEC) and/or equivalent authorities in your jurisdiction, and to our upstream AI
providers as contractually required.
1.2 Build Weapons or Critical-Infrastructure Attacks
- Design, synthesise, or acquire chemical, biological, radiological, nuclear, or high-yield explosive (CBRN) weapons; engineer pathogens; conduct gain-of-function research; or develop delivery systems.
- Create or optimise tools for attacking critical infrastructure: power, water, financial systems, hospitals, transportation, telecom.
- Produce autonomous weapons systems or targeting software.
1.3 Build Cyberweapons and Malicious Software
- Generate malware, ransomware, spyware, stalkerware, credential stealers, keyloggers, rootkits, or bootkits.
- Generate exploit code or proofs of concept targeting specific live systems you do not have explicit authorisation to test.
- Generate tooling that performs, automates, or coordinates denial-of-service attacks.
- Generate phishing sites, fake sign-in pages, brand-impersonation pages, or romance/investment-scam scaffolding.
Ordinary defensive security work (hardening guides, CTF challenges on boxes you own,
educational demos of well-known vulnerabilities in an isolated lab) is allowed. Building
credible offensive tooling against real targets is not.
1.4 Run Deceptive, Fraudulent, or Illegal Schemes
- Deceptive commerce, pyramid schemes, pump-and-dump schemes, illegal financial products.
- Impersonation of real individuals, brands, government bodies, or public officials.
- Academic fraud, plagiarism-for-hire, or ghost-writing services that violate the relevant educational institution's policies.
- Any activity unlawful in your jurisdiction, the jurisdiction from which you operate the resulting product, or the jurisdiction of end users you target.
1.5 Manipulate Elections and Public Discourse
- Mass-generate political messaging intended to deceive voters about candidates, voting procedures, election results, or polling locations.
- Build disinformation factories, inauthentic-amplification bot networks, or coordinated inauthentic campaigns.
- Create deepfakes of public officials, candidates, or ordinary people without their consent.
1.6 Violate Privacy and Build Surveillance Tooling
- Aggregate or infer personal information about specific individuals without a lawful basis and without their knowledge.
- Build untargeted facial-recognition systems, biometric identification in public space, or stalkerware.
- Track individuals' locations or intercept their communications without consent.
- Dox individuals, target them with hate speech, or build coordinated-harassment infrastructure.
1.7 Produce Non-Consensual or Exploitative Content
- Non-consensual intimate imagery (including AI-generated) of real people.
- Content that sexualises, threatens, or dehumanises a protected class.
- Revenge-porn platforms, “sextortion” infrastructure.
1.8 Make High-Risk Automated Decisions Without Human Oversight
You may not use Burooj Output to build or operate systems that make automated decisions
without appropriately qualified human review in any of the following domains:
- Medical diagnosis, treatment, or dosing.
- Legal advice; adjudication of rights.
- Financial, tax, credit, insurance underwriting, or housing decisions.
- Employment decisions (hiring, firing, pay).
- Law enforcement, criminal sentencing, bail, or parole.
- Safety-critical control systems (aviation, nuclear, autonomous vehicles, medical devices).
These are precisely the domains in which GDPR Article 22 restricts solely automated
decision-making and the EU AI Act classifies systems as high-risk. If your product
operates in one of them, you must keep a qualified human in the loop and tell your users.
1.9 Build a Competing Foundation Model
Do not use Burooj Output, logs, prompts, metrics, or any derivative thereof to train,
fine-tune, evaluate, or develop a machine-learning model that competes with the Service
or with any of our upstream AI providers. This reflects contractual obligations those
providers impose on us.
1.10 Infringe Intellectual Property or Violate Third-Party Rights
- Instruct Burooj to reproduce copyrighted code, assets, trademarks, or trade secrets you have no right to use.
- Circumvent digital-rights-management or technical-protection measures.
- Generate counterfeit goods, piracy storefronts, or unauthorised streaming platforms.
1.11 Abuse the Service
- Circumvent usage limits, rate limits, paywalls, or access controls.
- Operate multiple free-tier accounts; share accounts to evade limits; use automated sign-ups.
- Reverse-engineer, decompile, benchmark against, or extract models, prompts, or protected infrastructure.
- Probe the Service for vulnerabilities except under our Responsible Disclosure process.
2. Reporting Abuse
If you see misuse of the Service, tell us: abuse@burooj.ai.
For security vulnerabilities, email security@burooj.ai.
We acknowledge reports within two business days and do not retaliate against good-faith
reporters.
3. What Happens If You Violate This Policy
- Warning. For minor, first-time, good-faith violations we usually start with a warning and a request to remove the violating project.
- Build forfeiture. In-progress builds may be stopped and Output withheld. Fees already charged for the stopped build are not refunded where the violation is clear.
- Suspension / termination. Repeated or serious violations result in suspension or termination of your account. Unused wallet credit is refunded unless the violation involves fraud.
- Reporting. For violations of categories 1.1 (child safety), 1.2 (weapons/critical infrastructure), or clear criminal activity, we report to the relevant authorities and to our upstream AI providers as contractually required.
- No refund. Fees charged for completed, legitimate builds before a violation are not refunded on account closure for that violation.
4. Changes to This AUP
We may update this AUP when upstream provider policies change, when regulators issue new
guidance, or when we identify new categories of abuse. Material changes are announced
under Section 7 of the Terms of Service. The “Last updated” date at the top
reflects the most recent revision.
5. Contact