The toxicity model detects whether text contains toxic content such as threatening language, insults, obscenities, identity-based hate, or sexually explicit language.

Output is written to the browser console via `console.log` — open the browser's developer tools (right-click → Inspect) to view it.
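The sketch below shows one way to load the classifier and log predictions, assuming the TensorFlow.js toxicity model (`@tensorflow-models/toxicity`); the threshold value and the example sentence are illustrative, not part of this demo's source.

```js
// Assumes the TensorFlow.js toxicity package and a tfjs backend are installed.
import '@tensorflow/tfjs';
import * as toxicity from '@tensorflow-models/toxicity';

// Minimum prediction confidence: predictions below this are reported as null.
const threshold = 0.9;

toxicity.load(threshold).then(model => {
  // Example input sentence (illustrative only).
  const sentences = ['you suck'];

  model.classify(sentences).then(predictions => {
    // `predictions` is an array of objects, one per toxicity label
    // (insult, identity_attack, obscene, threat, etc.), each with a
    // `results` entry containing probabilities and a match flag.
    console.log(predictions);
  });
});
```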

Toxicity Classifier on GitHub