
Bielik Guard: Efficient Polish Language Safety Classifiers for LLM Content Moderation

Krzysztof Wróbel, Jan Maria Kowalski, Jerzy Surma, Igor Ciuciura, Maciej Szymański
Published: February 8, 2026
Authors: 5
Word count: 6,380
Code: included

Polish safety classifier using community annotations for culturally-aware LLM content moderation.

Abstract

As Large Language Models (LLMs) become increasingly deployed in Polish language applications, the need for efficient and accurate content safety classifiers has become paramount. We present Bielik Guard, a family of compact Polish language safety classifiers comprising two model variants: a 0.1B parameter model based on MMLW-RoBERTa-base and a 0.5B parameter model based on PKOBP/polish-roberta-8k. Fine-tuned on a community-annotated dataset of 6,885 Polish texts, these models classify content across five safety categories: Hate/Aggression, Vulgarities, Sexual Content, Crime, and Self-Harm. Our evaluation demonstrates that both models achieve strong performance on multiple benchmarks. The 0.5B variant offers the best overall discrimination capability with F1 scores of 0.791 (micro) and 0.785 (macro) on the test set, while the 0.1B variant demonstrates exceptional efficiency. Notably, Bielik Guard 0.1B v1.1 achieves superior precision (77.65%) and very low false positive rate (0.63%) on real user prompts, outperforming HerBERT-PL-Guard (31.55% precision, 4.70% FPR) despite identical model size. The models are publicly available and designed to provide appropriate responses rather than simple content blocking, particularly for sensitive categories like self-harm.

Key Takeaways

  1. Bielik Guard uses community annotations to build culturally-aware Polish language safety classifiers that outperform multilingual models.

  2. The model treats annotator disagreement as informative signal rather than noise, preserving ambiguity through soft labels during training.

  3. Over 60,000 annotations from 1,500 Polish volunteers created a dataset capturing language-specific nuances in hate speech and harmful content.
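The soft-label idea in takeaway 2 can be illustrated concretely. The paper does not publish its training code, so the following is a minimal pure-Python sketch under assumed details: per-category annotator vote fractions become soft targets in [0, 1], and a binary cross-entropy loss is computed against those fractional targets instead of hard 0/1 labels. Function names and the vote counts are illustrative, not from the paper.

```python
import math

def soft_targets(votes, n_annotators):
    """Turn raw annotator votes per category into soft labels in [0, 1]."""
    return {cat: v / n_annotators for cat, v in votes.items()}

def soft_bce(p, t):
    """Binary cross-entropy between predicted probability p and soft target t."""
    eps = 1e-9
    return -(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps))

# Hypothetical example: 10 annotators; 4 flagged "Hate/Aggression",
# all 10 flagged "Vulgarities", none flagged the remaining categories.
votes = {"hate_aggression": 4, "vulgarities": 10, "sexual": 0,
         "crime": 0, "self_harm": 0}
targets = soft_targets(votes, n_annotators=10)

# A disagreement-heavy category keeps a fractional target (here 0.4) instead
# of being forced to 0 or 1, so the model is rewarded for predicting the
# ambiguity rather than an overconfident verdict.
loss = sum(soft_bce(0.5, t) for t in targets.values())
```

Because BCE with a soft target is minimized when the predicted probability equals the vote fraction, contested texts pull the classifier toward calibrated mid-range scores rather than hard decisions.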

Limitations

  • The dataset excludes disinformation and jailbreaking categories, which require factual knowledge and broader context beyond single text snippets.

  • Safety classification remains subjective; model performance depends on threshold selection balancing harmful content detection against false positives.
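The threshold trade-off named in the second limitation is easy to make concrete: raising the decision threshold on a classifier's harm score raises precision and lowers the false positive rate, at the cost of recall. This is a generic sketch on toy data, not the paper's evaluation code; the scores and labels below are invented for illustration.

```python
def precision_and_fpr(scores, labels, threshold):
    """Precision and false-positive rate when flagging scores >= threshold.

    labels: 1 = harmful, 0 = safe (ground truth).
    """
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    negatives = sum(1 for y in labels if y == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    fpr = fp / negatives if negatives else 0.0
    return precision, fpr

# Toy harm scores for six texts (invented numbers).
scores = [0.95, 0.80, 0.60, 0.40, 0.20, 0.10]
labels = [1,    1,    0,    1,    0,    0]

loose = precision_and_fpr(scores, labels, threshold=0.3)   # flags 4 texts
strict = precision_and_fpr(scores, labels, threshold=0.7)  # flags 2 texts
```

At the loose threshold one safe text is flagged (precision 0.75, FPR 1/3); at the strict threshold no safe text is flagged, but one harmful text slips through. Deployments like the paper's, which aim for a very low FPR on real user prompts, sit toward the strict end of this curve.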

Keywords

Large Language Models, content safety classifiers, MMLW-RoBERTa-base, PKOBP/polish-roberta-8k, fine-tuned models, F1 scores, precision, false positive rate
