AI Act, August 2, 2026: is your chatbot compliant? (the practical guide)

[Image: calendar with August 2, 2026 highlighted in green next to a chatbot widget showing the notice "You're interacting with an AI system", above a five-point practical checklist.]

August 2, 2026 is less than four months away. It's the date when the most important obligations of the AI Act — the European regulation on artificial intelligence — become enforceable, and specifically the ones that concern anyone with a chatbot on their website.

If you search for what you need to do, you'll find articles full of acronyms: "Article 50", "Annex III", "GPAI", "deployer" vs. "provider", "dual compliance" with Italian Law 132/2025. The result is that many small business owners close the tab thinking: I need a lawyer just to find out whether that chatbot on my site is going to get me in trouble.

The good news is that for the vast majority of websites — a dental practice, a restaurant, a hotel, a small e-commerce shop — there are only a few concrete things to do. You don't need a legal department. You need to understand which rules actually apply to your case, which don't, and what to change before August.

That's what this guide is for.

What changes on August 2, 2026

Parts of the AI Act are already in force. Since February 2, 2025, certain practices have been banned — mass biometric surveillance, subliminal manipulation, social scoring. Since August 2, 2025, obligations apply to general-purpose AI models (ChatGPT, Claude, Gemini and similar), but those are on the companies that build them, not the ones that use them.

August 2, 2026 is different. This is the deadline that affects you, if you have an AI assistant on your site. Two categories of obligations become enforceable: those for "high-risk" systems (specific sectors: health, education, hiring, credit, justice) and the transparency obligations set out in Article 50.

For a chatbot on a local business website, Article 50 is what matters. And it's a lot simpler than it sounds.

The four obligations that apply to a chatbot

Article 50 of the AI Act imposes four concrete obligations. Let's take them one by one, applied to the case of a chatbot on a business website.

1. The user must know they're talking to an AI

This is the baseline rule. When a visitor opens the chat on your site, they must be informed that on the other side there's not a person but an artificial intelligence system. No treatise needed: a clear welcome message is enough, something like "Hi, I'm the AI assistant for [Business Name]. How can I help you?".
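In practice, the disclosure can live directly in the widget configuration. Here's a minimal sketch in TypeScript, assuming a hypothetical widget config (the field names are illustrative, not any particular vendor's API):

```typescript
// Hypothetical widget configuration. Field names are illustrative,
// not tied to any specific chatbot vendor's API.
interface ChatWidgetConfig {
  businessName: string;
  welcomeMessage: string; // the first message shown when the chat opens
  widgetLabel: string;    // label on the collapsed chat button
}

function buildConfig(businessName: string): ChatWidgetConfig {
  return {
    businessName,
    // The AI disclosure is the first thing the visitor reads.
    welcomeMessage: `Hi, I'm the AI assistant for ${businessName}. How can I help you?`,
    widgetLabel: `${businessName} AI assistant`,
  };
}

console.log(buildConfig("Hotel Bellavista").welcomeMessage);
// "Hi, I'm the AI assistant for Hotel Bellavista. How can I help you?"
```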

The regulation includes an exception: the notice is not required where it's "obvious from the point of view of a reasonably well-informed, observant and circumspect person." In practice, if your chat is called "HotelBot" and speaks about your business in the third person, it's probably already obvious that it's not a human. But the explicit notice is the safer bet — and it costs nothing.

2. The information must be clear and timely

The notice must appear at the moment of the first interaction, not buried in a legal page or the footer. An opening message in the chat, a visible label on the widget, or a line in the welcome tooltip are all acceptable.

The regulation also requires the information to be accessible. If your site is visited by people with disabilities, the message must be readable by screen readers and have enough contrast. In practice, following the WCAG guidelines — by now a standard for well-built websites — covers this requirement too.
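If you build or customize the widget yourself, the accessible version of the notice comes down to a few lines of DOM code. A sketch, assuming a hand-rolled widget (the class name is made up):

```typescript
// Sketch of an accessible AI disclosure for a hand-rolled chat widget.
// The class name is made up for illustration.
function renderDisclosure(chatWindow: HTMLElement): void {
  const notice = document.createElement("p");
  notice.textContent = "You're interacting with an AI system.";
  // role="status" makes screen readers announce the notice when it appears.
  notice.setAttribute("role", "status");
  notice.className = "chat-ai-notice";
  // Prepend it so it's read before any message, at the first interaction.
  chatWindow.prepend(notice);
  // Contrast lives in CSS: aim for at least 4.5:1 (WCAG AA) between the
  // notice text and the widget background.
}
```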

3. AI-generated content must be identifiable

This is the point that causes the most confusion. Article 50 requires content generated or manipulated by AI to be marked and detectable as artificially generated or manipulated.

For a conversational chatbot that answers questions about services, hours, and products, the requirement is already met by the first obligation: if the user knows they're talking to an AI, they know the answers are AI-generated.

It gets more delicate if you use AI to produce articles, product descriptions, or marketing emails that you publish as if they were written by a person. In that case, if the text concerns topics of "public interest" — news, health, politics, public services — the regulation requires an explicit label. For commercial copy about a product, the obligation doesn't directly trigger, but the European Commission is publishing a Code of Practice (first draft December 2025, final version expected in June 2026) that will spell out the technical marking details.
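Until that Code of Practice is final, the machine-readable marking details aren't fixed. A conservative interim approach is a visible label on any AI-generated text that touches public-interest topics. Here's a sketch (the topic list and label wording are assumptions, not regulatory language):

```typescript
// Interim labeling sketch. Topic list and label wording are assumptions:
// the final Code of Practice (expected June 2026) will define the
// actual technical marking details.
const PUBLIC_INTEREST_TOPICS = ["news", "health", "politics", "public-services"];

function labelIfNeeded(html: string, topics: string[]): string {
  const publicInterest = topics.some((t) => PUBLIC_INTEREST_TOPICS.includes(t));
  if (!publicInterest) return html; // plain commercial copy: no explicit label required
  return html + `\n<p class="ai-disclosure">This text was generated with the help of AI.</p>`;
}
```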

4. If you use AI for deepfakes or emotion recognition, you have to say so

This is the fourth obligation, and it almost certainly doesn't apply to you. Article 50 requires additional disclosures if your system recognizes emotions or performs biometric categorization (for example, analyzing a user's mood via webcam), or generates or manipulates images, audio, or video depicting real people (deepfakes).

A text chatbot that answers questions does none of this. If your AI is limited to chatting in text or reading your business documents to answer, these two obligations don't apply.

What you do NOT have to do (the sigh of relief)

When you read the AI Act for the first time, the impression is that you need hundreds of pages of technical documentation. That's true for "high-risk" systems — those used in healthcare, justice, schools, hiring, credit. For a chatbot that answers questions about a business's services, it doesn't apply.

Specifically, if your AI assistant is limited to answering frequently asked questions about your products or services, providing hours, directions, contacts, capturing the email of someone who wants to be contacted, or handling simple bookings, you don't fall into the "high-risk" category. You don't have to do impact assessments, register in EU databases, or produce documented risk analysis. You only have to meet the four transparency obligations we just covered.

The Italian context: Law 132/2025

There's a second layer to account for if your business is based in Italy. Law no. 132 of September 23, 2025, in force since October 10, 2025, is the first Italian framework law on AI. It doesn't replace the AI Act: it complements it, filling the gaps the European regulation leaves to Member State discretion.

What does it add, in practice, for someone running a chatbot?

The competent authorities in Italy are AgID (the Agency for Digital Italy), which acts as notifying authority, and ACN (the National Cybersecurity Agency), which holds market surveillance and sanctioning powers. The Italian Data Protection Authority (Garante Privacy) remains competent on personal data matters, which are unavoidable in a chat: everything a user types is potentially personal data.

For SMEs, the law provides simplified procedures, discounts on certification fees, and priority access to regulatory sandboxes — controlled environments where new AI solutions can be tested before launch. These protections exist precisely to keep the compliance burden from crushing small businesses.

The term "dual compliance" — which you'll often see in articles — simply means you have to comply with both the European AI Act and Italian law. For a standard chatbot on a business website, the two frameworks converge on the same practical obligations.

Penalties: what you're really risking

When the AI Act comes up, headlines mention fines of up to 35 million euros. That figure is real, but it applies to the most serious violations — those related to prohibited practices, not to your chatbot's transparency.

The Article 99 sanctioning system has three tiers: up to €35 million or 7% of worldwide turnover for breaches of the absolute prohibitions, up to €15 million or 3% of turnover for breaches of other obligations (including Article 50 transparency), and up to €7.5 million or 1% for false or incomplete information provided to authorities.

For an SME, the regulation explicitly provides that the lower of the two caps applies; for a small business, the percentage of turnover is almost always far below the fixed figure. Beyond that, authorities must weigh sanctions proportionately, taking into account the size and economic viability of small businesses.

This doesn't mean that a business failing to disclose AI gets a free pass. It means the real risk for a café that puts up a chatbot without an AI disclosure isn't a €15 million fine — it's an order from the authorities to come into compliance within a set period, and in case of repeated failure, a sanction proportionate to the actual harm and the size of the business.

The real risk for small businesses isn't the punitive fine. It's the reputational cost of being flagged as non-compliant at a time when customers are starting to pay careful attention to how businesses handle AI.

The five-point checklist

Translating all of this into concrete actions to take before August 2, 2026 is simpler than it sounds. Here are the five things to check about your chatbot.

One. Make sure the chat's opening message explicitly states it's an AI assistant. A line like "Hi, I'm the AI virtual assistant for [name]" is enough. Avoid ambiguous phrasings that could lead someone to think they're talking to a human operator.

Two. Check where the conversation data is stored. The AI Act is an EU regulation, and it interacts directly with the GDPR. If your provider stores data in Europe and gives you a DPA (Data Processing Agreement) compliant with Article 28 of the GDPR, you're covered. If the data goes to the United States without guarantees, you have a problem that exists today — you won't have to wait until August to face it.
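One crude technical check worth automating, assuming your provider exposes region-specific endpoints (the hostname below is invented, and a hostname alone proves nothing: the DPA is the document that actually counts):

```typescript
// Guard against accidentally pointing the widget at a non-EU endpoint.
// The hostname is invented for illustration. A hostname prefix is not
// proof of data residency; verify storage location in the provider's DPA.
const CHAT_API_ENDPOINT = "https://eu.api.example-chat-provider.com/v1/chat";

function assertEuEndpoint(endpoint: string): void {
  const host = new URL(endpoint).hostname;
  if (!host.startsWith("eu.")) {
    throw new Error(`Chat endpoint ${host} is not the EU-region endpoint`);
  }
}

assertEuEndpoint(CHAT_API_ENDPOINT); // throws at startup if misconfigured
```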

Three. Read your chatbot's system prompt and ask your provider whether you can edit it. You have to be able to tell the AI what not to say: no diagnoses, no specific legal advice, no making up schedules, no promising appointments without verification. A chatbot that hallucinates on sensitive topics is a concrete risk, and Article 50 doesn't protect you if the problem is answer reliability.
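What those instructions look like depends on your provider, but the shape is roughly this. A sketch in TypeScript (the prompt wording is an example to adapt, and sendChat stands in for whatever API your provider actually exposes):

```typescript
// Example guardrail system prompt. The wording is illustrative; adapt
// it to your business. `sendChat` stands in for your provider's real API.
const SYSTEM_PROMPT = `
You are the AI assistant for [Business Name]. Answer questions about
services, opening hours, and contacts using only the documents provided.
Rules:
- No medical diagnoses, no specific legal advice.
- Never invent hours, prices, or availability: if you don't know, say so.
- Never confirm appointments; offer to collect contact details instead.
- If asked, always confirm that you are an AI assistant.
`.trim();

declare function sendChat(systemPrompt: string, userMessage: string): Promise<string>;

async function answer(userMessage: string): Promise<string> {
  return sendChat(SYSTEM_PROMPT, userMessage);
}
```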

Four. If you use AI to generate content published on the site (blog posts, product descriptions, newsletters) and that content deals with public-interest topics, plan a documented human editorial review. Article 50 exempts from the labeling obligation when there is "human editorial control" and "editorial responsibility" by a natural or legal person. Tracking who reviews what, even just with a simple sign-off at the bottom of each post, takes the problem off the table.
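The sign-off doesn't need dedicated tooling; even a typed record kept next to each post works. A sketch (the field names are invented):

```typescript
// Minimal editorial sign-off record. Field names are invented; store
// the entries wherever your posts live, e.g. in the post's frontmatter.
interface EditorialSignoff {
  postSlug: string;
  reviewedBy: string;  // the person taking editorial responsibility
  reviewedOn: string;  // ISO date of the review
  aiAssisted: boolean; // whether AI was used to draft the text
}

function recordSignoff(postSlug: string, reviewedBy: string): EditorialSignoff {
  return {
    postSlug,
    reviewedBy,
    reviewedOn: new Date().toISOString().slice(0, 10),
    aiAssisted: true,
  };
}

console.log(recordSignoff("ai-act-guide", "M. Rossi"));
```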

Five. Keep an eye on the upcoming milestones of the Code of Practice on marking AI-generated content. The first draft was published on December 17, 2025; a second version is expected by March 2026, and the final version in June, shortly before the deadline. If your provider takes the regulation seriously, they'll adjust their tools automatically. But it's useful to keep a folder (physical or digital) with your provider's compliance communications: in the rare event of an inspection, demonstrating diligence goes a long way.

It's not just compliance — it's trust

The AI Act is the world's first regulation to bring order to a domain that until recently was uncharted territory. It's imperfect, at times cumbersome, and criticized by those who think it will slow innovation. But its core principle — that people interacting with an AI should know it, and that content produced with an AI should be declared as such — is hard to argue with.

For a small Italian business, there's something that matters more than the fear of sanctions. The customer arriving on your site in 2026 is a customer who by now is familiar with ChatGPT, voice assistants, AI-generated answers. And they're increasingly attentive to who uses their image, their words, their data — and how.

A transparent chatbot, one that declares what it is, uses a European provider compliant with the GDPR, gives precise answers, and doesn't make up what it doesn't know, is a trust asset. It's not a compliance cost: it's a concrete reassurance, at the first message, that your business chose to do things the right way.

The August 2, 2026 deadline isn't a guillotine. It's a maturity date: the moment when how you use AI in your business stops being a private bet and becomes part of the trust contract with your customers.


Want to see how an AI Act-compliant assistant with data stored in Europe would work on your site? Paste your URL on iperchat.ai and try it in 30 seconds — free, no registration.