AI (Artificial Intelligence)
Computer systems that can perform tasks that normally need human intelligence, like understanding language, recognising images, or making suggestions.
Simple, practical rules for using Artificial Intelligence safely and responsibly – especially for non‑experts.
No forums. No comments. Just clear information on what to do, what not to do, and why.
Here, “AI” mainly means tools that can generate text, images, code, or audio from your requests (prompts).
They are not magic, and they are not people. They are programs trained on huge amounts of data to guess “the next best word (or pixel)”.
Different AI tools are good at different things. Some are like chatty assistants, some are specialised (for translation, coding, images, audio), and some are hidden “inside” apps you already use.
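The idea of “guessing the next best word” can be sketched with a toy example. The snippet below is a deliberately tiny illustration (counting which word tends to follow which in a short text); real AI models are vastly larger and use neural networks, but the core principle – predicting likely continuations from patterns in data – is similar.

```python
from collections import Counter, defaultdict

# Toy illustration of "guess the next best word": count which word
# follows which in a tiny training text, then pick the most common.
# Real models are far larger and more sophisticated, but the core idea
# -- predicting likely continuations from data -- is the same.
text = "the cat sat on the mat and the cat slept"
words = text.split()

following = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

def guess_next(word):
    # Return the word most often seen after `word` in the training text.
    return following[word].most_common(1)[0][0]

print(guess_next("the"))  # "cat" follows "the" twice, "mat" only once
```

This also shows why such systems can be confidently wrong: the guess is only as good as the patterns in the data they learned from.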
Too vague: “Do my homework.”
Better: “Explain photosynthesis like I am 12, using a short paragraph and a simple analogy, then give me 3 quiz questions I can answer.”
Better: “Draft a polite email to a colleague to ask for an update on project X. I want it to sound friendly but professional. Then give me 2 shorter alternative versions.”
Many AI tools block clearly harmful or dishonest requests. Even if they don’t, you are still responsible for the consequences.
For medical, legal, or financial decisions, treat AI as a pre‑reading tool, not as the final authority.
The more important the decision (health, money, job, legal matters), the more you should double-check the AI’s answer against reliable sources – and, where it really matters, with a qualified professional.
Model
The “brain” of the AI: the mathematical system that has learned from lots of data and now produces answers or content.
Prompt
What you type or say to the AI to tell it what you want. A good prompt is specific, clear, and includes context.
Hallucination
When the AI confidently gives an answer that sounds correct but is actually wrong or invented (for example, a fake quote or source).
Bias
Systematic unfairness in the AI’s outputs. This can happen when the data the model learned from reflects real‑world stereotypes or imbalances.
Chat history (context)
The previous messages in your conversation. Many tools use this to keep track of what you are talking about, but they can forget older parts.
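The “forgetting older parts” behaviour can be sketched in a few lines. The message format and the limit below are purely illustrative (they do not correspond to any specific product or API); the point is that many tools resend recent messages with each request and drop the oldest ones when the conversation grows too long.

```python
# Hypothetical sketch: a chat tool that only keeps the last few messages.
# The message format and limit are illustrative, not any real API.
MAX_MESSAGES = 4  # stand-in for a real tool's much larger context window

def trim_history(history, max_messages=MAX_MESSAGES):
    # Keep only the most recent messages; older ones are "forgotten".
    return history[-max_messages:]

history = [
    {"role": "user", "text": "Hi, my name is Dana."},
    {"role": "assistant", "text": "Hello Dana!"},
    {"role": "user", "text": "Explain photosynthesis."},
    {"role": "assistant", "text": "Plants turn light into energy..."},
    {"role": "user", "text": "What's my name?"},
]

recent = trim_history(history)
# The first message (the one containing the name) was dropped, so a tool
# with this small a window can no longer answer "What's my name?".
```

If a long conversation seems to have “forgotten” something important, it often helps to simply restate it in your next message.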
Does the AI really understand me?
No. It generates answers based on patterns in the data it was trained on. It does not “understand” like a human and can be wrong or outdated.
Is what I type private?
Many services store your prompts to improve the system. Avoid sending information you wouldn’t be comfortable sharing with a stranger.
Can I use AI for school?
Often yes – as a tutor: to explain, summarise, or quiz you. Using it to hand in work as if it were yours may break school rules.
Can I use AI at work?
Many companies allow AI for drafts and ideas, but forbid sharing internal or client data. Always check your workplace policy.
Does AI work in my language or dialect?
Many tools understand several languages and some dialects. If a tool struggles, try mixing in standard language or ask the AI to “translate this dialect into standard Italian/English first”.
How do I know whether to trust an answer?
Trust your common sense. You can ask the AI to re‑check, to show sources, or to give alternative views – and you can always ignore an answer that does not convince you.