AI AWARENESS

Title of test: AI AWARENESS
Description: TEST AND SIMULATION



1. When using video streaming platforms or social networks (like YouTube or TikTok), AI continuously suggests highly personalized new content. How does AI collect the necessary data for this?
- Through a mandatory questionnaire you fill out upon registration.
- By secretly reading your personal emails.
- By passively analyzing your behavior in real time: how long you pause on a video, what you click, at what time you connect, and which content you skip.
- By sending a human operator to monitor your browsing.

2. You've downloaded a simple 'Flashlight' app that asks for permission to access your 'Location' (GPS) and 'Contacts' on first launch. How should you act to prevent AI systems from collecting unnecessary data about you?
- Accept everything, otherwise the phone's light won't turn on.
- Deny these permissions in the phone's settings, as they are not functionally necessary for the app's purpose and only serve to feed profiling databases.
- Enter fake contacts in the address book before accepting.
- Immediately uninstall the phone's operating system.

3. Voice assistants (like Alexa, Siri, or Google Assistant) use Artificial Intelligence algorithms (NLP) to understand human language. What physically happens when you ask them a question?
- The voice is analyzed exclusively within the physical microphone without ever leaving your home.
- The audio is transmitted, processed, and often stored on the cloud servers of large tech companies to respond and further train their voice AI models.
- Your words are translated into Morse code and sent to the nearest radio station.
- No data is saved; the AI forgets your voice the moment you stop speaking.

4. You are using a road navigation app (e.g., Google Maps). Even when you are not actively using it, AI can use your 'Location History' to understand where you live, work, and which stores you frequent. What is the basic setting to prevent this collection?
- Put the phone in 'Airplane Mode' only when driving.
- Go into the app's account or privacy settings and disable the 'Location History' option or revoke background GPS access.
- Drive by changing roads every day to confuse the algorithm.
- Delete the app and only use paper maps.

5. When you type messages on your smartphone, the 'smart' keyboard suggests the next words with extreme accuracy. How does this mechanism work?
- The keyboard contains a static dictionary preloaded at the factory that never changes.
- There is a real person typing words in real time to help you.
- A built-in Machine Learning (AI) model constantly analyzes your writing style, frequent phrases, and jargon to predict what you will type.
- The phone guesses words randomly, hoping to get them right.

6. You apply a fun facial filter on a photo app. Often, the terms of service (which no one reads) state that your facial data is used to 'improve services.' What does this mean in the context of Artificial Intelligence?
- That the company will use your face to print billboards in your city.
- That the geometric and biometric data of your face are stored to train the company's neural networks to better recognize human faces or expressions in the future.
- That the photo is sent to the police to check if you are a criminal.
- That the app will improve the camera resolution of your phone.

7. You notice that while browsing the web, advertisements seem to 'read your mind,' showing you products you just talked about or searched for. What is the conscious digital reaction to this phenomenon?
- Convince yourself that devices read thoughts through brain waves.
- Understand that dozens of apps and websites share your browsing data (through trackers), and AI algorithms connect these dots to make extremely accurate statistical predictions about your desires.
- Accept that it's magic and do nothing.
- Call the internet customer service to complain.

8. You use 'Incognito' browsing on your browser to search for information about a disease, thinking that Google or Facebook's AI cannot link it to you. However, you are simultaneously logged into your Gmail account. What happens to your privacy?
- You are completely safe because the tab is black and has the incognito icon.
- Having logged into your personal account, you have voluntarily provided your identity: the search engine's AI will still associate those searches with your real profile.
- The Gmail account will be blocked for security reasons.
- The computer will delete the disease from all online encyclopedias.

9. In what way do platforms use tools like 'CAPTCHAs' (e.g., 'Click on all images with traffic lights') to train their Artificial Intelligence systems?
- They use your clicks to verify the speed of your mouse.
- They only serve to waste your time to slow down servers.
- While proving you are not a robot, you are freely labeling thousands of real images: this data is used to train self-driving car AIs to recognize obstacles.
- CAPTCHAs have no utility for AI; they are only for password security.

10. In general, what is the first and most important 'control panel' that a citizen must learn to use on their smartphone to prevent apps from collecting environmental data for their AIs?
- The 'Settings -> Privacy' (or 'Permissions Management') menu, where you can revoke access to Camera, Microphone, and Location for apps that don't genuinely need them.
- The calculator application.
- The web browser's home page.
- The volume mute button.

11. On Apple devices, and similarly on Android via Privacy Sandbox, when you install a new app, a pop-up often asks: 'Do you want to allow this app to track your activity across apps and websites of other companies?' What technical identifier is blocked if you choose 'Ask App Not to Track'?
- Your home address.
- The MAC address of your network card.
- The IDFA (Identifier for Advertisers) or AAID, a unique code that allows external AI algorithms to link your actions (what you buy, what you watch) across seemingly unrelated applications.
- Your electronic identity card number.
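The passive behavioral analysis described in the video-streaming question above can be sketched as a toy scoring loop. Everything here (the field names, the weights, the sample events) is a hypothetical illustration, not any platform's real recommendation algorithm:

```python
# Toy sketch of passive behavioral profiling: interests are inferred from
# what a viewer *did* (watch time, skips), never from anything they declared.
from collections import defaultdict

def interest_scores(events):
    """Aggregate implicit signals per topic (illustrative weights)."""
    scores = defaultdict(float)
    for e in events:
        # Watching most of a video is a strong positive signal...
        signal = e["watched_seconds"] / e["duration_seconds"]
        # ...while an explicit skip is treated as a negative one.
        if e["skipped"]:
            signal -= 0.5
        scores[e["topic"]] += signal
    return dict(scores)

# Hypothetical event log captured passively during one session.
events = [
    {"topic": "cooking", "watched_seconds": 55, "duration_seconds": 60, "skipped": False},
    {"topic": "politics", "watched_seconds": 2, "duration_seconds": 120, "skipped": True},
    {"topic": "cooking", "watched_seconds": 40, "duration_seconds": 45, "skipped": False},
]

ranked = sorted(interest_scores(events).items(), key=lambda kv: -kv[1])
print(ranked[0][0])  # prints 'cooking': the top inferred interest
```

The point of the sketch is that no questionnaire is needed: a handful of dwell-time and skip signals already rank the user's interests.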
12. If you disable 'Ad Personalization' in your Google or Facebook account settings, what are you technically preventing the company's Artificial Intelligence system from doing?
- You are preventing the display of any advertising banners on the Internet.
- You are preventing the AI from using your behavioral profile (tastes, history, age) to decide which ad to show you. You will still see the same amount of advertising, but it will be generic or based only on the page you are reading.
- You are deleting all your data from the company's servers.
- You are making your account a paid one.

13. Fitness apps or wearables (smartwatches) collect continuous biometric data (heart rate, sleep). You are often asked to connect these apps to other platforms. What is the risk related to Artificial Intelligence if you grant access without reading the terms?
- That the watch battery will drain faster.
- That your raw health data may be aggregated to train third-party predictive AI models (e.g., insurance) capable of making inferences (deductions) about your health status or life expectancy.
- That the fitness app will start making emergency calls on its own.
- That the fitness app will block your credit card if you don't exercise enough.

14. Accessing the settings of a voice assistant or operating system, you often find an option like 'Help us improve the product by sharing audio samples.' What does leaving this option active entail?
- The device volume will increase automatically.
- You allow the company not only to process your requests via AI but also to send your private voice recordings to human reviewers or third-party machine learning systems to refine voice recognition algorithms.
- You download new fun voices for the assistant.
- You get a faster internet connection.

15. Large technology companies use 'Data Brokers' to train their AIs. Who are they and what do they do?
- They are hackers who steal data from citizens' computers.
- They are legal companies that collect, aggregate, and cross-reference billions of seemingly harmless data points (from loyalty cards, public records, free apps) to create incredibly detailed user profiles, which they then sell to advertising algorithms.
- They are government programs for national security.
- They are the technicians who repair broken servers.

16. The 'Google Photos' app (or similar) has a powerful feature that automatically groups all photos of your child from birth, recognizing their changing face over time. Which privacy setting manages this AI processing?
- The screen brightness of the phone.
- The 'Face Grouping' setting, which allows a facial recognition algorithm (computer vision) to map and biometrically scan each person in your photos.
- The option to activate the camera flash.
- Sharing the GPS location of photos.

17. You use the 'Login with Facebook' or 'Login with Google' button to quickly register for a new language learning app. What is the cost in terms of algorithmic profiling?
- The language app will delete your friends from social media.
- A permanent data 'bridge' is created. The social network's AI will know exactly how often and how successfully you study, and the language app will receive your demographic data, enriching each other's profiling models.
- No cost; it's a free service offered to save you time.
- You have to pay a monthly fee to Facebook.

18. On many social networks, there is a hidden function in a post's menu called 'Why am I seeing this ad?' What is its purpose for digital awareness?
- To report the ad to the police.
- It is a transparency tool that reveals which specific parameters (e.g., 'You visited site X', 'You are interested in gardening', 'Age 30-40') the AI algorithm used to target you.
- To translate the ad into another language.
- To download the ad's video to your phone.

19. The phenomenon of 'Algorithmic Inference' allows AI to discover sensitive data that you have never revealed. How does it achieve this?
- By sending human spies to follow you.
- By reading paper diaries locked in a drawer.
- By analyzing mathematical correlations on innocuous data. For example, by analyzing the music you listen to, when you wake up, and the pages you 'like,' the AI deduces with high probability your political, sexual, or religious orientation without you ever telling it.
- By asking you direct questions through telephone surveys.

20. If you feel the need to 'reset' a social network's recommendation algorithm because the AI is trapping you in a 'Filter Bubble' showing only toxic or extreme content, what is the most radical but effective action offered by some platforms (e.g., TikTok)?
- Change the Wi-Fi password.
- Buy a new phone.
- Use the 'Refresh your feeds' or 'Reset suggestions' function in privacy settings, which clears the AI's behavioral memory associated with your profile, forcing it to learn from scratch.
- Send an email to the app creator.

21. If you use a web-based Generative AI model (like OpenAI's ChatGPT or Google's Gemini) to correct work texts or generate ideas, what is the default risk regarding the data you enter into the prompts?
- That the text will be immediately published on your Facebook profile.
- By default, conversations are saved and can be used by companies to train and refine future language models (LLMs). To avoid this, you must explicitly disable 'Chat History' or disable data sharing in settings.
- No risk; the AI is offline and deletes everything when the computer shuts down.
- That the AI will take control of your mouse and modify your files.

22. Article 22 of the GDPR (European Privacy Regulation) establishes a fundamental right of citizens concerning Artificial Intelligence. Which one?
- The right to have a computer with pre-installed AI at the state's expense.
- The right not to be subjected to a decision based solely on automated processing (including profiling) which produces legal or similarly significant effects on the person (e.g., rejection of a mortgage, failing school), with the right to human intervention.
- The right to use AI to do homework without being punished.
- The right to sell one's data at auction on the internet.

23. What are 'Shadow Profiles' created by the Artificial Intelligence systems of large platforms?
- Profiles of deceased people.
- Fake accounts used by Russian trolls.
- Extremely detailed behavioral profiles that AI builds on individuals who are not even registered on the platform, inferring their information by cross-referencing data, address books, and photos uploaded by their friends who do have an account.
- Profiles that activate only at night (Dark Mode).

24. You are the manager of your school's website. You want to prevent technology companies from sending their bots (web crawlers) to scrape original texts, circulars, and teacher lessons to train their Generative Artificial Intelligences without permission. What is the technical setting to modify?
- Set a password for the school's Wi-Fi access.
- Disable search engine indexing.
- Insert specific instructions (e.g., a disallow rule for bots like GPTBot) in the website's robots.txt file, or add specific 'noai' meta-tags to pages, to deny access to training algorithms.
- Put a handwritten legal notice at the bottom of the web page.

25. In the context of data privacy and AI, 'Synthetic Data' is increasingly discussed. What is it used for?
- It's data created in a chemical laboratory.
- It is artificial data, generated by AI, that maintains the same statistical properties as real data (e.g., medical records), allowing researchers to train algorithms and study phenomena without exposing the sensitive data of real people.
- It is data that slows down AI execution to make it safer.
- It is data illegally extracted from credit cards.

26. When a company or school decides to use an artificial intelligence system to automatically screen resumes (CV screening) or evaluate student entrance tests, what social and data problem arises as 'Algorithmic Bias'?
- The AI refuses to work on Sundays.
- The AI cannot read PDF files.
- If the AI was trained on historical human data that contained biases (e.g., a preference for a certain gender or ethnicity), the algorithm will not be neutral but will learn to automate and amplify those same invisible discriminations on a large scale.
- The AI always assigns the maximum score to all candidates to avoid problems.

27. To track you even if you delete cookies or use incognito browsing, advertising network AI uses 'Browser Fingerprinting.' Which of these data sets is collected to create your 'digital fingerprint' without installing any files?
- Your bank details and ATM PIN.
- Photos from your gallery and WhatsApp messages.
- Seemingly innocuous technical parameters of your computer: screen resolution, installed fonts, exact version of the operating system, and graphics card. Cross-referencing these data points makes your browser unique among millions.
- Your blood type and heart rate.

28. Many modern corporate 'Data Lakes' collect raw, unstructured data. What is the privacy challenge when a company applies NLP (Natural Language Processing) or AI algorithms to these archives?
- The servers overheat and burn the data.
- Raw data, such as customer support chats or emails, often contains scattered personal data (PII). The AI could ingest and analyze it, and later inadvertently reveal it in responses to other users, severely violating confidentiality.
- The AI translates the data into an incomprehensible language.
- The data suddenly becomes paid.

29. When interfacing with online services, you often encounter 'Dark Patterns' aimed at data harvesting (data collection for AI). Which of these describes a dark pattern?
- A clear and simple notice asking if you want to subscribe to the newsletter.
- An interface where the button to 'Deny consent for data analysis' is light gray and hidden under three submenus, while the 'Accept and allow AI to personalize' button is huge, green, and flashing in the center of the screen.
- A web page that loads very slowly due to graphics.
- A screen filter that darkens colors at night to rest your eyes.

30. Emerging regulations, like the European AI Act, introduce strict rules on the use of remote biometric identification systems (like AI facial recognition in squares or stadiums) by public entities. What is the basic principle?
- They are prohibited a priori for mass real-time surveillance of public spaces, except for very strict pre-authorized exceptions such as searching for crime victims or preventing specific terrorist threats.
- They are mandatory on every street corner to replace human law enforcement officers.
- They are allowed only if the AI is developed by a European company.
- They can be used freely by anyone, even private citizens spying on neighbors.

31. One of the privacy frontiers for AI training on smartphones, used for example by the Gboard keyboard for learning typing styles, is 'Federated Learning.' How does this architecture protect the user's sensitive data?
- By completely erasing the phone's memory every night.
- The AI model is trained locally on the user's device. Raw data (typed phrases) is never sent to the company's central server; only the 'mathematical updates' of the model (neural weights) are sent, and these are aggregated with those of millions of other users to improve the global algorithm.
- By sending data to a government server instead of a private company's server.
- By encrypting the keys so that users themselves do not know what they are typing.

32. Apple and other giants are moving much AI processing (like Face ID facial recognition or semantic photo analysis) to 'Edge AI,' using dedicated processors (NPUs, Neural Processing Units) directly on devices. What is the primary benefit for data privacy?
- Edge AI makes the phone much cheaper to produce.
- Since inference (algorithm processing) happens entirely on local hardware (at the network's edge), highly sensitive biometric or visual data does not need to travel over the internet and is not stored in any corporate cloud.
- Edge AI consumes data from the cellular plan instead of Wi-Fi.
- It replaces the need for HTTPS encryption.

33. Large technology companies collect 'Telemetry Data' at the operating system level (e.g., Windows 11) in the background to diagnose errors and train optimization models. What is essential to know at the system administration level?
- That telemetry can never be disabled in any way.
- That telemetry only collects the time the PC is turned on.
- That there are different levels of telemetry (e.g., Basic vs. Full) configurable in Group Policy (GPO) or the Registry, and that the Full level can send fragments of RAM and documents in use to external servers if a crash occurs, feeding diagnostic machine learning pipelines.
- That telemetry is solely used to update the computer's time zone.

34. An advanced mathematical concept integrated into AI systems (e.g., by Apple or the US Census Bureau) to protect datasets is 'Differential Privacy.' What is its fundamental mechanism?
- It blocks the connection if it detects a minor using AI.
- It intentionally injects controlled 'statistical noise' into the data before aggregation. The AI can still learn accurate patterns about the general population (e.g., 'most searched song'), but it is mathematically impossible to 'reverse engineer' the result to identify the input or identity of any specific individual.
- It allows users to pay to differentiate their profile from others.
- It divides data into different folders based on the initial of the user's name.

35. A company uses OpenAI's APIs to integrate GPT-4 into its internal contract analysis software. What crucial privacy difference usually exists between data sent via Enterprise APIs and data typed into the free consumer web interface, ChatGPT?
- There is no difference: the terms of service are identical, and the AI uses everything for training.
- Data passed via APIs is public and visible on the Internet by law.
- According to standard security policies (like OpenAI's for its APIs), data sent via paid API endpoints is not used by the provider to train or improve base AI models, ensuring corporate confidentiality.
- APIs only work in English and do not understand Italian data.

36. If a company's AI system is trained on third-party databases purchased online, it risks the phenomenon of 'Data Poisoning' carried out by users or specific tools (like Nightshade or Glaze). What is it?
- Sending traditional viruses via email to the company's servers.
- It is a defense/attack technique where creators imperceptibly alter the pixels of their images. To the human eye the image is normal, but when a generative AI 'scrapes' that image for training, the model is mathematically corrupted (e.g., it learns that an image of a dog is actually a cat).
- Physically destroying server hard drives by immersing them in toxic liquids.
- Writing negative reviews on forums to ruin the AI company's reputation.

37. To build applications on corporate LLMs, for instance with the RAG (Retrieval-Augmented Generation) technique, company data is transformed and stored in 'Vector Databases.' What is the critical security risk if tracking and access controls (RBAC) are not configured on the RAG infrastructure?
- The database becomes too heavy, and the server shuts down.
- The risk of 'Semantic Data Leakage': if the LLM has access to the entire vector database without user-based permission restrictions, a low-level employee could ask a question in natural language, and the AI would generate accurate summaries of confidential CEO documents or salary data.
- The vector database transforms company texts into images that violate copyright.
- The AI automatically deletes documents older than 3 years without notifying anyone.
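The robots.txt mechanism mentioned above for blocking AI-training crawlers can look like the sketch below. GPTBot is named in the text; Google-Extended and CCBot are other commonly documented AI-training crawler tokens. Note that honoring robots.txt is voluntary on the crawler's side; it is a request, not an enforcement mechanism.

```
# robots.txt — minimal sketch denying known AI-training crawlers
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

# Regular search indexing stays allowed
User-agent: *
Allow: /
```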
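The 'statistical noise' mechanism behind Differential Privacy can be shown in a few lines. This is a minimal sketch under simplifying assumptions (a counting query, whose sensitivity is 1 because one person changes the count by at most 1; the Laplace sample drawn as a difference of two exponentials), not a production DP library:

```python
# Sketch of the Laplace mechanism for epsilon-differential privacy.
import random

def noisy_count(true_count, epsilon=0.5, rng=None):
    """Release a count with Laplace noise of scale 1/epsilon."""
    rng = rng or random.Random()
    # Laplace(0, 1/eps) = difference of two independent Exp(eps) samples.
    noise = rng.expovariate(epsilon) - rng.expovariate(epsilon)
    return true_count + noise

rng = random.Random(7)  # fixed seed so the sketch is reproducible
true = 1000             # e.g., how many users searched a given song
released = noisy_count(true, epsilon=0.5, rng=rng)
print(round(released, 1))
```

The released figure stays close to the true population statistic, but because every published number carries random noise, no attacker can work backwards from it to decide whether one specific individual was in the dataset.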
38. When providing customer chat logs (e.g., technical support transcripts) to a Machine Learning system for 'Fine-Tuning,' what architectural process must absolutely precede data input to avoid privacy disasters like 'LLM Memorization'?
- Translating all logs into binary code (0s and 1s).
- The redaction of PII (Personally Identifiable Information): the use of automated algorithms that find and permanently replace names, credit card numbers, addresses, and fiscal codes with fictitious strings or placeholders BEFORE the AI model sees the text.
- Increasing the font size of the logs to facilitate AI reading.
- Written approval of the text by a lawyer.

39. In the field of online advertising (AdTech) after the phase-out of third-party cookies, the TCF v2.2 framework requires service providers to structure user consent into machine-readable strings. What is the role of CMPs (Consent Management Platforms) with respect to advertising Artificial Intelligence systems (bidders)?
- CMPs directly pay citizens for their data in cryptocurrencies.
- CMPs automatically close browser tabs if they detect an external AI.
- They act as a 'cryptographic gate': they encode the user's privacy preferences and technically prevent (drop/block) external Artificial Intelligence algorithms from activating their tracking pixels or participating in real-time bidding (RTB) if the user has not explicitly consented to that specific profiling purpose.
- CMPs are AI bots that browse the internet for you to save you time.

40. What is the fundamental difference between opt-out for classic 'Direct Marketing' and opt-out for AI model training (Model Training Opt-out) at the level of data rights?
- Marketing opt-out simply blocks emails, while opt-out from AI training is technically much more complex for the company: if the user revokes consent after their data has been incorporated into the model's mathematical weights (neural network), removing that influence often requires 'Machine Unlearning,' a still unresolved technological challenge that may force the company to retrain the model from scratch.
- There is no technical difference; both only require deleting a line in an Excel file.
- Opt-out from training can only be done by sending a fax to the European Parliament.
- Direct marketing is legal, while training an AI is always and invariably considered a criminal offense worldwide.
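The PII-redaction step described above, applied before fine-tuning data ever reaches a model, can be sketched with two illustrative regex patterns. Real pipelines combine many more patterns plus NER models; the patterns, placeholders, and sample log below are simplified assumptions:

```python
# Sketch of regex-based PII redaction applied before fine-tuning.
import re

# Hypothetical, deliberately simple patterns: an email address and a
# 13-16 digit card-like number (with optional space/dash separators).
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[CARD]": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text):
    """Replace detected PII spans with placeholders before training."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

log = "Customer mario.rossi@example.com paid with card 4111 1111 1111 1111."
print(redact(log))
# -> Customer [EMAIL] paid with card [CARD].
```

The key design point is that the substitution is irreversible: the model is fine-tuned on the placeholder text, so even if it later memorizes and regurgitates a training sentence, no real identifier can leak.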




