option
Questions
ayuda
daypo
search.php

AI domain 2

COMMENTS STATISTICS RECORDS
TAKE THE TEST
Title of test:
AI domain 2

Description:
AI security

Creation Date: 2025/10/22

Category: Others

Number of questions: 30

Rating:(0)
Share the Test:
Nuevo ComentarioNuevo Comentario
New Comment
NO RECORDS
Content:

Which framework specifically lists the top vulnerabilities for large language model applications?. MITRE ATLAS. OWASP LLM Top 10. STRIDE. MIT AI Risk Repository.

What is the main focus of MITRE ATLAS?. Hardware vulnerabilities in GPUs. Adversarial tactics and techniques targeting AI systems. AI compliance with GDPR. Data labeling accuracy.

Which organization is extending the CVE system to include AI model vulnerabilities?. OWASP Foundation. IEEE AI Security Board. CVE AI Working Group. ISO/IEC 27017 Committee.

The STRIDE framework element Tampering would most likely apply to which AI-specific threat?. Model overfitting. Data poisoning. Adversarial examples. Excessive agency.

What does Excessive Agency from OWASP LLM Top 10 warn about?. The model returning hallucinated answers. The model autonomously performing unauthorized actions. Users manipulating prompts for data exfiltration. The AI revealing its system prompt.

Which control type ensures models behave predictably through rules and templates?. Model guardrails. API tokens. Redaction filters. Encryption layers.

What is a prompt firewall primarily designed to stop?. Network eavesdropping. Model denial-of-service. Prompt injection and jailbreaks. Overfitting in model training.

Why are rate and token limits important for AI security?. They reduce model hallucination. They improve output accuracy. They prevent Model Denial of Service (MDoS). They restrict data labeling.

What is a prompt template’s security function?. Encrypting user data. Embedding guardrail instructions and limitations. Anonymizing responses. Evaluating model accuracy.

Red teaming in AI security primarily focuses on: Model performance optimization. Attempting to bypass guardrails and safety filters. Improving computational efficiency. Auditing hardware integrity.

The least privilege principle in AI systems means: Granting full model access to all users. Only authorized entities have the minimal required permissions. Allowing users to modify model weighs. Sharing API keys among teams.

Why is mutual TLS (mTLS) recommended for model APIs?. To reduce network latency. To authenticate both client and server securely. To compress traffic. To limit inference costs.

Which of the following best protects against unauthorized data access?. Unauthenticated APIs. Role-Based Access Control (RBAC. Public dataset sharing. Open network segmentation.

Which access control prevents lateral movement across the enterprise network?. Tokenization. Rate limiting. Network segmentation. Model distillation.

Which control protects data while being processed?. Encryption at rest. Trusted Execution Environment (TEE). TLS 1.3. Tokenization.

What’s the main difference between anonymization and masking?. Masking is irreversible; anonymization is reversible. Anonymization permanently removes identifiers; masking replaces them with fake but realistic data. Both are used only for production data. Masking applies only to metadata.

What is the purpose of data classification labels?. To optimize neural network weights. To assign access and protection levels. To compress datasets. To enable faster model evaluation.

Limiting the collection and storage of only necessary data reflects which privacy principle?. Data minimization. Model inversion. Prompt sanitization. Cost optimization.

What does prompt and response monitoring help detect?. Overfitting. Prompt injection or policy violations. Hardware degradation. Data duplication.

Why must AI logs be sanitized before storage?. To remove duplicates. To prevent data leakage of PII or proprietary information. To improve search speed. To comply with schema validation.

Which activity might indicate a model theft attempt?. Frequent GPU usage alerts. Sudden spike in inference API queries. Declining model accuracy. Missing system patches.

Which compensating control helps reduce leakage after a Membership Inference attack?. Increased token limits. Noise injection during inference. Model pruning. Gradient boosting.

Which attack involves manipulating model inputs to override safety instructions?. Model inversion. Prompt injection. Model theft. Data poisoning.

Which type of poisoning occurs when malicious data is added during training?. Inference poisoning. Training data poisoning. Output corruption. Supply-chain compromise.

In a membership inference attack, the attacker aims to: Inject fake data into the model. Determine if a specific record was used in training. Steal API keys. Cause output hallucinations.

Repeated querying of a model to replicate its behavior is known as: Model inversion. Model theft. Supply-chain poisoning. Adversarial perturbation.

Subtle pixel changes that cause a vision model to misclassify an image represent what type of attack?. Adversarial example. Jailbreak. Model poisoning. Data minimization.

An LLM plug-in executes external code due to poor permission handling. Which issue is present?. Overfitting. Insecure plugin design. Model drift. Token overflow.

A malicious update to an open-source ML library used in training would be an example of: Supply chain attack. Prompt injection. Model theft. Data minimization.

Which describes AI hallucination?. Model forgetting training data. Model generating false but plausible information. Model experiencing denial of service. Model leaking training data.

Report abuse