CompTIA XK0-005 Linux+ Exam Dumps and Practice Test Questions, Set 1 (Questions 1-20)

Question 1:

A Linux administrator needs to create an LVM snapshot of a logical volume before performing a risky application upgrade. Which command will correctly create a snapshot named appsnap of the logical volume /dev/vgdata/lvapp with a size of 2G?

A) lvextend -L 2G -n appsnap /dev/vgdata/lvapp
B) lvcreate -L 2G -s -n appsnap /dev/vgdata/lvapp
C) lvcreate -n appsnap -L 2G /dev/vgdata/lvapp
D) lvconvert --snapshot -n appsnap /dev/vgdata/lvapp

Answer:

B

Explanation:

The lvcreate command is the correct method for generating both standard logical volumes and snapshots, and the presence of the -s flag is the essential indicator that a snapshot is being created. Using -L specifies the size of the snapshot, while -n names the new logical volume. Option B includes these required components in the correct order and points to the original logical volume /dev/vgdata/lvapp, making it the correct snapshot creation command.

Option A is incorrect because lvextend merely expands an existing LV and is not used for creating snapshots. It would not produce a point-in-time copy regardless of the parameters provided.

Option C contains the correct naming and sizing flags but lacks the -s snapshot flag, which means it would generate a normal LV instead of a snapshot, failing to preserve a point-in-time state before the upgrade.

Option D uses lvconvert, a command suitable for transforming existing LVs or merging snapshots, but the syntax shown does not properly create a snapshot from scratch. It is not the appropriate tool or format for this initial snapshot creation task. Therefore, Option B is the only valid and complete command for generating the snapshot needed.
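
As a brief, hedged illustration using the names from the question, the snapshot can be created before the upgrade and then either merged back or discarded afterwards:

lvcreate -L 2G -s -n appsnap /dev/vgdata/lvapp
lvconvert --merge /dev/vgdata/appsnap   # roll the origin back to the snapshot if the upgrade fails
lvremove /dev/vgdata/appsnap            # or discard the snapshot once the upgrade is confirmed good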

Question 2:

A Linux administrator needs to identify which systemd service failed during the last boot sequence. Which command will provide a detailed list of failed units so the administrator can examine their status?

A) systemctl list-timers
B) systemctl list-units --failed
C) systemctl isolate rescue.target
D) journalctl --disk-usage

Answer:

B

Explanation:

The correct command for reviewing which systemd services have failed is systemctl list-units --failed, and this option is designed specifically for filtering and presenting only those systemd units that encountered issues. When a Linux system boots, systemd processes numerous units such as services, sockets, mounts, automounts, timers, and devices. Any of these may fail due to dependency issues, incorrect configuration, unmet prerequisites, missing files, or misbehaving applications. The administrator’s objective is to pinpoint these failed units immediately without having to filter through the full list of active, inactive, loaded, or unloaded units. The systemctl list-units --failed command accomplishes this task efficiently by printing a concise table listing the failed units, along with their load state, active state, substate, and description. This targeted output allows administrators to quickly identify what went wrong during boot. After identifying the failed units, the administrator can run commands such as systemctl status unitname or inspect logs via journalctl -u unitname to investigate the reasons for failure.

Option A, systemctl list-timers, does not relate to failed services. It is used for listing active and inactive timed units controlled by systemd timers. Those units are scheduled tasks similar to cron jobs, and this command shows when timers last ran and when they are scheduled to run next. While helpful for diagnosing scheduled task behavior, it does not provide any information about failing services or units. Therefore, it does not address the administrator’s need to review failure results from the previous boot.

Option C, systemctl isolate rescue.target, is used for switching the system into rescue mode, which is similar to single-user mode. While this can help diagnose problems, especially when critical services cause boot failures, it is not a diagnostic command. It will not list failed units nor produce any detailed output. Instead, it changes the system’s operating state, which could disrupt running services or sessions. The command is powerful but unrelated to listing previous failures.

Option D, journalctl --disk-usage, displays the amount of disk space consumed by the systemd journal. This is helpful for maintenance tasks, such as cleaning logs when space is limited, but it does not provide information about service failures. It lacks any capability to display unit states or summarize failures.

Thus, option B stands out as the correct command because it produces exactly the type of information needed, focuses solely on failed units, and helps the administrator triage and resolve underlying issues efficiently. The output is easy to interpret and ties directly into other systemd diagnostic commands, making it the best choice when reviewing failures after a system boot.
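
For example, a typical follow-up sequence might look like this (httpd.service is only a placeholder unit name):

systemctl list-units --failed
systemctl status httpd.service
journalctl -u httpd.service -b    # logs for that unit from the current boot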

Question 3:

A Linux engineer must configure a network interface with a static IP address on a system using NetworkManager. Which command allows the engineer to modify an existing connection profile called officeLAN and assign it a static IP of 192.168.10.25/24 along with the gateway 192.168.10.1?

A) nmcli con modify officeLAN ipv4.addresses 192.168.10.25/24 ipv4.gateway 192.168.10.1 ipv4.method manual
B) ifconfig officeLAN 192.168.10.25 netmask 255.255.255.0
C) ip addr add 192.168.10.25/24 dev officeLAN && ip route add default via 192.168.10.1
D) systemctl restart network.target

Answer:

A

Explanation:

When configuring a system that uses NetworkManager, the appropriate tool for changing persistent network profiles is nmcli. The command in option A correctly modifies the connection profile named officeLAN and sets its IPv4 addressing method to manual while defining the IP address and gateway. NetworkManager stores persistent profiles that specify how interfaces should behave when they are activated. Using nmcli con modify is the correct method for adjusting these profiles because it allows administrators to set static addressing, DNS servers, gateways, routes, and other network behaviors in a way that is preserved across reboots and interface restarts. The parameters ipv4.addresses and ipv4.gateway explicitly define the static settings, while ipv4.method manual ensures that DHCP is not used. This makes option A the correct and complete method for configuring a static IP within a NetworkManager environment.

Option B uses ifconfig, an older network management tool now deprecated on most modern Linux distributions. Additionally, ifconfig usage does not modify persistent configuration. Even if it configured the address temporarily, the configuration would be lost after reboot or after the interface is reset. Therefore, while historically useful, it does not meet the requirement for modifying a NetworkManager profile.

Option C does set a static address using ip addr add and sets a default route via ip route add, but like ifconfig, these changes are not persistent. They would apply only until the next network restart. Additionally, NetworkManager may overwrite these values when it activates the connection profile because NetworkManager assumes control of interfaces. Thus, this approach cannot be relied upon for stable configuration under NetworkManager.

Option D does not configure anything. Restarting network.target or related services will apply existing configuration but will not add or modify IP settings. Restarting services is an administrative action used after configuration changes, not a configuration method itself. Therefore, it does not address the requirement.

Option A is correct because it uses the proper tool, makes the change persistent, directly modifies the relevant profile, and fully defines the necessary static IP and gateway settings expected in a NetworkManager-managed networking environment.
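
A short sketch of the full workflow, reusing the profile name from the question, might be:

nmcli con modify officeLAN ipv4.addresses 192.168.10.25/24 ipv4.gateway 192.168.10.1 ipv4.method manual
nmcli con up officeLAN                        # re-activate the profile so the new settings take effect
nmcli -g ipv4.addresses con show officeLAN    # confirm the stored address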

Question 4:

A system administrator wants to find all files on a Linux system owned by the user student and also restrict the search to files modified within the last 3 days. Which command best accomplishes this?

A) find / -mtime -3
B) find / -user student -mtime -3
C) locate student -mtime 3
D) grep –owner student -mtime -3 /etc/passwd

Answer:

B

Explanation:

The command that satisfies all search criteria is find / -user student -mtime -3, because it performs a filesystem traversal beginning at the root directory and filters results by both ownership and modification time. The find command is a powerful and flexible utility for searching files based on attributes such as permissions, ownership, timestamps, type, names, and sizes. The -user student option ensures that only files owned by the specified user are included, while -mtime -3 limits results to files modified within the last 3 days. The combination of both makes option B the correct answer since it meets both conditions precisely.

Option A includes only -mtime -3, which would list files modified within the timeframe but would not restrict results to files owned by the user student. As a result, it would produce far more matches than desired. This makes it insufficient for cases when the administrator needs to specifically track files belonging to one user.

Option C does not use find at all. The locate command queries a prebuilt database that contains file names and paths, not metadata such as modification timestamps or ownership. It is useful for quickly locating files by name, but it cannot evaluate modification time or ownership criteria. Therefore, it fails to meet both search requirements.

Option D is invalid because grep does not search based on file metadata. It searches text inside files. Searching the /etc/passwd file using grep may help identify a user’s UID or username, but it cannot locate files owned by a user or consider modification times. The syntax here is also incorrect because grep does not have parameters such as –owner or -mtime.

Thus, option B correctly uses the appropriate tool and syntax to search the entire filesystem for files owned by student and modified recently. The flexibility of find allows administrators to build complex queries, and combining -user with -mtime is a common and effective method for identifying recent changes that may relate to suspicious activity, misconfigurations, or user actions. The ability to specify both ownership and modification age makes option B the only command that exactly satisfies the stated requirement.
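
In practice the search is often narrowed to regular files, with permission errors discarded, for example:

find / -user student -mtime -3 -type f 2>/dev/null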

Question 5:

A Linux technician needs to archive the /var/log directory into a compressed tarball named logs.tar.gz while preserving file permissions and ownership. Which command will achieve this result?

A) gzip /var/log logs.tar.gz
B) tar -czvf logs.tar.gz /var/log
C) zip -r logs.tar.gz /var/log
D) compress -rf logs.tar.gz /var/log

Answer:

B

Explanation:

The correct command is tar -czvf logs.tar.gz /var/log, because tar is the standard Linux tool for creating archives, and the combination of flags -c, -z, -v, and -f instructs tar to create a new archive, compress it with gzip, show verbose output, and write it to a file. Importantly, tar inherently preserves file permissions, ownership, symbolic links, and directory structure unless instructed otherwise, making it ideal for archiving system directories such as /var/log. The resulting logs.tar.gz file will contain the entire directory tree with all log files intact and properly compressed. Administrators commonly use this command for backup, rotation, transfer, and archival tasks because it provides an efficient and reliable method for bundling directories while maintaining metadata integrity.

Option A is incorrect because gzip expects a single file, not a directory. Gzip alone cannot compress a directory or produce a .tar.gz file without tar. Attempting to run gzip /var/log logs.tar.gz would fail because gzip does not take an output filename parameter in this way and cannot operate on directories. Therefore, it cannot generate the required tarball while preserving permissions.

Option C uses the zip command, which is not the standard tool for preserving Linux file permissions or ownership. While zip can recursively compress directories, it lacks the full preservation capability that tar offers regarding Unix permissions, ownership bits, and symbolic links. This makes zip unsuitable for system-level archives where maintaining permission integrity is essential. Additionally, zip typically produces .zip files rather than .tar.gz archives.

Option D uses compress, which is an older legacy utility largely replaced by gzip and bzip2. Furthermore, the syntax shown does not reflect how compress operates, and it cannot create tarballs. Compress works on single files similarly to gzip; thus it does not handle directory archiving or produce tar-format archives. It cannot achieve the desired result.

Because tar remains the most widely accepted and feature-complete tool for this type of archiving, option B is the only command that correctly creates a compressed archive while fully preserving metadata, structure, and properties of the /var/log directory. It is the precise tool required for generating a .tar.gz file suitable for backup, transfer, and long-term storage.
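
A minimal sketch of creating the archive and spot-checking its contents without extracting:

tar -czvf logs.tar.gz /var/log
tar -tzvf logs.tar.gz | head    # list the first few archived paths to verify

Note that restoring ownership on extraction generally requires running tar as root, for example tar -xpzf logs.tar.gz.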

Question 6:

A system administrator wants to schedule a backup script located at /usr/local/bin/dbbackup.sh to run every day at 2:30 AM. Which crontab entry correctly schedules this job for a user-level crontab?

A) 30 2 * * * /usr/local/bin/dbbackup.sh
B) 2 30 * * * /usr/local/bin/dbbackup.sh
C) @daily 2:30 /usr/local/bin/dbbackup.sh
D) 0 2:30 * * * /usr/local/bin/dbbackup.sh

Answer:

A

Explanation:

The correct crontab entry for running a backup script at 2:30 AM every day is 30 2 * * * /usr/local/bin/dbbackup.sh, which matches the standard crontab format of minute, hour, day of month, month, and day of week. Cron requires the minute field to come first, followed by the hour field. Therefore, to run at 2:30 AM, the minute is set to 30 and the hour is set to 2. This is exactly what option A provides. Cron then interprets the remaining * values as wildcards, meaning the job should run every day of the month, every month, and every day of the week. This ensures the backup script runs consistently without manual intervention. Cron is widely used in Linux for job scheduling because it is reliable, established, and accepted across nearly all distributions. Whether the user invokes crontab -e to set a personal crontab or edits /etc/crontab or files in /etc/cron.d for system-level jobs, the ordering of fields remains the same. The correctness of option A lies in its precise adherence to these expected field rules.

Option B reverses the hour and minute values. Cron does not interpret 2 30 * * * as 2:30 because the first field always represents minutes and the second always represents hours. With this entry, the minute is 2 and the hour is 30, meaning the job would be scheduled for minute 2 of the 30th hour of the day. Since there is no 30th hour, the entry is not meaningful in cron syntax, and cron may reject it or behave unpredictably. Thus, option B is fundamentally incorrect.

Option C appears to use a shortcut syntax. Cron does support special strings such as @daily, @hourly, @yearly, and others, but they do not accept time modifiers. The @daily directive always runs at midnight and cannot specify a custom time. Therefore, @daily 2:30 /usr/local/bin/dbbackup.sh is invalid syntax that cron will not interpret. The job will fail to execute or fail to load entirely.

Option D attempts to specify 2:30 by placing 2:30 in the hour field, but cron does not recognize time values containing a colon. Instead, it expects integers for each field. There is no syntactic construct in standard cron syntax that allows specifying a time as 2:30. The time must always be provided as separate integers representing minutes and hours in their respective fields. As a result, 0 2:30 * * * is also invalid.

Thus, option A is the only entry that follows the correct field structure for cron, correctly represents the intended time, and ensures daily execution. Using correct cron syntax is essential for maintaining predictable automation. For reliable system operations, administrators must ensure that the format and syntax align with cron’s strict parsing rules, making 30 2 * * * /usr/local/bin/dbbackup.sh the only valid answer.
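
As a reminder of the field order, the entry can be added with crontab -e and confirmed with crontab -l; a commented sketch:

# m  h  dom mon dow  command
30   2  *   *   *    /usr/local/bin/dbbackup.sh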

Question 7:

A Linux engineer needs to temporarily mount an NFS share located at nfsserver:/data/files to the directory /mnt/shared using options that disable file locking and ensure the mount does not attempt to reconnect after failure. Which mount command is most appropriate?

A) mount -t nfs nfsserver:/data/files /mnt/shared -o nolock,soft
B) mount -t nfs nfsserver:/data/files /mnt/shared -o hard
C) mount nfsserver:/data/files /mnt/shared --retry
D) mount -t nfs -o defaults,lock nfsserver:/data/files /mnt/shared

Answer:

A

Explanation:

The correct command is mount -t nfs nfsserver:/data/files /mnt/shared -o nolock,soft, because this combines the two required behaviors: disabling file locking and preventing hard retry attempts. The nolock option instructs the NFS client to bypass NLM-based file locking mechanisms, which is appropriate in environments where lock daemons are not running or where locking is unnecessary. This is common in lightweight or temporary mount scenarios. The soft option instructs the NFS client to stop retrying indefinitely when a server becomes unresponsive. With soft mounts, timeouts result in I/O errors rather than continuous retry attempts, ensuring that the client does not hang. This choice supports the requirement for a nonpersistent, temporary mount that should not wait indefinitely for a server to become available again. The combination of nolock and soft makes option A correct.

Option B uses the hard option, which causes the NFS client to retry operations indefinitely if the server fails to respond. This is appropriate for critical or mandatory operations where data integrity and reliability are essential, but it contradicts the requirement for a mount that should not attempt reconnection after failure. Hard mounts are typically used for persistent file systems where hanging or retrying is preferred over data corruption. Since the prompt explicitly requests the opposite behavior, option B is unsuitable.

Option C includes the --retry parameter, but this option is not part of the mount command for NFS. Retry behaviors are governed by options such as hard, soft, timeo, and retrans. Using --retry would lead to a syntax error or be ignored depending on the distribution. It also fails to specify disabling file locking, making it incorrect for both requirements.

Option D includes defaults,lock, which both contradicts the requirement to disable file locking and fails to prevent reconnection attempts. The lock option enables advisory locking via the NLM locking protocol, which is the opposite of what the engineer needs. The defaults keyword applies the standard mount options, which do not include soft. Because neither of the required behaviors is implemented, option D is also incorrect.

Therefore, option A is the only mount command that satisfies the requirement for a temporary mount without locking mechanisms and without persistent retries. Temporary NFS mounts often need quick responsiveness and minimal system impact. When a server is likely to be unstable or when the mount is only needed briefly, using soft mounts and disabling locking ensures that the client system can continue functioning even when the remote server becomes unavailable. The correct combination of options is essential to prevent system hangs and ensure predictable behavior. Option A meets this need exactly.
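
A minimal session for this temporary mount, including its later removal, might look like:

mkdir -p /mnt/shared
mount -t nfs -o nolock,soft nfsserver:/data/files /mnt/shared
umount /mnt/shared    # detach the share when the temporary work is finished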

Question 8:

A security administrator wants to verify the integrity of a downloaded ISO file named distro.iso by generating a SHA-256 checksum and comparing it to the published value. Which command will generate the correct checksum?

A) md5sum distro.iso
B) sha256sum distro.iso
C) shasum distro.iso --md5
D) gpg --verify distro.iso

Answer:

B

Explanation:

The correct command to generate a SHA-256 checksum is sha256sum distro.iso. This tool calculates the SHA-256 message digest of a file, producing a long hexadecimal string that uniquely identifies the file’s contents. Integrity verification through checksums is essential for ensuring that the ISO has not been corrupted or tampered with during download. Most Linux distributions publish SHA-256 checksums on their download pages, and users compare these values to validate authenticity. The sha256sum command is part of the GNU coreutils package, making it available on virtually all Linux distributions by default. Running sha256sum distro.iso outputs a checksum that can be compared manually or automatically with a known-good value. This process ensures that every bit of the ISO file is correct and unmodified.

Option A calculates an MD5 checksum. MD5 is considered weak because it is vulnerable to collisions. While some sites still publish MD5 hashes for quick integrity checks, they should not be used for security-sensitive verification such as confirming authenticity. Because the administrator specifically requests a SHA-256 checksum, MD5 does not meet the requirement. MD5 hashes can match even when files differ, making them unsuitable for modern integrity verification.

Option C uses shasum with an argument instructing it to use MD5, not SHA-256. This makes it equivalent to running an md5sum-style command. In addition to failing to provide a SHA-256 checksum, this approach incorrectly applies a weaker hashing algorithm. While shasum can generate SHA-1 or SHA-256 digests with the correct parameters, this option explicitly chooses MD5, making it incorrect.

Option D attempts to use gpg --verify, which is used for signature verification, not checksum computation. GPG signatures rely on asymmetric cryptography to validate that a file came from a trusted source. This requires a .sig or .asc signature file, which must be provided separately. It does not compute or compare a SHA-256 checksum. While GPG verification is often used alongside checksum verification for enhanced security, the prompt specifically asks for generating a SHA-256 checksum from the ISO, making this option incorrect.

Thus, the only tool that meets the security administrator’s requirement is sha256sum distro.iso. Using SHA-256 is a strong method for verifying file integrity because the algorithm is resistant to collisions and is widely accepted as secure. This ensures that users can rely on the checksum to confirm authenticity and detect corruption. The output is predictable, standardized, and supported across all Linux environments, making sha256sum the correct and necessary command for this task.
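
For example, assuming the vendor publishes a checksum list named SHA256SUMS (the filename varies by distribution), the comparison can be automated:

sha256sum distro.iso
sha256sum -c SHA256SUMS --ignore-missing    # checks only the listed files present locally (needs a recent GNU coreutils)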

Question 9:

A Linux administrator wants to analyze running processes and identify which ones are consuming the most memory on a system. Which command provides an interactive, real-time display sorted by memory usage?

A) ps -ef
B) free -m
C) top
D) systemctl status

Answer:

C

Explanation:

The correct command for interactive, real-time monitoring of memory usage is top. When executed, top launches a full-screen interface that updates at regular intervals, showing processes sorted according to CPU usage by default. However, users can press the M key while inside top to sort processes by memory consumption. This allows administrators to quickly determine which processes are using the most RAM. Top also shows system-level statistics such as total memory, swap activity, load average, and individual process attributes including user, PID, CPU utilization, memory percentage, and command name. These capabilities make top one of the most frequently used tools for system performance monitoring in Linux environments. Administrators rely on it when diagnosing performance issues, identifying memory leaks, or determining resource-hungry applications.

Option A, ps -ef, does display a list of running processes but it is not interactive and does not dynamically sort by memory. While ps is powerful and can be combined with flags to sort or filter by memory usage, it does not provide real-time updates unless executed repeatedly. Because the question requires an interactive, real-time display sorted by memory usage, ps -ef does not meet the requirement.

Option B, free -m, provides system-wide memory statistics but does not show processes. It reports total used and free RAM, buffers, caches, and swap usage. Although it is excellent for viewing overall memory pressure, it cannot identify specific processes responsible for high memory usage. Therefore, it cannot fulfill the requirement of process-level memory analysis.

Option D, systemctl status, shows the status of systemd services and is used for service management. It is not intended for general process monitoring and does not show memory usage across all processes. Instead, it provides details such as whether a service is active, failed, or inactive, as well as relevant logs. This makes it unsuitable for evaluating memory consumption among running processes.

Top is widely used because it is available on nearly all Linux systems by default and provides comprehensive real-time monitoring. Administrators rely on features such as filtering, sorting, sending signals to processes, and adjusting priority levels directly within the tool. It serves as a convenient interface for diagnosing memory hogs, runaway tasks, and system overload conditions. Because it matches the question’s requirement exactly and offers both interactive features and memory sorting capability, option C is the correct answer.
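
If a one-shot, scriptable view is also useful, procps ps can approximate the same ranking, for example:

top                                 # press Shift+M inside top to sort by %MEM
ps aux --sort=-%mem | head -n 10    # non-interactive top ten by memory percentage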

Question 10:

A Linux server administrator wants to configure sudo so that a user named analyst can run only the /usr/bin/systemctl command without needing to provide a password. Which line should be added to the sudoers file using visudo?

A) analyst ALL=(ALL) NOPASSWD: ALL
B) analyst ALL=(root) /usr/bin/systemctl
C) analyst ALL=(root) NOPASSWD: /usr/bin/systemctl
D) analyst /usr/bin/systemctl NOPASSWD: ALL

Answer:

C

Explanation:

The correct sudoers entry is analyst ALL=(root) NOPASSWD: /usr/bin/systemctl because this line grants the user analyst permission to execute only the systemctl command as root and does so without requiring a password. The structure of a sudoers rule typically follows the format user hosts=(runas) options: commandlist. In this case, analyst is the user being configured. The hosts field is set to ALL, meaning the rule applies on all hosts where the sudoers file is present. The runas field is (root), specifying that systemctl must be executed as the root user. The NOPASSWD tag applies to the following command, allowing execution without prompting for a password. Finally, /usr/bin/systemctl is the only command permitted, ensuring restrictions are tight and controlled.

Option A grants analyst passwordless sudo access to all commands. This is far too permissive and contradicts the requirement to restrict the user to one specific command. Giving unrestricted root-level access to a user introduces major security risks and violates the principle of least privilege.

Option B allows the analyst user to run systemctl as root, but it does not remove the password requirement. Because the administrator specifically wants to avoid password prompts, option B fails to meet a critical part of the requirement.

Option D is syntactically incorrect. The sudoers file requires the general structure user hosts=(runas) options: commandlist. This option reverses the ordering and places NOPASSWD incorrectly. It would not be interpreted correctly by sudo and therefore cannot accomplish the required behavior.

The correct line, option C, explicitly ties the NOPASSWD directive to the single allowed command. This ensures that the analyst user cannot escalate privileges through other commands or access shells, scripts, or arbitrary executables. Limiting sudo rights to a single command is a common administrative practice when delegating narrow system responsibilities such as restarting services, checking service status, or reloading units. The systemctl command controls systemd services, and granting access to it can be useful for helpdesk personnel or monitoring users, provided the access is tightly scoped with sudoers rules.

Therefore, option C correctly applies sudoers syntax, meets the requirement for passwordless execution, and ensures proper access restriction.
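
A hedged sketch of the rule as it would appear after editing with visudo:

analyst ALL=(root) NOPASSWD: /usr/bin/systemctl

Afterwards, running sudo -l as the analyst user (or sudo -l -U analyst as root) should list only /usr/bin/systemctl as permitted without a password.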

Question 11:

A Linux administrator needs to restrict SSH access so that only the users sam and devops can log in remotely, while all other users must be denied. Which configuration change in /etc/ssh/sshd_config will properly enforce this requirement?

A) DenyUsers sam devops
B) AllowUsers sam devops
C) PermitRootLogin no
D) Match User sam,devops AllowTcpForwarding yes

Answer:

B

Explanation:

To restrict SSH access so that only specific users can authenticate, the AllowUsers directive in /etc/ssh/sshd_config is the correct and intended configuration option. When AllowUsers is defined, SSHD permits login only for those users listed in the directive, and all others are implicitly denied unless explicitly listed. Therefore, AllowUsers sam devops satisfies the requirement exactly because only sam and devops will be able to log in through SSH, and the restriction is enforced immediately after SSHD is reloaded or restarted. The AllowUsers directive also supports patterns, host-based entries, and account-specific rules, but in its simplest form, listing usernames directly ensures precise control over which accounts can authenticate remotely.

Option A, DenyUsers sam devops, performs the opposite behavior. Instead of allowing only sam and devops, it would deny those two accounts and allow everyone else. This contradicts the requirement. DenyUsers is used when an administrator wants to forbid a specific list of users while permitting all others. Because the question specifies the need to allow only two users while restricting everyone else, DenyUsers cannot be used to satisfy that goal.

Option C, PermitRootLogin no, disables SSH login for the root user. This is a common hardening practice but does nothing to restrict SSH access to only sam and devops. PermitRootLogin has no influence on normal users and therefore does not meet the requirement. Even with this directive set to no, all other non-root users would still be able to log in by default.

Option D uses a Match block, which can set conditional rules, but AllowTcpForwarding has no relationship to SSH login permission. Match blocks can control authentication methods, environment settings, and permissions based on user, group, address, or host, but the directive shown here only controls TCP forwarding, not login access. Therefore, this option does not fulfill the requirement of restricting user logins and is irrelevant to SSH authentication control.

Using AllowUsers sam devops in sshd_config meets the requirement because it directly limits access to only the specified accounts. After saving the file and restarting the SSH daemon with a command such as systemctl restart sshd, only the two permitted users will be authenticated remotely. This is consistent with standard Linux hardening practices, especially on production servers where minimal access is required. Thus, option B is the correct and only suitable answer.
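
An example of the surrounding workflow, assuming the SSH daemon runs as the sshd service on this host (on Debian-based systems the unit may be named ssh):

echo 'AllowUsers sam devops' >> /etc/ssh/sshd_config    # or edit the file with a text editor
sshd -t                                                 # validate the configuration syntax
systemctl restart sshd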

Question 12:

A system engineer needs to increase the number of open file descriptors for a specific process named ingestor, which frequently reaches its limit and crashes. The engineer wants to permanently raise the soft limit to 65535 for this user-defined service. Which configuration file should be modified?

A) /etc/security/limits.conf
B) /etc/fstab
C) /etc/login.defs
D) /etc/hosts

Answer:

A

Explanation:

To permanently increase the number of open file descriptors for a specific user or service, the correct configuration file to modify is /etc/security/limits.conf. This file is part of the PAM limits module and controls system-wide or user-specific resource limits including open file descriptors (nofile), processes, core file size, and memory limits. The administrator can enter a line such as ingestor soft nofile 65535 to raise the soft limit for the process owner named ingestor. The limits.conf file provides granular control over resource constraints, allowing administrators to adjust soft and hard limits for individual users, groups, or all users. These settings are applied at login time or when a process starts through PAM, making them suitable for long-term system configuration.

Option B, /etc/fstab, is used for configuring filesystem mounts. It includes entries that define how disks, partitions, network filesystems, and other storage devices should be mounted automatically. It does not relate to process limits and cannot modify open file descriptor values. As such, it is not relevant to managing nofile limits.

Option C, /etc/login.defs, controls default policies for user account creation such as password aging, UID ranges, and home directory configuration. While login.defs defines aspects of system user management, it does not set or control ulimit values or file descriptor limits. Changes here would not affect process resource limits and therefore do not meet the requirement.

Option D, /etc/hosts, is used for hostname-to-IP address mappings. It has no relation to resource limits or process management. Editing the hosts file can help with local DNS resolution or hostname identification but cannot influence the maximum number of open file descriptors.

To permanently update file limits, the settings must be applied through /etc/security/limits.conf or through files in /etc/security/limits.d/. The limits.conf file allows two types of limits: soft and hard. The soft limit defines the effective limit applied when a process starts, while the hard limit defines the maximum allowed value a user can set. In the scenario described, increasing the soft limit ensures that the process begins with enough file descriptors to avoid crashes or resource exhaustion.

Administrators often adjust these values on systems running high-throughput applications such as database engines, log processors, web servers, and ingestion pipelines. Without setting appropriate limits, applications may hit default values (such as 1024), leading to failures when processing large numbers of files or network connections. Configuring the limits.conf file ensures reliable operation and predictability across user sessions, making it the correct answer.
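
A hedged example of the entries, setting both the soft and hard values, followed by a quick check from a fresh login:

# /etc/security/limits.conf
ingestor  soft  nofile  65535
ingestor  hard  nofile  65535

After the change, su - ingestor -c 'ulimit -n' should report 65535 once PAM applies the limits at login.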

Question 13:

A Linux administrator needs to rotate application logs stored in /var/myapp and keep weekly archives for 8 weeks. The logs must be compressed during rotation. Which configuration block is appropriate for placement inside a logrotate configuration file?

A) /var/myapp/*.log { weekly rotate 8 compress }
B) /var/myapp/*.log { daily rotate 8 nocompress }
C) /var/myapp/*.log { monthly rotate 4 compress }
D) /var/myapp/*.log { size=1M copytruncate }

Answer:

A

Explanation:

The correct configuration block for logrotate in this scenario is /var/myapp/*.log { weekly rotate 8 compress }, because it matches all specified requirements: weekly rotation, retention of eight archive files, and compression of rotated logs. Logrotate is a powerful tool for automatically managing log file sizes, retention, and compression. By specifying weekly, the administrator ensures that rotation happens once per week. The rotate 8 directive ensures that eight archived logs are kept before the oldest is deleted. The compress directive instructs logrotate to compress each rotated log using gzip by default. This configuration ensures that the current log remains active while older logs are efficiently stored and preserved.

Option B configures daily rotation instead of weekly and also uses nocompress, which contradicts the requirement. While daily rotation may be appropriate for rapidly changing logs, it does not match the administrator’s stated preference. Without compression, rotated logs consume more disk space and do not follow the requirement to compress older logs.

Option C sets monthly rotation and keeps only four archived logs. Both of these settings contradict the requirement for weekly rotation and eight-week retention. Monthly rotation is useful when logs are small or rarely change, but it is not appropriate for an application generating moderate-to-heavy output that requires weekly handling.

Option D specifies size-based rotation using size=1M and also uses copytruncate. Neither of these directives aligns with the requirements. Size-based rotation focuses on file size instead of time intervals. Copytruncate is used for applications that cannot reopen their log file after rotation, but it is not requested here. More importantly, this block does not specify weekly rotation, retention count, or compression, making it unsuitable.

Logrotate configurations follow a predictable syntax: a log file or wildcard pattern is listed, followed by a block of directives. Weekly specifies the rotation frequency, rotate N sets the number of archives to retain, and compress instructs logrotate to apply gzip compression to older logs. This makes the configuration in option A the only valid solution.

Managing logs effectively is essential for application performance, disk space management, and system reliability. Weekly rotation often strikes a balance between keeping logs manageable and ensuring meaningful data retention without filling disk space too rapidly. Compressing logs reduces storage usage, especially when logs contain repetitive text. Therefore, the provided block in option A is consistent with best practices and precisely satisfies the requirements.
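
Placed in a drop-in file such as /etc/logrotate.d/myapp (a hypothetical filename), the block can be tested without rotating anything by using logrotate's debug flag:

/var/myapp/*.log {
    weekly
    rotate 8
    compress
    missingok    # optional: do not error if no log exists yet
}

logrotate -d /etc/logrotate.d/myapp prints what would be done without performing the rotation.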

Question 14:

A Linux administrator needs to determine which kernel modules are currently loaded into memory on a running system. Which command will display this information?

A) lsmod
B) modprobe -r
C) uname -a
D) depmod -a

Answer:

A

Explanation:

The correct command for displaying loaded kernel modules is lsmod. This command queries the /proc/modules virtual file and presents a list of modules currently loaded into the kernel. The output includes the module name, size, and the number of times it is used, which helps administrators understand dependencies and module activity. The lsmod utility is simple, reliable, and widely available across Linux distributions. It allows administrators to verify whether specific drivers, filesystem modules, or hardware-related components are currently active. This is useful when diagnosing hardware issues, configuring devices, or ensuring that modules required for certain features are loaded.

Option B, modprobe -r, removes a module from the kernel. Although modprobe is an essential tool for managing kernel modules, removing modules does not provide insight into which modules are currently loaded. The -r flag specifically unloads modules, so it cannot be used for listing purposes.

Option C, uname -a, displays kernel version information along with system architecture and host details. While uname helps identify kernel release numbers, it cannot list modules or provide any information about module activity. It is useful for confirming kernel versions but unrelated to module management.

Option D, depmod -a, builds or updates the module dependency map in /lib/modules/<kernel-version>. It scans module files and generates dependency information used by modprobe. This is typically run after installing new modules. However, depmod does not display which modules are currently loaded and instead prepares the system for modules to be loaded correctly in the future.

For administrators troubleshooting functionality related to network drivers, RAID controllers, filesystem support, or hardware detection, knowing which modules are loaded is essential. The lsmod command provides exactly this information in a clean and readable format. Therefore, it is the only correct answer.
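
For example, to confirm whether a particular driver is resident (nvme is only an illustrative module name):

lsmod | grep -i nvme
modinfo nvme    # show the module's file, version, and parameters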

Question 15:

A Linux engineer wants to identify which ports are currently being listened on by services, including both TCP and UDP, and display the corresponding processes. Which command provides the most comprehensive and real-time information?

A) ss -tulnp
B) lsof /etc/services
C) netstat -r
D) systemctl list-sockets

Answer:

A

Explanation:

The correct command for viewing listening ports along with associated processes is ss -tulnp. The ss utility is the modern replacement for netstat, providing faster and more detailed socket information. The combination of flags -tulnp instructs ss to show TCP (-t) and UDP (-u) sockets, listening ports (-l), numeric output (-n), and associated process information (-p). This provides a complete view of which services are listening on network ports, which processes own those ports, and whether they are using TCP or UDP. Because ss operates directly against kernel structures using netlink, it is more efficient and accurate than older tools.

Option B, lsof /etc/services, is incorrect because lsof examines open files, not network listeners, when used this way. The /etc/services file contains service-to-port mappings but does not reflect active processes or open ports. Running lsof on this file simply shows which processes have the file open, which is rarely useful.

Option C, netstat -r, displays routing tables, not listening ports. While netstat can show ports with netstat -tulnp, the -r flag specifically prints routing entries. Modern Linux environments use ss instead of netstat for performance and compatibility reasons.

Option D, systemctl list-sockets, lists systemd-managed sockets, but many network services do not use systemd socket activation. Daemons configured manually or started through init scripts may not appear. Therefore, systemctl list-sockets does not provide a complete picture of active listening ports.

Because ss -tulnp displays all listening ports, associated programs, TCP/UDP differentiation, and real-time socket information, it is the correct and only comprehensive solution.
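
A short illustration, including a filter for a single port (8443 is only an example):

ss -tulnp
ss -tulnp | grep ':8443'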

Question 16:

A Linux administrator needs to create a new user account named qauser with a home directory, assign it to the supplementary group testers, and ensure the account is created with the default shell set to /bin/bash. Which command accomplishes all these requirements?

A) useradd -s /bin/bash qauser
B) useradd -m -s /bin/bash -G testers qauser
C) adduser qauser testers /bin/bash
D) usermod -m -G testers qauser

Answer:

B

Explanation:

The correct command is useradd -m -s /bin/bash -G testers qauser because it satisfies all conditions: creating the user, generating a home directory, assigning the account to the supplementary group testers, and setting the shell to /bin/bash. The useradd command is the standard tool across Linux distributions for creating new users. The -m flag instructs useradd to create the user’s home directory automatically. The -s option allows administrators to specify the login shell, ensuring the user receives the desired shell upon login. The -G option assigns the user to supplementary groups beyond the primary group that useradd generates by default. Therefore, this command fully meets the requirements.

Option A specifies the shell but fails to create a home directory and does not add the user to the testers group. Without -m, no home directory is created unless a system-wide default is configured, which should not be assumed. Therefore, option A cannot satisfy all the requirements.

Option C does not use correct syntax. The adduser command is available on some systems and does create users interactively, but it does not accept parameters in the order shown. Listing the group name and shell in this manner would cause the command to fail or behave unpredictably. Additionally, adduser’s syntax differs across distributions, making it unreliable for scripted or exact command usage.

Option D uses usermod, which modifies existing accounts rather than creating new ones. Because the account qauser does not yet exist, running usermod would fail. Additionally, usermod's -m flag does not create a home directory the way useradd -m does; it is used together with -d to move an existing home directory to a new location. Thus, option D is incorrect.

User creation is a regular administrative task on Linux systems, especially in managed environments with role-based access control. A correct user creation command ensures consistent account setup, secure directory permissions, and correct shell configuration. Supplementary groups help assign users to specific roles or functional areas without granting excessive privileges. The testers group could be related to QA testing roles requiring access to test tools or environments. Using a well-formed useradd command helps maintain clean system administration practices by ensuring accounts are created with uniform and predictable configurations. Therefore, option B is the correct choice.
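
A hedged end-to-end sketch, creating the supplementary group first in case it does not already exist:

groupadd testers    # only needed if the group is missing
useradd -m -s /bin/bash -G testers qauser
id qauser           # confirm the UID, primary group, and supplementary groups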

Question 17:

A Linux server administrator wants to check the status of a software RAID array to determine if any disks are degraded or rebuilding. Which command provides detailed information about mdadm-managed RAID devices?

A) mdadm --detail /dev/md0
B) raidstatus /dev/md0
C) cat /proc/mounts
D) fdisk -l /dev/md0

Answer:

A

Explanation:

The correct command is mdadm --detail /dev/md0 because it provides comprehensive information about a RAID array, including device state, level, UUID, active and spare disks, rebuild status, failed devices, and sync progress. Mdadm is the standard management tool for software RAID on Linux. The --detail option produces a human-readable summary of the array and its member disks, displaying precise information about performance, chunk size, and resilience. This command is essential for administrators who monitor RAID health, diagnose storage issues, and verify redundancy operations.

Option B, raidstatus, is not a standard Linux utility and does not exist on most distributions. It is sometimes confused with third-party tools but is not part of mdadm or standard RAID management. Administrators must rely on mdadm for RAID monitoring rather than imaginary or deprecated tools.

Option C, cat /proc/mounts, provides information about mounted filesystems but does not show RAID-specific details. While RAID devices may appear in the list if mounted, cat /proc/mounts cannot reveal whether a device is degraded, syncing, or failed. Therefore, it cannot meet the requirement.

Option D, fdisk -l /dev/md0, displays partition information from the perspective of the RAID block device. However, RAID arrays often present a virtual device that does not show details about member disks or RAID status. Fdisk cannot identify active, failed, or rebuilding components and is unsuitable for RAID diagnostics.

RAID monitoring is crucial for ensuring data integrity and avoiding downtime. When a disk fails in RAID 1, RAID 5, or RAID 6 configurations, the array enters a degraded state and needs administrator attention. Mdadm provides the tooling needed to identify malfunctioning disks and track rebuilding progress. Administrators can automate monitoring through scripts or monitoring systems that parse the output of mdadm --detail for alerts. This makes option A the correct tool for RAID inspection.
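
For quick checks, /proc/mdstat complements the detailed view, for example:

mdadm --detail /dev/md0
cat /proc/mdstat    # compact per-array summary, including resync/rebuild progress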

Question 18:

A Linux administrator needs to determine which process is bound to port 8080 on a server where a web application fails to start due to a port conflict. Which command identifies the process occupying the port?

A) ps -ef | grep 8080
B) lsof -i :8080
C) netstat -r
D) systemctl list-units --all

Answer:

B

Explanation:

The correct command is lsof -i :8080 because lsof can list open files, and network sockets are treated as files on Linux. The -i option filters open files related to network connections, and specifying :8080 tells lsof to locate processes bound to port 8080, whether TCP or UDP. The output provides the process name, PID, user, and network protocol. This makes it highly effective for resolving port conflicts, which commonly occur when a web server, Java service, or container runtime occupies a port expected by another application.

Option A attempts to use ps combined with grep to locate processes containing the string 8080. However, this method is unreliable. A process might not contain the port number in its command-line arguments, and grep may produce false matches or miss processes entirely. Thus, ps -ef | grep 8080 is only useful in limited cases.

Option C, netstat -r, displays routing tables only. It cannot determine which process holds a port. While netstat can show open ports with different flags, netstat -r specifically does routing, making it irrelevant here.

Option D, systemctl list-units --all, displays systemd units but not listening ports. Although some services correspond to network daemons, systemctl cannot determine actual port usage or conflicts.

Port conflicts are a common issue in development and production environments. When an application attempts to bind to a port already in use, it fails to start, often producing errors such as address already in use. Identifying the culprit process is essential. Lsof excels in these scenarios because it directly queries kernel file tables. Administrators can terminate the conflicting process or reconfigure services as necessary. Thus, option B is correct.
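
An example session; the ss alternative shown also reveals the owning process if lsof is not installed:

lsof -i :8080
ss -ltnp | grep ':8080'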

Question 19:

A Linux engineer needs to view the last 50 lines of a log file and continue monitoring new log entries in real time as they are written. Which command achieves this behavior?

A) cat -n /var/log/messages
B) tail -n 50 -f /var/log/messages
C) head -50 /var/log/messages
D) less /var/log/messages

Answer:

B

Explanation:

The correct command is tail -n 50 -f /var/log/messages, which outputs the last 50 lines of the log file and then continues following the file for new entries using the -f option. The -n flag specifies how many lines to display initially. This combination is essential for real-time log monitoring, particularly during debugging, service restarts, or application testing. Tail is widely used by administrators for live observation because it provides instant feedback as log entries appear, helping diagnose issues quickly.

Option A, cat -n, prints the entire file with line numbers but does not follow the log. It is unsuitable for real-time monitoring and displays too much information, which can overwhelm the terminal and obscure relevant entries.

Option C, head, prints only the first lines of the file and does not monitor new content. It is used for inspecting header information or the start of a file but not for ongoing observation.

Option D, less, is interactive and allows searching, but does not automatically follow new log entries unless invoked as less +F, which is not shown here. Alone, less is not suited for continuous monitoring.

Real-time logging is crucial in troubleshooting situations such as web server issues, authentication failures, kernel messages, or application crashes. When administrators monitor a service restart, tail -f provides immediate feedback. For example, after restarting Apache, watching the access log and error log reveals whether requests succeed or if configuration errors appear. Cloud-native environments and DevOps workflows rely extensively on tail-style monitoring. Therefore, option B is correct.
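
For instance, while restarting a service in another terminal:

tail -n 50 -f /var/log/messages

On systems that log only to the journal, journalctl -f offers the same follow behavior.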

Question 20:

A Linux system administrator wants to unmount a busy filesystem but receives an error indicating the device is in use. Which command helps identify which processes are using files or directories on the target mount point?

A) killall -9
B) df -h
C) fuser -m /mountpoint
D) pwd

Answer:

C

Explanation:

The correct command is fuser -m /mountpoint because fuser identifies processes using files, sockets, or directories, and the -m option treats the argument as a mounted filesystem. Fuser outputs a list of process IDs that are currently accessing the filesystem, allowing administrators to stop, kill, or examine those processes. This is essential when a filesystem must be unmounted safely. A busy filesystem cannot be unmounted until all file handles are closed, and fuser provides the necessary information to resolve this.

Option A, killall -9, aggressively terminates processes by name but cannot identify which processes are using the filesystem. Using killall blindly can cause unintended service outages.

Option B, df -h, displays disk usage and mount points but does not show process usage. It is useful for monitoring space but irrelevant to unlocking mounts.

Option D, pwd, returns the current working directory. If the current shell sits inside the mountpoint, the user cannot unmount it, but pwd does not identify other processes.

Unmounting busy filesystems is a common administrative challenge. Applications, terminals, background jobs, or user shells may hold references. Fuser solves this problem by enumerating processes with open file descriptors tied to the mount. Administrators can then gracefully close applications or forcefully terminate them depending on system policies. Thus, option C is correct.
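
A hedged sequence using /mnt/data as a placeholder mount point:

fuser -vm /mnt/data    # verbose list of PIDs, users, and access types on the mount
fuser -km /mnt/data    # optionally terminate those processes (use with care)
umount /mnt/data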

 
