CompTIA XK0-005 Linux+ Exam Dumps and Practice Test Questions, Set 3 (Questions 41-60)

Question 41:

A Linux administrator needs to create a compressed backup of an entire directory named /opt/projects and store it as projects.tar.bz2 while preserving file permissions and ownership. Which command accomplishes this correctly?

A) tar -cvf projects.tar /opt/projects
B) tar -cjvf projects.tar.bz2 /opt/projects
C) gzip /opt/projects > projects.tar.bz2
D) zip -r projects.tar.bz2 /opt/projects

Answer:

B

Explanation:

The correct command is tar -cjvf projects.tar.bz2 /opt/projects because it creates a compressed tar archive using bzip2 compression while preserving file permissions, directory structure, and ownership. The c flag instructs tar to create a new archive, j applies bzip2 compression, v displays verbose output, f specifies the filename, and the final argument is the target directory. This ensures a fully functional backup suitable for restoration with preserved attributes important for Linux applications, scripts, and user access rights.

Option A would create an uncompressed tar archive, projects.tar, which does not match the requirement for a .tar.bz2 compressed file. Although tar preserves permissions and ownership by default, the lack of compression contradicts the administrator’s goal.

Option C incorrectly uses gzip directly on a directory. Gzip compresses individual files; even its recursive -r option only compresses each file in place rather than producing a single archive. It therefore cannot produce a .tar.bz2 file or preserve directory metadata in the structured way that tar does.

Option D uses the zip utility, which is not suitable for Linux backups requiring exact preservation of permissions, ownership, or special files such as symlinks or device files. The .zip format does not fully preserve Linux metadata. Furthermore, using zip to create a file named .tar.bz2 is misleading and incorrect.

Tar is the preferred tool for Linux backup operations because it can archive directories recursively while maintaining the exact environment needed for restoring an application. When backing up project directories, it is essential to preserve ownership so that deployment, runtime, or development tools continue functioning correctly. Bzip2 compression provides a good balance between compression ratio and resource usage, making it ideal for large directories.

The resulting file, projects.tar.bz2, can be re-extracted using tar -xjvf projects.tar.bz2, restoring everything accurately. Therefore, option B is the correct and only suitable command.
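
A brief end-to-end sketch using the names from the question (the -p flag on extraction and running as root ensure permissions and ownership are restored):

tar -cjvf projects.tar.bz2 /opt/projects      # create the bzip2-compressed archive
tar -tjvf projects.tar.bz2                    # list contents without extracting
sudo tar -xjpvf projects.tar.bz2              # extract, preserving permissions (root restores ownership)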

Question 42:

A system engineer needs to permanently configure the hostname of a Linux server named appserver01. Which method correctly updates the hostname so it persists across reboots?

A) echo appserver01 > /proc/sys/kernel/hostname
B) hostnamectl set-hostname appserver01
C) hostname appserver01
D) echo "HOSTNAME=appserver01" > /etc/environment

Answer:

B

Explanation:

The correct method to permanently set the hostname on a modern Linux system using systemd is hostnamectl set-hostname appserver01. The hostnamectl tool interacts with systemd-hostnamed, ensuring that the static hostname, pretty hostname, and transient hostname are updated consistently. This change persists across reboots and integrates with other systemd components. Using hostnamectl also updates /etc/hostname on most distributions, ensuring proper system identification for services, logs, networking, and monitoring agents.
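
A quick sketch of the change and its verification:

sudo hostnamectl set-hostname appserver01
hostnamectl status                            # confirm the static hostname
cat /etc/hostname                             # the persistent file hostnamectl updates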

Option A modifies /proc/sys/kernel/hostname, which only affects the runtime hostname. Since the /proc filesystem contains kernel runtime values, any change written there does not persist after reboot because this virtual filesystem is regenerated by the kernel during startup.

Option C uses the hostname command, which sets the hostname only temporarily. It does not modify persistent configuration files. After reboot, the system reverts to the hostname stored in configuration files, meaning this answer fails the requirement of permanence.

Option D incorrectly attempts to modify /etc/environment. This file is used for setting global environment variables, not hostnames. It will not change system identification and will not be read by hostname services.

Proper hostname configuration is important for log consistency, DNS integration, monitoring systems, and cluster orchestration tools. Without a persistent hostname, services such as SSH, systemd units, and system monitoring agents may behave inconsistently or fail authentication checks. hostnamectl ensures a clean, reliable configuration and is therefore the correct answer.

Question 43:

A Linux administrator needs to determine which systemd services failed during the boot process to troubleshoot startup issues. Which command displays failed services only?

A) systemctl list-units
B) systemctl status
C) journalctl -xe
D) systemctl --failed

Answer:

D

Explanation:

The correct command is systemctl --failed because it filters systemd units to display only those that have entered a failed state. This allows administrators to quickly pinpoint services that encountered errors, dependency failures, misconfigurations, or crashed during boot. The output lists the unit name, load state, active state, and description. Combined with systemctl status unitname or journalctl -u unitname, administrators can efficiently diagnose issues.
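
A typical workflow once a failed unit appears in the list (unitname is a placeholder for whatever the output shows):

systemctl --failed                            # list only failed units
systemctl status unitname.service             # inspect the failure summary
journalctl -u unitname.service -b             # read that unit's logs from the current boot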

Option A, systemctl list-units, lists every unit systemd currently has loaded (and, with --all, inactive ones as well), so failed units are buried in a long list that must be searched manually. It does not filter by failure and is therefore less efficient.

Option B, systemctl status, requires specifying a service or displays only a summary of the manager status. It does not give a filtered list of failed services.

Option C, journalctl -xe, shows detailed logs but does not isolate failed systemd units. It is helpful for diagnostics but not for generating a quick list of failed services.

System startup troubleshooting often requires identifying which services failed. Using systemctl --failed gives administrators the shortest path to determining misbehaving components. This is especially useful in environments where multiple services depend on each other and a single failure can cascade. The correct answer is D.

Question 44:

A Linux administrator needs to configure a new network interface using systemd-networkd. The interface should receive an IP address via DHCP. Which configuration file is needed?

A) /etc/systemd/network/10-dhcp.network
B) /etc/sysconfig/network-scripts/ifcfg-eth0
C) /etc/NetworkManager/system-connections/dhcp.nmconnection
D) /etc/netplan/01-netcfg.yaml

Answer:

A

Explanation:

The correct configuration file for systemd-networkd is /etc/systemd/network/10-dhcp.network. Systemd-networkd manages network interfaces using .network files, which contain key-value pairs specifying interface behavior. A minimal DHCP-based configuration looks like this:

[Match]
Name=eth0

[Network]
DHCP=yes

Placing this configuration in /etc/systemd/network/ with a .network extension enables systemd-networkd to assign a DHCP address at boot. The numbering prefix such as 10- ensures ordering precedence.
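
A short usage sketch, assuming the eth0 example above (systemd-networkd must be running for .network files to take effect):

sudo systemctl enable --now systemd-networkd
networkctl status eth0                        # confirm the interface obtained a DHCP lease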

Option B corresponds to legacy network-scripts used on older Red Hat-based systems. These are not used by systemd-networkd.

Option C corresponds to NetworkManager’s configuration system and is not applicable here.

Option D corresponds to Netplan, used by Ubuntu-based distributions, not systemd-networkd.

Systemd-networkd is designed for lightweight, fast, and scriptable networking without the overhead of NetworkManager. It is ideal for servers, containers, and embedded systems. Because .network files are the required configuration method, option A is correct.

Question 45:

A Linux engineer needs to restrict a scheduled cron job so it can only run when the system load average is below a threshold. Which tool allows defining such conditions for periodic tasks?

A) crontab
B) systemd timers
C) at
D) nohup

Answer:

B

Explanation:

The correct answer is systemd timers because the systemd unit framework provides conditional execution and resource-control features that plain cron lacks. Unit files support condition directives such as ConditionACPower= and ConditionPathExists=, and newer systemd releases add pressure-based checks (for example, ConditionCPUPressure=) that can skip a run while the system is busy. A timer's companion service can also use ExecCondition= to run a custom load check before the main command, and resource controls such as CPUQuota= or slice assignment limit a job's impact while it runs. Directives such as RandomizedDelaySec= and AccuracySec= additionally spread scheduled runs over a time window to avoid load spikes, although they do not measure load themselves.

Option A, crontab, cannot evaluate system load. Cron schedules tasks blindly based on time, without checking resource conditions. Scripts could manually check load averages, but cron itself does not provide this functionality.

Option C, at, schedules one-time tasks for future execution but does not support load-based conditional scheduling. It is useful for ad-hoc tasks but not suitable for sophisticated conditional control.

Option D, nohup, detaches a command from the terminal and ensures it continues running after logout. It does not handle scheduling or load control.

Systemd timers provide a modern replacement for cron with more powerful scheduling semantics and dependency management. Administrators can place timer units in /etc/systemd/system/ and define matching service units for execution. Combined with systemd resource control features, timers allow safer automation in high-demand environments. Therefore, option B is the correct answer.
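
A minimal sketch of a load-gated timer job, assuming a hypothetical helper script, illustrative unit names, and an arbitrary 4.0 load threshold (none of these come from the question):

#!/bin/sh
# /usr/local/bin/load_ok.sh: exit 0 (run) when the 1-minute load average is below 4.0
load=$(cut -d ' ' -f1 /proc/loadavg)
awk -v l="$load" 'BEGIN { exit (l >= 4.0) }'

# /etc/systemd/system/maintenance.service
[Service]
Type=oneshot
# A non-zero exit from ExecCondition= makes systemd skip this activation cleanly
ExecCondition=/usr/local/bin/load_ok.sh
ExecStart=/usr/local/bin/maintenance.sh

# /etc/systemd/system/maintenance.timer
[Timer]
OnCalendar=hourly
RandomizedDelaySec=5min

[Install]
WantedBy=timers.target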

Question 46:

A Linux administrator needs to identify which system calls a specific process is making in real time to troubleshoot performance issues. The process ID is 2480. Which command will display the system calls made by this running process?

A) tcpdump -p 2480
B) strace -p 2480
C) ltrace -c 2480
D) ps -fp 2480

Answer:

B

Explanation:

The correct command is strace -p 2480 because strace attaches to an already-running process and displays its system calls in real time. This tool is widely used by Linux administrators, developers, and troubleshooters to understand how a program interacts with the kernel. System calls are the interface between user applications and the Linux kernel, handling important operations such as file I/O, networking, memory allocation, process control, and hardware access. When a process behaves unexpectedly or experiences performance issues, examining its system call activity is one of the most effective diagnostic methods.

Strace allows administrators to observe exactly which system calls are being executed and how frequently. If an application is stuck in a loop performing excessive read or write operations, strace reveals this immediately. Likewise, if an application is hanging while attempting a resource that is unavailable or blocked, strace will show stalled system calls such as futex, poll, recvfrom, or accept. This real-time visibility is invaluable for diagnosing problems that cannot be detected through logs alone.

Option A, tcpdump -p 2480, is incorrect. Tcpdump is a packet capture tool used for network analysis, and it does not attach to processes by PID; its -p flag actually disables promiscuous mode rather than selecting a process. Tcpdump captures traffic at the interface level, so even if the target process is network-intensive, it cannot isolate traffic for a specific PID without additional tools or filtering methods external to tcpdump's core functionality.

Option C, ltrace -c 2480, traces library calls (such as glibc functions) rather than system calls, and attaching to a running PID would in any case require the -p flag. While library calls can provide valuable insight into application behavior, they do not present a complete picture of kernel interactions, and ltrace does not trace low-level system call activity by default. The -c flag also produces a summary of call counts rather than a real-time display, which does not satisfy the requirement to view system calls as they happen.

Option D, ps -fp 2480, merely displays information about the process such as UID, PID, CPU, memory usage, and command line. Although useful for process inspection, ps cannot show system calls or dynamic syscall activity. It is not designed for detailed debugging.

Strace is extremely helpful in diagnosing issues such as file permission denials, network timeouts, segmentation faults, resource exhaustion, and configuration path problems. For example, running strace on an application that cannot locate a configuration file often reveals which directories it is searching. When troubleshooting performance problems, strace helps identify excessive calls to open files, repeated attempts to connect to unavailable sockets, or lock contention.

Administrators must run strace as root or with equivalent privileges to attach to processes owned by other users. The output may be verbose, but filtering options exist, such as restricting output to specific system calls or tracing only network-related calls. Because strace directly provides the kernel-level behavior required, it is the only correct answer.
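
A few common invocations (the PID comes from the question; the output file path is illustrative, and all flags shown are standard strace options):

strace -p 2480                                # attach and stream every system call
strace -p 2480 -f -tt -T -o /tmp/pid2480.log  # follow threads, timestamp and time each call, write to a file
strace -p 2480 -e trace=network               # show only network-related system calls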

Question 47:

A Linux administrator needs to configure an automated task that runs before any network becomes available, ensuring it executes during the early boot sequence. Using systemd, which directive in a systemd service unit file ensures execution before networking starts?

A) After=network.target
B) Before=network.target
C) Wants=network-online.target
D) Requires=network.target

Answer:

B

Explanation:

The correct directive is Before=network.target because it explicitly instructs systemd that the service must run before the network.target is reached in the boot sequence. Systemd uses ordering directives to manage startup order among units, ensuring dependencies run in the correct sequence. The Before= directive establishes that the specified unit should start earlier than the referenced unit, in this case networking. When an administrator needs a script or service to run prior to the network stack initialization, specifying Before=network.target in the service’s unit file ensures it runs at the appropriate phase of startup.

Option A, After=network.target, is the opposite of what is required. Instead of running before networking becomes available, After=network.target delays the service until networking is initialized. This directive ensures a service waits for the network environment, making it unsuitable for pre-network tasks.

Option C, Wants=network-online.target, expresses that the service prefers network-online.target to be started but does not enforce ordering. It does not guarantee pre-network execution and does not replace Before= for ordering constraints. Wants= is a requirement directive that influences dependency inclusion but does not influence sequence.

Option D, Requires=network.target, forces systemd to start network.target before the given service, again contradicting the requirement. This creates a strict dependency, ensuring the service cannot start unless networking is already initialized.

Systemd’s startup sequence is driven by both dependency relationships (Requires=, Wants=) and ordering relationships (Before=, After=). For tasks such as pre-network configuration, early logging, disk checks, or initial environment setup, administrators need precise control over sequence. Using Before=network.target signals systemd to initiate the custom unit early in the boot process.

Service unit files typically include:

[Unit]
Description=My Pre-network Service
Before=network.target

[Service]
ExecStart=/usr/local/bin/pre_network_script.sh

[Install]
WantedBy=multi-user.target
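
After saving the unit (under an illustrative name such as /etc/systemd/system/pre-network.service), it can be activated and its ordering checked:

sudo systemctl daemon-reload
sudo systemctl enable pre-network.service
systemd-analyze critical-chain network.target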

This configuration ensures the service executes before networking starts. Because only option B satisfies this requirement, it is the correct answer.

Question 48:

A Linux engineer needs to analyze memory usage and identify which process is currently consuming the highest amount of RAM on the system. Which command provides real-time memory usage information and allows sorting by memory consumption?

A) free -h
B) top
C) vmstat
D) uptime

Answer:

B

Explanation:

The correct answer is top because top provides an interactive, real-time display of running processes, including their memory and CPU usage. While top defaults to sorting processes by CPU consumption, pressing the M key within the interface sorts processes by memory usage, allowing administrators to quickly identify memory-heavy applications. Top also shows overall RAM and swap usage, load averages, and process counts, making it a comprehensive monitoring tool.
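
Two quick ways to get a memory-sorted view (the -o sort key is supported by procps-ng top, the implementation shipped by most current distributions):

top -o %MEM                                   # start top pre-sorted by memory usage
ps aux --sort=-%mem | head -n 10              # non-interactive alternative: the ten largest memory consumers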

Option A, free -h, displays only system-wide memory usage, including total, used, free, and cached RAM. It cannot identify which process is using memory. While free is excellent for checking overall memory pressure, it lacks process-level granularity.

Option C, vmstat, provides periodic updates of system metrics including CPU, memory, processes, and I/O, but it is not process-level and does not identify specific memory-consuming processes.

Option D, uptime, displays load averages and system run time. It provides no memory information at all and cannot assist in identifying memory-heavy processes.

Top is one of the most common tools for diagnosing performance issues in Linux environments. Memory leaks, runaway applications, misconfigured Java processes, loaded services, or database instances can quickly consume available RAM. When RAM is exhausted, the system may resort to swap, dramatically slowing performance. In worst cases, the Out-of-Memory (OOM) killer may terminate processes.

Top allows administrators to monitor memory usage in real time and identify problematic processes. It also enables interaction such as sending signals, renicing processes, or filtering the view. Because of its versatility and widespread availability, top is the correct answer.

Question 49:

A Linux administrator is troubleshooting a DNS resolution issue and needs to query the authoritative nameservers for a domain and view the full DNS response details. Which command is most appropriate?

A) host example.com
B) dig example.com +trace
C) nslookup example.com
D) ping example.com

Answer:

B

Explanation:

The correct command is dig example.com +trace because the +trace option forces dig to perform iterative queries starting from the root servers, progressing through TLD servers, and ending at the authoritative nameservers. The output shows each step of the resolution process, including which DNS servers respond, referral chains, and authoritative answers. This is crucial when troubleshooting DNS propagation issues, caching problems, misconfigured name servers, or delegation errors.
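
Typical diagnostic queries (ns1.example.com is a hypothetical authoritative server):

dig example.com +trace                        # walk the delegation chain from the root servers down
dig example.com NS +short                     # list the domain's authoritative nameservers
dig @ns1.example.com example.com +norecurse   # ask one authoritative server directly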

Option A, host example.com, performs a simple DNS lookup but does not provide authority tracing or detailed query flow information. It is insufficient for deep diagnostic work.

Option C, nslookup example.com, provides basic DNS information but lacks full debugging output and iterative query tracing. It is considered deprecated in some distributions in favor of dig.

Option D, ping example.com, only tests network connectivity to the resolved IP address. It does not diagnose DNS issues and cannot display DNS query steps.

DNS troubleshooting often requires visibility into how a domain resolves through the hierarchical DNS system. +trace enables administrators to pinpoint resolution failures at specific DNS layers, making option B the correct answer.

Question 50:

A Linux administrator needs to manage and inspect container images and wants to list all locally stored Podman images. Which command provides this information?

A) podman show images
B) podman images
C) podman list
D) podman ps -a

Answer:

B

Explanation:

The correct answer is podman images because it lists all locally available container images along with their repository, tag, image ID, creation date, and size. Podman is an OCI-compliant container engine designed as a drop-in replacement for Docker, often used in rootless environments. Administrators frequently need to inspect stored images to manage disk usage, verify versions, perform cleanup, or identify outdated builds.

Option A, podman show images, is not a valid Podman command. Podman does not use the show keyword, so this option is incorrect.

Option C, podman list, does not exist as a standalone command. Podman uses specific subcommands such as images, ps, inspect, run, and pull.

Option D, podman ps -a, lists containers, not images. This includes running, exited, and created containers, but it does not show image information.

A typical Podman image listing looks like:

REPOSITORY   TAG      IMAGE ID       CREATED        SIZE
nginx        latest   9beeba249f3e   2 weeks ago    142MB
alpine       3.17     8f73f1c446c3   3 months ago   5MB

This allows administrators to identify unused or outdated images and remove them with podman rmi. Maintaining a clean image store is important for optimizing disk space and ensuring deployments use approved and updated images.
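
A short cleanup sketch (the image name matches the sample listing above):

podman images --filter dangling=true          # show untagged layers left behind by rebuilds
podman rmi nginx:latest                       # remove a specific image by repository:tag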

Because podman images is the exact command that lists local images, option B is correct.

Question 51:

A Linux administrator needs to verify which shared libraries a binary depends on before deploying it to a production environment. Which command will list the shared object dependencies for a given executable?

A) nm binaryfile
B) ldconfig -p binaryfile
C) ldd binaryfile
D) objcopy binaryfile

Answer:

C

Explanation:

The correct command is ldd binaryfile because it displays the shared libraries required by a dynamically linked executable. Shared libraries, also known as .so files, provide crucial functionality that executables load at runtime. When binaries are moved between machines, it is common for certain library versions to differ or be missing entirely. If required libraries are not installed, the application will fail to start, often producing errors indicating that particular library files cannot be found.

Using ldd allows administrators to verify exactly which libraries the binary loads and where they are located within the filesystem. The output includes the library filename, the resolved absolute path, and the memory address where the library will be mapped. This enables the administrator to confirm compatibility between systems, check for broken links, and identify missing dependencies before deployment.
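
Illustrative output (the binary name, library paths, and load addresses vary by system; libfoo.so.2 stands in for a hypothetical missing dependency):

ldd /usr/local/bin/myapp
        linux-vdso.so.1 (0x00007ffc8a9f0000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f2d3c000000)
        libfoo.so.2 => not found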

Option A, nm binaryfile, lists symbol tables from an object file or executable. While this can show function names or global variables inside the binary, it does not list dynamic library dependencies. Therefore, it cannot satisfy the requirement.

Option B, ldconfig -p binaryfile, is incorrect because ldconfig’s -p option simply lists the current dynamic linker cache and does not analyze the dependencies of a specific binary. It does not take a binary as input, making it unsuitable for dependency inspection.

Option D, objcopy binaryfile, is used for copying and transforming binary object files. It might be useful for stripping symbols or relocating sections, but it has no ability to analyze required libraries. It is intended for development and does not fulfill the administrative requirement of dependency inspection.

Shared libraries are essential components of Linux systems. When administrators deploy compiled applications, especially those developed in-house or obtained in binary-only form, ensuring library compatibility is crucial. Many applications rely on specific versions of core libraries such as glibc, pthreads, and system-level encryption libraries. If the wrong versions exist, unpredictable behavior or outright failure may occur.

Using ldd helps administrators detect if a binary has been linked against non-standard library paths or custom-compiled versions of system libraries. This is particularly important when applications were built in development environments configured differently from production systems. Ldd also helps identify whether a binary is statically or dynamically linked. If statically linked, the binary will include its library code internally and will not display typical shared libraries during an ldd inspection.

Because ldd is the only command that directly displays the shared object dependencies of a binary, it is the correct answer.

Question 52:

A Linux engineer needs to configure a scheduled job using systemd timers so that a script named cleanup.sh runs every 15 minutes. Which timer configuration section defines the interval-based schedule?

A) [Install]
B) [Timer]
C) [Unit]
D) [Service]

Answer:

B

Explanation:

The correct section is the [Timer] section because systemd timers use this portion of the configuration file to define when a unit should run. Systemd timers work in two parts: the .service file, which defines what will run, and the .timer file, which defines when it will run. The .timer file includes scheduling directives such as OnUnitActiveSec, OnCalendar, and other timing options.

To run cleanup.sh every 15 minutes, an administrator would create a .timer file with a structure similar to the following:

[Timer]
OnUnitActiveSec=15min

This directive schedules each run fifteen minutes after the unit was last activated; it is a monotonic timer, so it is normally paired with a directive such as OnBootSec= to supply the first trigger after boot. The systemd timer framework offers more control than traditional cron scheduling, supporting precise timing, delayed starts, random jitter, monotonic timers, and logging integration. All of these timing-related directives belong in the [Timer] section.
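
A minimal sketch of the full pair, assuming illustrative unit names and script path:

# /etc/systemd/system/cleanup.service
[Service]
Type=oneshot
ExecStart=/usr/local/bin/cleanup.sh

# /etc/systemd/system/cleanup.timer
[Timer]
OnBootSec=15min
OnUnitActiveSec=15min

[Install]
WantedBy=timers.target

The timer is then activated with systemctl enable --now cleanup.timer, and systemctl list-timers confirms the next scheduled run.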

Option A, the [Install] section, determines how the timer integrates into system boot processes. It uses directives such as WantedBy to ensure timers activate automatically after system startup. However, it does not define the schedule itself.

Option C, the [Unit] section, is used for descriptions, dependencies, and metadata. It does not provide scheduling information and is not used to set intervals.

Option D, the [Service] section, defines the executable action associated with the timer. It contains directives like ExecStart, determining what script or command runs. Although important, this section does not control timing.

System administrators use systemd timers for recurring tasks such as cleanup routines, log rotation, synchronization jobs, automated backups, and application maintenance scripts. They offer advantages over cron such as alignment with systemd logging, predictable timing behavior, improved error tracking, and easier integration with service dependencies.

Because interval definitions belong solely in the [Timer] section, option B is correct.

Question 53:

A Linux administrator needs to clone a Git repository using SSH authentication and must ensure the SSH key is used automatically. Which command correctly clones the repository using an SSH-based Git URL?

A) git clone example-repo
B) git clone ssh://user@server/repo
C) git clone git://server/repo
D) git clone ftp://server/repo

Answer:

B

Explanation:

The correct command is git clone ssh://user@server/repo because SSH-based Git URLs automatically use SSH keys for authentication. SSH authentication is widely used for secure access to private repositories because it eliminates the need to enter passwords or personal tokens. SSH keys are stored in the user’s .ssh directory, and Git integrates with the system’s SSH agent to perform authentication.
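
Equivalent SSH-based forms, plus explicit key selection (the hostname and paths are illustrative; GIT_SSH_COMMAND is a standard Git environment variable):

git clone ssh://git@git.example.com/team/project.git
git clone git@git.example.com:team/project.git
GIT_SSH_COMMAND='ssh -i ~/.ssh/deploy_key' git clone ssh://git@git.example.com/team/project.git

The second form is the scp-like shorthand for the same SSH transport.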

Option A, git clone example-repo, does not specify a protocol. Without a clear protocol, Git may attempt to use local paths or default behaviors, which cannot guarantee SSH authentication.

Option C uses the git protocol, which does not support authentication and is typically read-only. It does not use encryption, nor does it make use of SSH keys. This protocol is also considered outdated and is not suitable for secure or private repositories.

Option D attempts to use FTP, which Git does not support as a transport mechanism. Git cannot clone repositories over FTP under any circumstances.

SSH-based cloning provides encrypted communication and strong authentication. It is suitable for deployment environments, development teams handling private code, CI pipelines, and secure infrastructure repositories. SSH URLs clearly instruct Git to use the SSH transport layer and the associated key-based authentication. Therefore, option B is correct.

Question 54:

A Linux administrator needs to generate a private key and a Certificate Signing Request (CSR) for a web server. Which OpenSSL command correctly generates both in one step?

A) openssl verify -keyout server.key -out server.csr
B) openssl genrsa -out server.csr 2048
C) openssl req -new -newkey rsa:2048 -nodes -keyout server.key -out server.csr
D) openssl cert -create -key server.key -csr server.csr

Answer:

C

Explanation:

The correct command is openssl req -new -newkey rsa:2048 -nodes -keyout server.key -out server.csr because it generates both a new RSA private key and a Certificate Signing Request in a single step. The req subcommand is responsible for generating CSRs, and the -newkey rsa:2048 option instructs OpenSSL to produce a fresh private key of a specified size. The -nodes option tells OpenSSL not to encrypt the key with a passphrase. This is necessary for automated servers that must start without requiring human input.
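
Both artifacts can be verified before the CSR is submitted to a Certificate Authority:

openssl req -in server.csr -noout -text       # inspect the CSR's subject and public key
openssl rsa -in server.key -noout -check      # confirm the private key is valid and consistent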

Option A attempts to use the verify subcommand, which checks certificates but does not generate keys or CSRs. It cannot fulfill the requirement.

Option B incorrectly attempts to generate a key and output it as a CSR file. The genrsa command can only create private keys and cannot create CSR files. It also misuses the output filename.

Option D uses an invalid OpenSSL command. There is no cert subcommand that creates CSRs. The syntax does not correspond to any legitimate OpenSSL functionality.

Generating a CSR is an essential step when obtaining SSL certificates from Certificate Authorities. The CSR includes the public key along with identifying information such as Common Name, Organization, and Country. The private key created alongside it becomes the key that the web server uses to enable encrypted communication.

Thus, option C is the correct and complete command for generating both the private key and CSR.

Question 55:

A Linux system engineer needs to determine which user executed a specific sudo command earlier in the day. Which log file contains detailed records of sudo activity?

A) /var/log/messages
B) /var/log/sudo.log
C) /var/log/secure
D) /var/log/auth.log

Answer:

C

Explanation:

On Red Hat-family Linux distributions such as RHEL, CentOS, and Rocky Linux, which store authentication and security activity in a unified log, sudo events are recorded in the file located at /var/log/secure. This file logs security-related messages, including privilege elevation, successful and failed login attempts, changes to user identities, authentication checks, and sudo command usage. When an engineer needs to see which user executed a sudo command, this log contains time-stamped entries describing who issued the command, what command was run, and whether authentication succeeded.

Option A, /var/log/messages, stores general system information. Some services write informational messages there, but it is not specifically dedicated to authentication events. While it may contain occasional sudo-related lines depending on system configuration, it is not the designated file for sudo audit data.

Option B, /var/log/sudo.log, might appear to be correct based on name alone, but this file does not exist by default in most Linux systems. Only customized sudo configurations produce such a file, and even then it is not typical for administrators to rely on it.

Option D, /var/log/auth.log, is the authentication log used on Debian-based distributions such as Ubuntu, so it is not universal. The question targets systems that store authentication logs in the secure file, and on those systems the correct and broadly accepted location is /var/log/secure.

Logs in /var/log/secure include entries such as authentication failures, session openings, su attempts, and sudo command invocations. These logs are essential for security auditing, compliance checks, and troubleshooting privileged access issues. When reviewing the actions of administrative users or responding to possible security incidents, the sudo records provide a clear trace of elevated operations.
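
A quick search plus a representative entry (the username and command are illustrative; the line format is sudo's standard syslog output):

sudo grep sudo /var/log/secure
Jun 12 09:14:03 appserver01 sudo: alice : TTY=pts/0 ; PWD=/home/alice ; USER=root ; COMMAND=/usr/bin/systemctl restart nginx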

Because /var/log/secure is the file that reliably captures sudo usage on such systems, option C is the correct answer.

Question 56:

A Linux administrator needs to view hardware information about the system, including CPU details, motherboard model, BIOS version, and other low-level hardware parameters. Which command provides this comprehensive hardware overview?

A) lspci
B) dmidecode
C) lsmod
D) uname -a

Answer:

B

Explanation:

The correct command is dmidecode because it provides a detailed overview of system hardware information retrieved from the system’s Desktop Management Interface tables. This includes BIOS version, system manufacturer, product name, motherboard model, serial number, memory slot details, processor information, chassis data, and other hardware descriptors. It is one of the most thorough tools for retrieving hardware-level metadata because it reads information exposed by firmware rather than relying on the operating system’s hardware detection mechanisms.

When administrators troubleshoot hardware issues, such as identifying incompatible memory modules, firmware anomalies, or mismatched motherboard revisions, they require precise information directly from system firmware. The dmidecode command outputs information organized into structured sections such as BIOS Information, System Information, Processor Information, Memory Device Information, and many others. This makes it exceptionally valuable when verifying hardware inventory or diagnosing hardware discrepancies.
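
Targeted queries avoid scrolling through the full table dump (the -t and -s keywords shown are standard dmidecode selectors):

sudo dmidecode -t bios                        # BIOS vendor, version, and release date
sudo dmidecode -t memory                      # per-slot memory device details
sudo dmidecode -s system-serial-number        # print a single field, useful for asset tracking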

Option A, lspci, lists PCI devices such as network cards, graphics controllers, storage controllers, and expansion cards. While useful for device enumeration, it does not display BIOS details or motherboard metadata. It focuses entirely on PCI bus devices and excludes many categories of system information.

Option C, lsmod, displays loaded kernel modules. While this helps administrators troubleshoot driver-related issues, it provides no hardware inventory information. It cannot reveal motherboard details, BIOS information, or system identification codes.

Option D, uname -a, displays kernel version, machine architecture, and hostname. It gives an overview of the running kernel environment, not physical hardware specifications. Uname does not access firmware-level hardware details.

The dmidecode command is often used in datacenters and enterprise environments where specific hardware attributes are critical for compatibility and support. For example, identifying BIOS versions helps determine whether firmware upgrades are required for stability, security, or feature support. Memory slot inspection allows administrators to confirm whether memory modules are properly recognized or if empty slots can be utilized. System identification fields may be used for asset tracking or warranty validation.

Another advantage of dmidecode is that it works consistently across many Linux distributions and retrieves hardware metadata regardless of distribution-specific tools. Because the information originates from firmware tables, it provides a more accurate representation of hardware configuration than software-level detection tools.

For these reasons, dmidecode is the correct and most comprehensive command for accessing detailed system hardware information.

Question 57:

A Linux administrator needs to track file access operations in real time to determine which process is interacting with sensitive configuration files inside the /etc directory. Which command can monitor file activity as it occurs?

A) at
B) inotifywait -m /etc
C) mount -o remount,rw /etc
D) type /etc

Answer:

B

Explanation:

The correct command is inotifywait -m /etc because it monitors file access events in real time. The inotifywait tool, part of the inotify suite, uses kernel-based notification mechanisms to track file system activity such as file reads, writes, attribute changes, deletions, creations, and directory events. The -m option enables monitoring mode, keeping the command running continuously so that administrators can observe ongoing file operations without interruption.

Monitoring file activity is essential for troubleshooting configuration issues, identifying unauthorized access, detecting unexpected application behavior, or observing real-time system operations. Sensitive directories such as /etc contain key configuration files controlling system services, authentication, networking, databases, and security policies. If services behave unpredictably or configuration files change unexpectedly, tracking file events can reveal which process or user is responsible.

Option A, at, schedules a command for future execution but cannot monitor file system activity. It has no capability to track read or write operations.

Option C, mount -o remount,rw /etc, remounts a filesystem in read-write mode but does not provide any monitoring functionality. It simply adjusts mount options and plays no role in tracking interactions.

Option D, type /etc, is invalid because the type command identifies shell builtin commands or commands in the search path. It does not operate on directories or monitor file activity.

Inotify-based tools provide immediate event-driven notifications, which differ significantly from periodic scanning methods. This real-time behavior allows administrators to catch sudden changes that might otherwise go unnoticed. For example, an unexpected modification of configuration files at random times could indicate misconfigured automation, buggy software, or even malicious activity. Using inotifywait captures the exact moment files are accessed or altered; because inotify events do not identify the acting process, administrators typically correlate those timestamps with tools such as lsof or the audit subsystem to pinpoint the responsible process.

Administrators also use inotifywait to debug service startup issues by monitoring which configuration files are touched during initialization. It can help identify missing dependency files or confirm whether a service is reading the correct configuration path. With the -m option, the administrator maintains a continuous view until manually stopping the command.
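
A typical monitoring invocation and its output format (the event list is illustrative; -r recurses into subdirectories):

inotifywait -m -r -e open,modify,attrib,create,delete /etc
/etc/ OPEN hosts
/etc/ MODIFY hosts

Each output line names the watched directory, the event, and the affected file.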

Because inotifywait -m /etc directly monitors real-time file system interactions within the specified directory, it is the correct choice.

Question 58:

A Linux server administrator needs to restrict a user’s ability to run certain commands while allowing them limited administrative access. Which file should be modified to define granular permissions for sudo command usage?

A) /etc/group
B) /etc/passwd
C) /etc/sudoers
D) /etc/profile

Answer:

C

Explanation:

The correct file is /etc/sudoers because it defines which users or groups may run specific commands with elevated privileges. The sudoers configuration allows administrators to specify granular command permissions, control user privileges, assign host-based restrictions, enforce password rules, and configure command logging. It is the central mechanism for delegating administrative rights in a controlled and secure manner.

Proper use of the sudoers file ensures that users receive only the minimum privileges necessary for their roles. For instance, a developer may be allowed to restart a specific service but not modify system files. Another user may be allowed to run package updates but forbidden from altering firewall rules. These fine-grained controls make sudoers a fundamental part of security policy on Linux systems.

Option A, /etc/group, defines group memberships and collective access rights but cannot specify command-level permissions. Groups can be used within sudoers, but group files alone cannot restrict specific administrative capabilities.

Option B, /etc/passwd, stores user account information such as UID, GID, home directory, and default shell. It is not used for privilege delegation and does not control access to administrative commands.

Option D, /etc/profile, configures system-wide shell environment settings, such as environment variables and PATH entries. While it affects user shells, it is unrelated to command authorization or privilege delegation.

The sudoers file must be edited carefully using the visudo command. Visudo performs syntax checking to prevent configuration errors that could lock administrators out of the system. The sudoers file contains powerful rule definitions using syntax such as:

username ALL=(ALL) NOPASSWD: /usr/bin/systemctl restart apache

Such rules specify exactly which commands can be executed and under what conditions. Using these mechanisms, administrators can tailor security policies for different departments, roles, and operational needs.
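
A slightly richer sketch using a command alias (the alias name, username, and service are illustrative; edits always go through visudo so syntax is validated before saving):

Cmnd_Alias WEB_CMDS = /usr/bin/systemctl restart httpd, /usr/bin/systemctl status httpd
webadmin ALL=(root) WEB_CMDS

This grants webadmin exactly two commands as root and nothing else.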

Because /etc/sudoers is the authoritative file for defining sudo permissions and command-level access control, it is the correct answer.

Question 59:

A Linux engineer needs to view the current kernel ring buffer messages, including hardware initialization details and kernel-level warnings. Which command displays this information?

A) logger
B) journalctl -f
C) dmesg
D) tail -n 100 /var/log/messages

Answer:

C

Explanation:

The correct command is dmesg because it displays messages from the kernel ring buffer. The kernel ring buffer contains messages generated by the Linux kernel during boot and runtime. These include hardware detection events, driver initialization messages, kernel warnings, memory allocation reports, device enumeration, module loading data, and low-level diagnostics. The dmesg command prints the contents of this buffer, allowing administrators to inspect system behavior at a fundamental level.

Kernel messages are essential for identifying hardware-related problems such as failing disks, USB issues, driver conflicts, network interface initialization failures, memory errors, or kernel module faults. During troubleshooting, dmesg helps administrators determine whether the kernel recognized certain devices, whether drivers loaded correctly, or whether the system encountered warnings or errors at runtime.

Option A, logger, sends messages to the system log but does not read kernel messages. It is used for writing logs, not displaying the kernel buffer.

Option B, journalctl -f, follows systemd journal logs in real time. Although journalctl can include kernel messages, the question specifically asks to view the kernel ring buffer. Journal logs may contain additional unrelated entries, while dmesg focuses solely on kernel-level information.

Option D, tail -n 100 /var/log/messages, reads general system logs but does not guarantee accurate kernel ring buffer information. Many distributions separate kernel logs into different files or channels. It cannot replace dmesg for a direct kernel buffer view.

Dmesg is particularly helpful during hardware upgrades, kernel updates, and troubleshooting unexpected device behavior. It provides chronological, raw output from the kernel without filtering. Administrators often filter dmesg output using tools such as grep to search for key terms like error, fail, or warning.
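
Useful variants (all flags below come from the util-linux dmesg implementation):

dmesg -T                                      # human-readable timestamps
dmesg --level=err,warn                        # only error- and warning-level messages
dmesg -w                                      # follow new kernel messages as they arrive
dmesg | grep -iE 'error|fail|warn'            # ad-hoc keyword filtering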

Because dmesg directly accesses and displays the kernel ring buffer, it is the correct answer.

Question 60:

A Linux administrator needs to locate a missing configuration file within the filesystem by searching for its exact name, config.ini. Which command performs this task efficiently by scanning directories starting at the root filesystem?

A) whereis config.ini
B) locate config.ini
C) find / -name config.ini
D) ls -R / config.ini

Answer:

C

Explanation:

The correct command is find / -name config.ini because it searches the entire filesystem hierarchy starting at the root directory for a file matching the given name. The find utility performs a real-time directory traversal, inspecting each directory and subdirectory for matches. This is particularly useful when a file may reside in non-standard locations, when software packages place configuration files in unexpected paths, or when administrators need certainty about a file’s actual location.

Option A, whereis config.ini, is designed primarily for locating binary executables, source files, and man pages. It does not perform a full filesystem search and cannot reliably locate arbitrary configuration files.

Option B, locate config.ini, relies on a prebuilt database that is updated periodically. If the database has not been updated recently, locate may return outdated or invalid paths. It also may miss newly created files until the next database refresh.

Option D, ls -R / config.ini, attempts a recursive listing but does not perform name-based searching. It is inefficient, produces large amounts of output, and does not highlight file matches.

Find offers precise control over search criteria, allowing administrators to search by filename, type, permissions, size, modification time, and more. Searching for configuration files is a common administrative task, and find provides accuracy regardless of database freshness or command limitations. Because find / -name config.ini directly performs the required search, it is the correct answer.
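
In practice, a root-wide search run as an unprivileged user emits many permission-denied messages, which are usually discarded; the second form narrows the search to likely directories:

find / -name config.ini 2>/dev/null
find /etc /opt -type f -name config.ini 2>/dev/null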

 
