Question 81:
A Linux administrator needs to find out which process is using a specific TCP port that appears to be causing a conflict with another service. Which command will identify the process ID associated with that port?
A) ps aux
B) ss -tulnp
C) arp -a
D) dmidecode
Answer:
B
Explanation:
The correct command is ss -tulnp because it displays socket statistics and shows which processes are listening on or using specific ports. The ss utility is a modern replacement for older networking tools such as netstat and provides faster, more detailed socket information. The -tulnp combination displays TCP and UDP sockets (-t and -u), restricts output to listening sockets (-l), shows numeric addresses and ports rather than resolved names (-n), and displays the associated process names and PIDs (-p).
Option A, ps aux, lists running processes but does not display network port usage or socket bindings. Without correlation to ports, it cannot identify which process is using the problematic port.
Option C, arp -a, displays address resolution protocol tables, revealing IP-to-MAC mappings. It has nothing to do with identifying which process is using a port.
Option D, dmidecode, reads hardware information from system firmware and is unrelated to networking or process activity.
When administrators troubleshoot port conflicts, such as when a web server, database, or application fails to start because the required port is already in use, identifying the process occupying the port is crucial. The ss -tulnp command provides lines of output that clearly indicate the local address, port number, and PID/program name.
For example, an entry might show:
tcp LISTEN 0 128 0.0.0.0:8080 users:(("myapp",pid=1256,fd=3))
This tells the administrator exactly which process is bound to port 8080. With this information, they can decide to stop the process, reconfigure the service to use another port, or adjust firewall settings.
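To narrow the output to the conflicting port, the listing can be piped through grep (a minimal sketch using the same port as the example above):
ss -tulnp | grep ':8080'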
Ss is faster and more accurate than older tools such as netstat because it retrieves socket information directly from the kernel. It is particularly useful during high-load situations where rapid updates are necessary.
Because ss -tulnp provides precise, process-level port usage data, it is the correct answer.
Question 82:
A Linux engineer needs to set a static IP address on a network interface using NetworkManager command-line tools. Which command applies a static IPv4 address to an existing connection profile?
A) nmcli con mod myprofile ipv4.addresses 192.168.10.20/24
B) ip addr add 192.168.10.20/24 dev eth0
C) ifconfig eth0 192.168.10.20 netmask 255.255.255.0
D) nmcli device wifi connect myprofile
Answer:
A
Explanation:
The correct command is nmcli con mod myprofile ipv4.addresses 192.168.10.20/24 because it uses NetworkManager’s command-line interface to modify an existing connection profile’s IPv4 configuration. NetworkManager manages network interfaces on many modern Linux systems, and nmcli allows administrators to set permanent IP addresses, gateways, DNS servers, and routing rules.
Option B, ip addr add, sets an IP address temporarily. The change is not persistent and will be lost after a reboot or NetworkManager restart.
Option C, ifconfig, is an older tool considered deprecated on many systems. Its changes are temporary unless manually scripted and maintained outside standard network management frameworks.
Option D, nmcli device wifi connect myprofile, is used for connecting to wireless networks, not modifying IP settings for a network interface.
Configuring IPv4 addresses through NetworkManager ensures consistent and predictable behavior. The administrator can specify not only the address but also gateway, DNS, and method. After modification, they can activate the profile using:
nmcli con up myprofile
NetworkManager stores these settings in its configuration files, ensuring the static IP persists across boots.
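A fuller static configuration might also set the gateway, DNS server, and addressing method in the same way before reactivating the profile (the gateway and DNS values shown are illustrative):
nmcli con mod myprofile ipv4.method manual ipv4.gateway 192.168.10.1 ipv4.dns 192.168.10.1
nmcli con up myprofile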
Administrators often need static IP assignments for servers, virtual machines, and devices that require stable addressing for remote access, routing, firewalls, or load balancers. By applying configuration through nmcli, they ensure compatibility with other NetworkManager features such as automatic route management, connection monitoring, and failover.
Because nmcli con mod correctly applies a persistent IPv4 address to an existing profile, option A is correct.
Question 83:
A Linux administrator needs to extract unique lines from a large log file to eliminate duplicates before processing the data further. Which command performs this operation efficiently?
A) sort logfile
B) uniq logfile
C) sort logfile | uniq
D) rm logfile
Answer:
C
Explanation:
The correct answer is sort logfile | uniq because uniq only removes duplicate lines when they appear consecutively. If the administrator runs uniq by itself on an unsorted file, duplicate entries scattered throughout the file will not be matched. Combining sort with uniq ensures that identical lines are grouped together before uniq processes them.
Option A, sort logfile, only sorts the file and does not remove duplicate entries.
Option B, uniq logfile, removes duplicates only if they appear immediately next to each other, so it will miss many duplicates in a typical unsorted log file.
Option D, rm logfile, deletes the file entirely and does not contribute to processing or deduplication.
Large log files often contain repeated error messages, repetitive access entries, or recurring event lines. Administrators may need to analyze only unique lines to identify patterns, reduce noise, and focus on core issues. Using sort groups identical entries together, and uniq then suppresses the adjacent duplicates, producing a clean list of distinct entries.
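For example, the deduplicated output can be redirected to a new file, or produced in a single step with sort's -u option, which is equivalent for this purpose (the output filename is illustrative):
sort logfile | uniq > unique.log
sort -u logfile > unique.log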
This pipeline is widely used for log analysis, data cleanup, and scripting workflows, making sort logfile | uniq the correct and efficient approach.
Question 84:
A Linux engineer needs to schedule a one-time system shutdown for 30 minutes in the future. Which command accomplishes this?
A) shutdown now
B) shutdown -h +30
C) reboot -f
D) poweroff -i
Answer:
B
Explanation:
The correct command is shutdown -h +30 because it instructs the system to halt thirty minutes from the moment the command is issued. The -h flag tells the system to halt, and the +30 parameter specifies the delay in minutes. This type of scheduled shutdown is useful for maintenance windows, controlled downtime, or preparing users for upcoming interruptions.
Option A, shutdown now, performs an immediate shutdown without delay.
Option C, reboot -f, forces an immediate reboot and ignores graceful shutdown steps.
Option D, poweroff -i, powers off the system immediately, ignoring scheduled timing requirements.
Using shutdown -h +30 also broadcasts a system message to all logged-in users, informing them of the scheduled halt, unless suppressed. This helps administrators notify active users and avoid unexpected interruptions.
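A custom wall message can be appended to the command, and a pending shutdown can be cancelled if plans change (the message text is illustrative):
shutdown -h +30 "Maintenance shutdown in 30 minutes"
shutdown -c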
Scheduling shutdowns helps ensure that running processes complete safely, users save their work, and critical jobs finish. Some systems may use shutdown procedures in automated scripts, monitoring systems, or planned operational tasks.
Because shutdown -h +30 delays the shutdown by thirty minutes exactly as required, option B is correct.
Question 85:
A Linux administrator needs to view the permissions of a directory, including whether it has special bits such as setuid, setgid, or the sticky bit enabled. Which command provides a detailed listing of these permissions?
A) chmod directory
B) ls -ld directory
C) chattr directory
D) readlink directory
Answer:
B
Explanation:
The correct command is ls -ld directory because it displays the full permission string for the directory, including ownership, permission bits, and any special mode bits such as setuid, setgid, or the sticky bit. The -l flag provides a long listing format, and the -d flag ensures that the details of the directory itself are shown rather than its contents.
Option A, chmod directory, modifies permissions but does not display them.
Option C, chattr directory, shows or changes filesystem attributes but not standard permission bits.
Option D, readlink directory, displays symbolic link targets and is irrelevant for checking directory permissions.
A long listing from ls -ld might show something like:
drwxrwsr-x
This indicates the setgid bit is applied. Another example:
drwxrwxrwt
This indicates the sticky bit, commonly used in shared directories like temporary storage locations.
Directory permissions determine who can list contents, create files, delete files, or traverse the directory. Special bits add additional behavior:
setuid on executables allows programs to run with the owner’s privileges
setgid on directories causes new files to inherit the directory’s group
sticky bit prevents users from deleting others’ files in shared directories
Administrators must understand these bits to maintain correct security behavior. Viewing them with ls -ld ensures full visibility into how the directory is configured.
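For instance, checking the shared temporary directory on most systems shows the sticky bit in the permission string (output abbreviated and illustrative):
ls -ld /tmp
drwxrwxrwt 12 root root 4096 /tmp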
Because ls -ld directory provides all required permission information, option B is the correct answer.
Question 86:
A Linux administrator needs to identify which shared libraries a running process has loaded into memory in order to troubleshoot compatibility issues. Which command will show all shared object files mapped into a process’s address space?
A) ldd
B) file
C) cat /proc/PID/maps
D) uname -r
Answer:
C
Explanation:
The correct command is cat /proc/PID/maps because it displays the memory mappings for a running process, including all shared libraries, anonymous memory regions, executable segments, stack space, and other mapped files. This file is part of the proc filesystem, which provides dynamic information about running processes. When administrators need to diagnose runtime behavior, especially issues linked to library mismatches, unexpected shared object versions, or dynamic linking problems, inspecting the memory mapping is essential.
Option A, ldd, shows the libraries an executable is linked against, but it does not show which libraries are actually loaded at runtime. Dynamic loading may occur after program start, meaning ldd cannot reflect real-time memory usage. Ldd is a static analysis tool and cannot capture runtime behavior.
Option B, file, identifies the type of a file but cannot show memory mappings or dynamic libraries.
Option D, uname -r, displays the kernel version and has no connection to shared library analysis.
The /proc/PID/maps file includes entries showing memory regions, permissions (read, write, execute), associated file paths, and address ranges. This is extremely useful when troubleshooting issues such as:
applications loading the wrong version of a library
memory corruption affecting specific segments
debugging crashes caused by library conflicts
determining whether optional components have been loaded
inspecting runtime linking behavior that differs from static linking
confirming security-hardening settings such as non-executable memory regions
Each line in /proc/PID/maps displays:
starting and ending addresses
permission bits
offset
device identifier
inode number
mapped file path (if any)
Administrators often investigate this file when dealing with complex applications, database engines, or custom-compiled software that relies heavily on dynamic loading. Because this output reflects the state of the process at the exact moment the command is executed, it provides a highly accurate and detailed view.
This differs from ldd because runtime loading mechanisms, such as dlopen, may bring in additional libraries long after the binary starts. Only /proc/PID/maps captures these live states.
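For example, to list only the shared objects currently mapped into a process (the PID shown is hypothetical):
grep '\.so' /proc/1234/maps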
Because cat /proc/PID/maps shows all memory mappings, including shared libraries currently loaded, it is the correct answer.
Question 87:
A Linux engineer notices a process consuming excessive CPU resources and wants to lower its scheduling priority without stopping it. Which command adjusts the nice value of an already running process?
A) nice
B) renice
C) chmod
D) sysctl
Answer:
B
Explanation:
The correct command is renice because it modifies the scheduling priority of a running process by adjusting its nice value. The Linux scheduler uses nice values to determine how much CPU time to allocate to processes. Higher nice values mean the process receives less CPU time, making it ideal for deprioritizing tasks that consume excessive resources.
Option A, nice, sets the priority at process startup but cannot modify the priority of an already running process. It must be used before the process starts.
Option C, chmod, affects file permissions and has nothing to do with scheduling.
Option D, sysctl, manages kernel parameters, not process-level CPU priorities.
Renice operates by specifying a new nice value and the PID of the process. For example:
renice 15 -p 1234
This raises the nice value to 15, giving the process lower priority. A lower nice value increases priority but requires elevated privileges. Administrators often renice background tasks like indexing services, bulk compression jobs, large data processing tasks, or debug processes so they do not overwhelm system resources.
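Lowering the nice value again, or setting a negative value to raise priority, requires elevated privileges, for example (PID is hypothetical):
sudo renice -n -5 -p 1234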
Renice helps maintain system stability by preventing runaway processes from starving other tasks. When high CPU usage causes sluggish response times, renice provides a safe, non-destructive way to manage workload distribution without terminating processes. Because renice is the only tool specifically designed to change the priority of running processes, option B is correct.
Question 88:
A Linux administrator needs to extract only the first column of a space-delimited file to process user account information. Which command performs this extraction?
A) head -1 file
B) awk '{print $1}' file
C) tail -f file
D) echo $1 file
Answer:
B
Explanation:
The correct command is awk '{print $1}' file because awk is designed for field-based processing of text files. The $1 variable represents the first field in a line, assuming whitespace as the default delimiter. This makes awk ideal for extracting specific columns from structured or semi-structured files.
Option A, head -1 file, extracts the first line, not the first column.
Option C, tail -f file, prints new content added to the file and is used for monitoring logs, not extracting fields.
Option D, echo $1 file, prints shell variables and does not operate on file content.
Awk automatically splits input lines into fields based on spaces or tabs. This makes it a powerful tool for processing system files such as:
/etc/passwd
log files
tabular exports
command output piped through scripts
system reports
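For files that use a different delimiter, such as the colon-separated /etc/passwd, the field separator can be changed with -F, for example:
awk -F: '{print $1}' /etc/passwd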
Administrators use awk frequently when parsing reports, extracting username fields, retrieving process IDs, filtering formatted command output, or preparing data for scripts. Because awk '{print $1}' file correctly isolates the first field, option B is correct.
Question 89:
A Linux administrator needs to create a compressed archive while preserving directory structure and file permissions. Which command creates a gzip-compressed tar archive named archive.tar.gz from the contents of /data/archive?
A) tar -xvzf archive.tar.gz /data/archive
B) tar -czvf archive.tar.gz /data/archive
C) gzip -r /data/archive
D) zip -r archive.tar.gz /data/archive
Answer:
B
Explanation:
The correct command is tar -czvf archive.tar.gz /data/archive because this creates a compressed archive using tar and gzip while preserving permissions, directory structure, symlinks, and file attributes. The -c option creates an archive, -z compresses it with gzip, -v lists files as they are processed, and -f specifies the archive filename.
Option A extracts an archive, not creates one.
Option C compresses files individually with gzip but does not bundle them into a single archive or preserve directory structure.
Option D uses the zip utility, which does not produce tar.gz files and may treat permissions differently.
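Once created, the archive's contents can be verified without extracting it by listing them:
tar -tzvf archive.tar.gz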
Tar archives are widely used for backups, transfer, and packaging because they retain full metadata, making them ideal for preserving system structures. Because tar -czvf produces the exact format required, option B is correct.
Question 90:
A Linux engineer needs to determine how long the system has been running since the last reboot. Which command displays system uptime in a human-readable format?
A) time
B) date
C) uptime
D) sleep 1
Answer:
C
Explanation:
The correct command is uptime because it displays how long the system has been running, how many users are logged in, and the system load averages. Uptime helps administrators determine stability, monitor reboot frequency, and verify maintenance schedules.
Option A, time, measures execution time of commands but not system uptime.
Option B, date, shows the current time and date but not uptime.
Option D, sleep 1, pauses execution and has nothing to do with uptime.
Administrators use uptime when auditing system reliability, verifying kernel updates, monitoring long-running servers, or checking whether unexpected reboots occurred. The output typically shows days, hours, and minutes, along with load averages that indicate how busy the system has been over time.
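On systems with a recent procps package, the output can also be simplified or the exact boot time displayed directly (flag availability may vary by distribution):
uptime -p
uptime -s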
Because uptime directly reports the duration since the last boot, option C is correct.
Question 91:
A Linux administrator needs to view all running processes in a hierarchical, tree-style format to understand parent-child relationships between system services. Which command provides this structured display?
A) ps -a
B) pstree
C) kill -l
D) jobs -l
Answer:
B
Explanation:
The correct command is pstree because it displays running processes in a hierarchical tree structure, showing which processes are parents and which are children. This is essential for understanding how the system organizes running tasks, how services spawn subprocesses, and how daemons interact with helper programs or background jobs.
Option A, ps -a, lists processes but does not show parent-child relationships in a visual or structured manner. Ps provides excellent detail but lacks the integrated tree structure needed for relationship analysis.
Option C, kill -l, displays a list of available signals but does not show process structure or hierarchy.
Option D, jobs -l, shows only background jobs in the current shell session and cannot represent system-wide process relationships.
Pstree is invaluable when administrators diagnose problems involving processes that spawn additional tasks. For example, a web server may spawn worker processes, a database server may fork multiple background threads, or an automation task may launch helper scripts. When troubleshooting issues such as excessive resource consumption, hung processes, or runaway subprocesses, pstree helps identify which root process created the problematic children.
The tree structure reveals connections such as:
init or systemd at the root
service processes branching below
child processes related to specific tasks
If a process is consuming excessive CPU or memory, pstree helps determine whether the root cause is the main parent or a misbehaving child. This is particularly helpful when diagnosing:
service loops where processes spawn recursively
zombie processes that remain due to uncollected children
daemon failures where helper processes remain active
orphaned tasks re-adopted by systemd
The visual representation makes pstree easy to interpret, even when dealing with hundreds of active processes. Administrators can also use additional options such as:
pstree -p (display PIDs)
pstree -u (show user ownership)
pstree -a (show full command lines)
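For example, to display the subtree of a single service along with PIDs and full command lines (the PID shown is hypothetical):
pstree -ap 1234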
Because pstree uniquely provides a hierarchical display of processes, it is the correct answer.
Question 92:
A Linux administrator wants to ensure that a script executes every time the system boots, before any user logs in. Which location is appropriate for placing a system-wide startup script on a system using systemd?
A) /etc/rc.local
B) /etc/init/startup
C) /etc/systemd/system
D) /etc/bashrc
Answer:
C
Explanation:
The correct location is /etc/systemd/system because systemd uses unit files stored in this directory to define services, timers, and startup routines. When administrators want a script to execute at every boot, they create a custom service file in this directory and configure systemd to start it automatically.
Option A, /etc/rc.local, was used by older init-based systems. While some systems still support it for compatibility, it is not guaranteed to be enabled or available. Modern Linux distributions using systemd require systemd units instead of rc scripts.
Option B, /etc/init/startup, resembles paths used by older Upstart-based systems, which are no longer standard on most major distributions.
Option D, /etc/bashrc, configures shell behavior for interactive sessions and has nothing to do with system startup or running scripts before login.
Systemd unit files allow administrators to define:
when a script runs
dependencies it requires
whether it should restart on failure
whether it must start before or after certain system targets
A basic unit file for boot-time execution might include:
[Unit]
Description=Custom startup script
After=network.target
[Service]
ExecStart=/path/to/script.sh
Type=oneshot
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
After creating the file under /etc/systemd/system, the administrator enables it using:
systemctl enable name.service
This ensures systemd creates a symbolic link so the service runs each time the system boots.
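After the next boot, the unit's result can be checked with standard systemd tooling (the unit name mirrors the example above):
systemctl status name.service
journalctl -u name.service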
Systemd startup management provides robust handling of dependencies. If the script must run before networking, after storage initialization, or at a specific point in the boot sequence, the administrator can set ordering constraints using Before= or After= lines.
Because /etc/systemd/system is the official location for administrator-defined unit files and the correct mechanism for persistent boot-time scripts, option C is correct.
Question 93:
A Linux system engineer needs to compress a directory using the xz compression format to achieve maximum compression efficiency. Which tar command produces a .tar.xz archive from the directory /data/libfiles?
A) tar -cvf libfiles.tar.xz /data/libfiles
B) tar -xvJf libfiles.tar.xz /data/libfiles
C) tar -cJvf libfiles.tar.xz /data/libfiles
D) tar -zcvf libfiles.tar.xz /data/libfiles
Answer:
C
Explanation:
The correct command is tar -cJvf libfiles.tar.xz /data/libfiles because the -J option instructs tar to use xz compression. Xz provides high compression ratios, making it ideal for archiving large directories where space efficiency is a priority.
Option A creates a standard tar archive but does not apply compression, even though the filename ends with .xz.
Option B extracts an archive instead of creating one.
Option D uses gzip compression (-z), producing a .tar.gz file, not an .xz-compressed archive.
Tar supports multiple compression types, and the administrator must explicitly specify which compression algorithm to use. The -J flag signals tar to compress using xz, resulting in an archive that is typically smaller than gzip-compressed equivalents. This is useful for storing libraries, logs, source code, and other large datasets.
Creating compressed tar archives preserves:
directory structure
file metadata
symbolic links
permissions
extended attributes (when supported)
Administrators choose xz when prioritizing smaller archive size over compression speed. Although xz compression takes longer, the resulting files are often significantly smaller.
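If even smaller archives are needed, a higher xz preset can be passed through the XZ_OPT environment variable, which the xz compressor reads when invoked by tar (a minimal sketch; higher presets use more time and memory):
XZ_OPT=-9 tar -cJvf libfiles.tar.xz /data/libfiles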
Because tar -cJvf is the correct syntax for xz-based compression, option C is correct.
Question 94:
A Linux administrator needs to check the disk I/O performance of a system by generating controlled read/write operations on a block device. Which tool is specifically designed for flexible, high-performance I/O testing?
A) fsck
B) badblocks
C) fio
D) blkid
Answer:
C
Explanation:
The correct tool is fio because it performs highly configurable read/write tests on block devices, filesystems, and file objects. Fio supports random and sequential workloads, variable block sizes, queue depths, I/O engines, job files, and report formats. It is a powerful benchmark utility widely used to analyze storage subsystem performance.
Option A, fsck, checks and repairs filesystem inconsistencies but does not measure I/O performance.
Option B, badblocks, scans disks for defective sectors but generates only limited, simple access patterns—not representative workload tests.
Option D, blkid, identifies block devices and their filesystem types. It provides no performance testing.
Fio allows administrators to create custom workloads that replicate real-world usage. This is especially important for evaluating:
SSD performance
RAID array throughput
virtualization storage tiers
database-optimized workloads
random read/write latency
sequential throughput
mixed read/write patterns
Fio job files enable describing scenarios in detail, such as:
block size
read vs write ratio
I/O engine (such as libaio)
queue depth
runtime
number of parallel jobs
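The same parameters can also be supplied directly on the command line; a minimal random-read sketch against a test file (path, size, and values are illustrative) might look like:
fio --name=randread --rw=randread --bs=4k --size=1G --ioengine=libaio --iodepth=16 --runtime=60 --filename=/data/fio.test --direct=1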
Because fio provides complete control over test parameters and generates accurate performance metrics, it is the correct answer.
Question 95:
A Linux administrator needs to list all environment variables available to the current session to verify whether a custom variable is being applied correctly. Which command displays these variables?
A) printenv
B) alias
C) export VAR=value
D) unset VAR
Answer:
A
Explanation:
The correct command is printenv because it prints all environment variables for the current shell session. Environment variables influence programs, shell behavior, user preferences, and system utilities. When evaluating whether a variable such as PATH, LANG, JAVA_HOME, or a custom application variable is applied correctly, printenv provides a complete listing.
Option B, alias, shows command shortcuts but not environment variables.
Option C, export VAR=value, sets a variable rather than displaying it.
Option D, unset VAR, removes a variable instead of showing available ones.
Administrators must review environment variables during troubleshooting tasks such as:
verifying application configuration
confirming paths for executables
ensuring locale settings are correct
validating user-specific or system-wide variables
confirming whether scripts receive required data
Printenv displays each variable in key=value form, making it easy to scan or further filter with grep when looking for a specific variable.
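Printenv can also print a single variable, or the full listing can be filtered (the variable name is an example):
printenv JAVA_HOME
printenv | grep JAVA_HOME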
Because printenv provides the clear, complete environment variable listing required, option A is correct.
Question 96:
A Linux administrator needs to prevent a specific kernel module from loading during system boot to avoid hardware conflicts. Which method correctly ensures the module named usbhold is permanently blacklisted?
A) rmmod usbhold
B) echo "blacklist usbhold" >> /etc/modprobe.d/blacklist.conf
C) chmod 000 /lib/modules/usbhold.ko
D) killall usbhold
Answer:
B
Explanation:
The correct method is echo "blacklist usbhold" >> /etc/modprobe.d/blacklist.conf because blacklisting a module through modprobe configuration files prevents the kernel from loading it automatically during boot or in response to hardware detection. Blacklisting is the recommended, persistent, and system-supported method for ensuring a module stays unloaded. This is used when a module causes instability, conflicts with other drivers, or needs to be suppressed for security or compatibility reasons.
Option A, rmmod usbhold, removes the module from a running system but does not persist across reboots. Once the system restarts or when hardware triggers the kernel module loader, usbhold will load again unless blacklisted.
Option C, chmod 000 on the module file, is not appropriate. Changing permissions on kernel module files can cause system inconsistencies, confuse package managers, and potentially interfere with kernel updates. It is not considered a safe or maintainable approach.
Option D, killall usbhold, is irrelevant. Modules are not processes and cannot be terminated with killall. Modules operate in kernel space, not user space.
Blacklisting modules through modprobe configurations guarantees predictability. Modprobe reads configuration files stored under /etc/modprobe.d. These files define behaviors such as:
blacklisting modules
aliasing modules
assigning specific loading parameters
When echo "blacklist usbhold" >> /etc/modprobe.d/blacklist.conf is executed, it appends a directive that instructs the kernel not to load that module automatically. During boot, the kernel module loader checks these configuration files, and if a module is blacklisted, automatic loading is skipped even if hardware signals its presence.
This method works across system restarts and kernel upgrades, making it the safest and most reliable long-term solution. Blacklisting is frequently required for incompatible hardware drivers, unnecessary subsystems, or administrative policies that prohibit certain functionality.
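After the next reboot, the administrator can confirm the module is not loaded; an empty result indicates the blacklist took effect:
lsmod | grep usbhold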
Because adding a blacklist entry in /etc/modprobe.d is the correct persistent method to disable module loading, option B is correct.
Question 97:
A Linux engineer must safely repair a corrupted filesystem on /dev/sdc1 while ensuring no data is being written to it. Which command sequence is required before running a filesystem check?
A) fsck /dev/sdc1
B) mount -o remount,rw /dev/sdc1; fsck /dev/sdc1
C) umount /dev/sdc1; fsck /dev/sdc1
D) chmod 444 /dev/sdc1; fsck /dev/sdc1
Answer:
C
Explanation:
The correct sequence is umount /dev/sdc1; fsck /dev/sdc1 because fsck must never be run on a mounted filesystem. Running filesystem repair tools on active filesystems risks data corruption, because processes may still be accessing or modifying files. Unmounting ensures the filesystem is in a consistent, stable state for examination and repair.
Option A, fsck /dev/sdc1, is unsafe when the partition is still mounted. Running fsck without unmounting can severely damage structural metadata, resulting in lost files or unstable filesystem states.
Option B, mount -o remount,rw, makes the filesystem writable, which is the opposite of what is required. A filesystem must not be mounted at all during fsck. Remounting it read/write increases the chance of data being modified during inspection.
Option D, chmod 444 /dev/sdc1, changes permissions on the device file but does not unmount the filesystem or prevent access by processes already using it. Filesystem checking will still be unsafe and potentially destructive.
Unmounting must always be the first step before filesystem repair. If the filesystem is busy, administrators can use tools such as lsof or fuser to identify processes holding it open, or umount -l for lazy unmounts in emergency conditions. Filesystems like ext4, xfs, and others require exclusive access before structural checks.
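For example, to see which processes still hold the filesystem open before unmounting and checking it (a minimal sketch):
fuser -vm /dev/sdc1
umount /dev/sdc1
fsck /dev/sdc1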
Fsck inspects the filesystem for errors such as:
corrupted inodes
orphaned blocks
invalid directory entries
misaligned allocation tables
inconsistent metadata
Repair operations may rewrite important metadata structures, so ensuring no process interferes is essential. If the filesystem cannot be unmounted because it is the root filesystem, administrators must instead boot into rescue mode or use a maintenance environment.
Because unmounting before running fsck is mandatory for safe repair, option C is correct.
Question 98:
A Linux administrator needs to assign a persistent device name to a network interface so it always appears as netpublic0 regardless of hardware detection order. Which method correctly achieves this using udev rules?
A) ip link set eth0 name netpublic0
B) echo “NAME=netpublic0” >> /etc/hostname
C) Add a rule in /etc/udev/rules.d/70-persistent-net.rules
D) reboot and let systemd rename interfaces automatically
Answer:
C
Explanation:
The correct method is adding a rule in /etc/udev/rules.d/70-persistent-net.rules because udev provides device naming rules that ensure consistent interface naming based on attributes such as MAC address, driver, or bus path. Udev rules persist across reboots and ensure an interface always receives the specified name.
Option A, ip link set eth0 name netpublic0, renames the interface temporarily. The change is lost after reboot or after systemd reinitializes interfaces.
Option B, modifying /etc/hostname, sets the system hostname and has nothing to do with interface naming.
Option D, rebooting and letting systemd rename interfaces automatically, does not guarantee specific names. Systemd uses predictable naming rules but does not allow administrators to assign arbitrary custom names without explicit configuration.
Udev rules allow patterns such as:
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:ff", NAME="netpublic0"
This rule ensures that any interface with the given MAC address receives the name netpublic0. Administrators can also match other attributes such as PCI slot position or driver. Persistent naming is crucial in environments where:
multiple NICs exist
network configuration must remain stable
hardware is replaced
virtualized systems reorder devices
bonding, bridging, or VLAN setups depend on interface naming
Without persistent naming, interfaces may appear in different orders after a reboot, breaking network configurations. Using properly crafted udev rules prevents such issues.
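After saving the rule, the udev rule database can be reloaded without rebuilding anything, although renaming an active interface generally still requires a reboot or bringing the link down and up:
udevadm control --reload-rules
udevadm trigger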
Because defining udev rules in /etc/udev/rules.d ensures a permanent custom name, option C is correct.
Question 99:
A Linux engineer needs to test DNS resolution for a specific record type, such as TXT, for the domain corpzone.test. Which command performs a targeted DNS query for a specific record type?
A) nslookup corpzone.test
B) dig corpzone.test
C) dig corpzone.test TXT
D) host -a corpzone.test
Answer:
C
Explanation:
The correct command is dig corpzone.test TXT because the last parameter explicitly instructs dig to query for a specific record type. This is essential when verifying records such as SPF, DKIM, or service identification stored in TXT fields.
Option A, nslookup corpzone.test, may return basic results but does not guarantee retrieval of specific record types unless further interactive commands are entered.
Option B, dig corpzone.test, returns the default A record unless additional parameters are specified. It does not automatically query TXT records.
Option D, host -a, performs a complete DNS dump of all record types. This may be excessive, slower, and less targeted when verifying only one type.
Targeted DNS queries are critical in scenarios such as:
validating authentication frameworks
checking service configurations
diagnosing DNS caching issues
verifying propagation of specific TXT-based policies
troubleshooting application issues relying on DNS metadata
Dig provides flexible querying options including specifying servers, timeouts, record types, or verbose output. Administrators rely on dig for its clarity and fine control.
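For example, to query a specific DNS server and print only the record data (the server address is illustrative):
dig @192.0.2.53 corpzone.test TXT +short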
Because dig corpzone.test TXT directly queries the required record type, option C is correct.
Question 100:
A Linux administrator needs to view CPU-specific details such as core count, model, cache size, and vendor information for performance tuning. Which command displays this information?
A) cat /proc/cpuinfo
B) lscpuinfo
C) archcpu
D) cpu -a
Answer:
A
Explanation:
The correct command is cat /proc/cpuinfo because this file contains detailed processor information provided directly by the kernel. It includes:
processor model
number of cores
cache sizes
flags indicating CPU capabilities
vendor identification
supported instruction sets
clock speed (as reported by kernel)
Option B, lscpuinfo, is not an actual Linux command, although some systems provide lscpu, which is similar but not listed here.
Option C, archcpu, is not a standard command.
Option D, cpu -a, is also not a recognized command.
The /proc filesystem dynamically exposes kernel information. Reading /proc/cpuinfo allows administrators to check whether CPUs support features such as virtualization extensions, cryptographic instructions, or advanced instruction sets. This information is necessary when:
enabling hypervisors
tuning performance-sensitive applications
verifying hardware compatibility
identifying mismatched CPUs in multi-socket systems
diagnosing performance bottlenecks
The output includes flags indicating CPU capabilities such as:
vmx or svm (virtualization)
aes (hardware encryption)
sse and avx instruction sets
multicore support
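As a quick check, the logical processor count and virtualization flags can be pulled directly from this file:
grep -c ^processor /proc/cpuinfo
grep -Eo 'vmx|svm' /proc/cpuinfo | sort -u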
Because /proc/cpuinfo provides the deepest, most accurate CPU details across distributions, option A is correct.