Question 121:
Under the California Consumer Privacy Act (CCPA), which of the following is considered a “sale” of personal information?
A) Sharing personal information with third parties for monetary or other valuable consideration
B) Transferring personal information to service providers under contract
C) Disclosing personal information pursuant to a court order
D) Sharing personal information with affiliates for internal business purposes
Answer: A
The correct answer is option A. The CCPA defines “sale” broadly to include sharing, renting, releasing, disclosing, disseminating, making available, transferring, or otherwise communicating personal information to another business or third party for monetary or other valuable consideration. This definition extends beyond traditional commercial transactions to include exchanges where businesses receive value.
The CCPA’s expansive sale definition captures common business practices like sharing information with advertising networks for targeted advertising, providing data to data brokers or analytics companies, exchanging customer lists with marketing partners, and participating in cross-device tracking networks. “Valuable consideration” encompasses not just money but other benefits such as access to another company’s data, services provided in exchange for data, or participation in marketing cooperatives. This broad definition requires businesses to evaluate data sharing arrangements they might not traditionally consider “sales” and to implement “Do Not Sell My Personal Information” opt-out rights for California consumers. Businesses must provide clear notice of sale practices in privacy policies, offer conspicuous “Do Not Sell” links on websites and mobile apps, honor consumer opt-out requests by ceasing sales within the required timeframes, and maintain records of consumer opt-outs for compliance verification. The sale definition includes several exceptions: transfers to service providers meeting CCPA requirements don’t constitute sales, disclosures for specific business purposes under contractual limitations aren’t sales, and asset transfers in mergers or bankruptcy aren’t considered sales under certain conditions. Understanding what constitutes a sale is critical for CCPA compliance, as mischaracterizing sales can result in violations and consumer complaints.
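The decision logic above — disclosure for valuable consideration is a sale unless a recognized exception applies — can be sketched as a simple classifier. This is an illustrative simplification, not legal advice; the category names are hypothetical labels, not statutory terms.

```python
# Exceptions discussed above: service-provider transfers under a compliant
# contract, legally compelled disclosures, and certain merger/asset transfers.
NON_SALE_EXCEPTIONS = {"service_provider", "legal_obligation", "merger_or_bankruptcy"}

def is_ccpa_sale(disclosure_type: str, valuable_consideration: bool) -> bool:
    """Return True if a disclosure likely counts as a 'sale' under the CCPA.

    A disclosure is treated as a sale when personal information goes to a
    third party for monetary or other valuable consideration and no
    recognized exception applies.
    """
    if disclosure_type in NON_SALE_EXCEPTIONS:
        return False
    return valuable_consideration

# Sharing with an ad network in exchange for analytics access is a sale;
# the same data sent to a contracted service provider is not.
print(is_ccpa_sale("third_party", valuable_consideration=True))       # True
print(is_ccpa_sale("service_provider", valuable_consideration=True))  # False
```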
Option B is incorrect because the CCPA specifically exempts information shared with service providers under contracts meeting CCPA requirements from the definition of “sale.” Service providers must use information only for specified business purposes and are contractually prohibited from selling the information.
Option C is incorrect because disclosures made pursuant to legal obligations like court orders, subpoenas, or regulatory requirements are not considered “sales” under CCPA. These disclosures are mandatory legal compliance activities outside the sale definition.
Option D is incorrect because sharing personal information with affiliates for internal business purposes doesn’t constitute a sale under CCPA, provided the information is used for the business’s operational purposes and not further disclosed to unaffiliated third parties for value.
Question 122:
A company operates call centers in multiple states and records customer service calls. Under which federal law must the company obtain consent before recording calls?
A) Federal Wiretap Act (Title III)
B) Stored Communications Act
C) Computer Fraud and Abuse Act
D) Electronic Signatures in Global and National Commerce Act
Answer: A
The correct answer is option A. The Federal Wiretap Act, also known as Title III of the Omnibus Crime Control and Safe Streets Act of 1968, regulates the interception of wire, oral, and electronic communications, including telephone call recordings. The Act sets a one-party consent baseline: recording is lawful so long as at least one party to the conversation consents to it.
Under federal law, call recording is generally permissible if one party to the conversation consents to the recording. However, many states have stricter requirements, with eleven states requiring all-party consent (California, Connecticut, Florida, Illinois, Maryland, Massachusetts, Michigan, Montana, New Hampshire, Pennsylvania, and Washington). Companies operating nationally must comply with the most restrictive applicable state law, which typically means obtaining consent from all parties when a call involves residents of all-party consent states. Best practices for call recording include providing clear notice that calls may be recorded for quality assurance or training purposes through a recorded message at the beginning of the call, obtaining explicit consent through verbal acknowledgment or continued participation after notification, training call center staff on recording requirements and proper notice procedures, and implementing systems that prevent recording until proper consent is obtained. The Federal Wiretap Act provides both criminal penalties for willful violations and civil liability for damages, creating significant compliance risks for improper call recording. Organizations should document consent practices, maintain records showing consent was obtained, and regularly audit recording practices to ensure compliance. The rise of call recording for customer service, sales calls, and dispute resolution makes proper consent critical to avoiding both federal and state liability.
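The "most restrictive applicable state law" rule above can be expressed as a simple check: if any participant is in an all-party-consent state, treat the call under the stricter standard. This is a hypothetical sketch using the eleven states named in the explanation; verify the list against current state law before relying on it.

```python
# All-party-consent states per the explanation above (verify before use).
ALL_PARTY_CONSENT_STATES = {
    "CA", "CT", "FL", "IL", "MD", "MA",
    "MI", "MT", "NH", "PA", "WA",
}

def consent_required_from(participant_states: set[str]) -> str:
    """Return 'all parties' if any participant is in an all-party-consent
    state, otherwise 'one party' (the federal Wiretap Act baseline)."""
    if participant_states & ALL_PARTY_CONSENT_STATES:
        return "all parties"
    return "one party"

print(consent_required_from({"TX", "CA"}))  # all parties
print(consent_required_from({"TX", "NY"}))  # one party
```

In practice, call centers often cannot reliably determine a caller's state, which is why many simply apply the all-party standard nationwide.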
Option B is incorrect because the Stored Communications Act addresses unauthorized access to stored electronic communications, not the interception of communications during transmission. The SCA protects stored emails and messages rather than governing call recording.
Option C is incorrect because the Computer Fraud and Abuse Act addresses unauthorized computer access and hacking, not call recording. The CFAA focuses on computer systems and network security rather than communication interception.
Option D is incorrect because the E-SIGN Act addresses the legal validity of electronic signatures and records in commerce, not call recording consent requirements. E-SIGN facilitates electronic transactions but doesn’t govern communication interception.
Question 123:
Under the Health Insurance Portability and Accountability Act (HIPAA), which entity is considered a “covered entity”?
A) Health plans, healthcare providers, and healthcare clearinghouses
B) Technology vendors providing services to hospitals
C) Employers offering health insurance to employees
D) Mobile health app developers
Answer: A
The correct answer is option A. HIPAA defines three types of covered entities that must comply with Privacy and Security Rules: health plans (including health insurance companies, HMOs, Medicare, and Medicaid), healthcare providers who transmit health information electronically in connection with HIPAA transactions (doctors, hospitals, pharmacies, clinics), and healthcare clearinghouses that process health information from nonstandard to standard formats.
Covered entities must comply with HIPAA’s comprehensive requirements including implementing administrative, physical, and technical safeguards protecting electronic protected health information (ePHI), providing individuals rights to access their health information, obtaining authorizations for uses and disclosures beyond treatment, payment, and healthcare operations, training workforce members on HIPAA requirements and privacy practices, executing business associate agreements with vendors accessing PHI, conducting risk assessments and implementing security measures addressing identified risks, responding to individuals exercising their HIPAA rights such as access requests, and providing required breach notifications. The covered entity determination is critical because it controls whether HIPAA applies – entities not meeting covered entity definitions generally aren’t subject to HIPAA (though they may be subject to other privacy laws). Healthcare providers become covered entities only when conducting electronic transactions in standard formats (like electronic billing or claims), meaning providers using only paper-based systems traditionally weren’t covered entities. However, the prevalence of electronic health records and billing systems means most healthcare providers now qualify as covered entities. Covered entities may disclose PHI to business associates, who must also comply with HIPAA through contractual obligations and direct liability for certain provisions. Understanding covered entity status is fundamental to determining HIPAA compliance obligations.
Option B is incorrect because technology vendors providing services to hospitals are typically business associates rather than covered entities. Business associates have contractual obligations to protect PHI but are not themselves covered entities unless they also independently qualify as health plans, providers, or clearinghouses.
Option C is incorrect because employers offering health insurance are generally not covered entities under HIPAA, even though they sponsor health plans. The health plan itself is the covered entity, and strict rules separate the plan’s PHI from the employer’s access.
Option D is incorrect because mobile health app developers typically are not covered entities under HIPAA unless they meet specific definitions of health plans, providers, or clearinghouses. Most health apps fall outside HIPAA’s scope, though they may be subject to FTC consumer protection authority.
Question 124:
A company receives a National Security Letter (NSL) requesting customer information. What is a key characteristic of NSLs that distinguishes them from traditional subpoenas?
A) NSLs can include gag orders preventing disclosure of the request
B) NSLs require judicial approval before issuance
C) NSLs can only be issued for terrorism investigations
D) NSLs expire after 30 days
Answer: A
The correct answer is option A. National Security Letters are administrative subpoenas issued by FBI officials and certain other federal agencies without judicial oversight, and they historically included nondisclosure requirements (gag orders) preventing recipients from disclosing the existence of the NSL to anyone, including affected individuals. These gag orders distinguished NSLs from traditional legal process.
NSLs were significantly expanded under the USA PATRIOT Act and authorize government agencies to compel disclosure of certain records and information relevant to national security investigations without requiring judicial approval. Agencies can issue NSLs to obtain subscriber information, toll billing records, electronic communication transactional records, and financial records. The nondisclosure provisions initially prohibited recipients from revealing NSL receipt indefinitely, raising First Amendment concerns about government secrecy and lack of transparency. Following legal challenges, the USA FREEDOM Act of 2015 reformed NSL nondisclosure provisions, requiring the government to show a specific harm justifying continued nondisclosure, allowing judicial review of nondisclosure orders, and establishing automatic expiration of nondisclosure after three years unless extended. Recipients can now challenge nondisclosure orders and must be informed of judicial review rights. Despite reforms, NSL gag orders remain controversial, limiting transparency about government surveillance and preventing companies from fully disclosing law enforcement data requests to customers or the public. Recipients should consult legal counsel upon receiving NSLs, consider challenging nondisclosure orders when appropriate, and track expiration timelines for disclosure restrictions. The tech industry has advocated for further NSL reforms including judicial authorization requirements and narrower nondisclosure provisions to improve transparency and oversight.
Option B is incorrect because NSLs specifically do not require judicial approval before issuance, which is one of their controversial characteristics. Unlike traditional subpoenas or search warrants, NSLs are issued by executive branch officials without court oversight.
Option C is incorrect because while NSLs must relate to national security or foreign intelligence investigations, they’re not limited exclusively to terrorism cases. NSLs can be issued for various national security matters including espionage, foreign intelligence, and international terrorism.
Option D is incorrect because NSLs don’t have automatic 30-day expiration periods. The nondisclosure provisions now have review mechanisms and can expire, but the underlying NSL authority and information requests don’t automatically expire after 30 days.
Question 125:
Under the Family Educational Rights and Privacy Act (FERPA), when can a school disclose student education records without parental consent?
A) To school officials with legitimate educational interests
B) To potential employers conducting background checks
C) To marketing companies for student recruitment
D) To private investigators conducting investigations
Answer: A
The correct answer is option A. FERPA allows schools to disclose education records without prior parental consent to school officials with legitimate educational interests, meaning officials who need information to fulfill their professional responsibilities in serving students, including teachers, administrators, support staff, and contractors performing institutional services.
FERPA establishes students’ and parents’ rights regarding education records, restricting disclosure without consent while recognizing exceptions for legitimate school operations and specific external parties. Schools may disclose records without consent to school officials if the official has legitimate educational interest, the school’s annual notification defines school officials and legitimate educational interests, and disclosure is necessary for the official to perform educational functions. Beyond school officials, FERPA permits disclosure without consent to officials of other schools where the student seeks enrollment, certain government officials for audit or evaluation purposes, appropriate parties in connection with financial aid, organizations conducting studies for schools, accrediting organizations, parties named in judicial orders or lawfully issued subpoenas (provided the school makes reasonable efforts to notify parents before disclosure), appropriate parties in health or safety emergencies, and state and local authorities within the juvenile justice system under specific circumstances. Schools must maintain records of most disclosures, track who received information and when, and make disclosure records available to parents upon request. Parents have rights to inspect and review education records, request corrections to inaccurate information, and control most disclosures of their children’s records until the student reaches age 18 or attends postsecondary institutions. Violations can result in loss of federal education funding.
Option B is incorrect because FERPA does not permit disclosure to potential employers without consent. While schools can disclose directory information if properly designated, detailed education records require consent for employment background checks unless a specific FERPA exception applies.
Option C is incorrect because disclosure to marketing companies for recruitment purposes requires written consent under FERPA. Schools cannot sell or otherwise disclose student information to marketers without proper authorization, though directory information may be disclosed if not opted out.
Option D is incorrect because private investigators are not among the categories of recipients who can receive education records without consent. Unless a court order or subpoena is involved (triggering different FERPA provisions), private investigators need consent to access education records.
Question 126:
A financial institution is required to provide customers annual privacy notices. Under which federal law is this requirement established?
A) Gramm-Leach-Bliley Act (GLBA)
B) Fair Credit Reporting Act (FCRA)
C) Equal Credit Opportunity Act (ECOA)
D) Truth in Lending Act (TILA)
Answer: A
The correct answer is option A. The Gramm-Leach-Bliley Act, also known as the Financial Services Modernization Act of 1999, requires financial institutions to provide customers with initial and annual privacy notices explaining information collection, sharing, and protection practices, as well as customers’ opt-out rights regarding certain information sharing.
The GLBA Privacy Rule requires financial institutions to provide clear, conspicuous privacy notices when the customer relationship is established and annually thereafter, explaining what information the institution collects, how it shares information with affiliates and nonaffiliated third parties, how it protects information, and customers’ rights to opt out of certain sharing practices. Notices must be clear and conspicuous, using plain language understandable to customers, and provided in a form the customer can retain. The annual notice requirement ensures customers receive regular updates about the institution’s privacy practices and opt-out rights, though regulatory amendments (particularly the FAST Act) have created exceptions to annual notice requirements when information sharing practices haven’t changed and no opt-out rights exist. The Safeguards Rule under GLBA requires financial institutions to implement comprehensive information security programs protecting customer information through administrative, technical, and physical safeguards. Covered institutions include not just traditional banks but also mortgage companies, loan brokers, check cashing services, and certain financial advisors. Compliance requires designating employees to coordinate information security programs, conducting risk assessments, implementing and monitoring safeguards, selecting appropriate service providers and contractually requiring information protection, and regularly evaluating and adjusting security programs. Enforcement is divided among federal financial regulators and the FTC, with violations resulting in regulatory actions and civil penalties.
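The FAST Act exception described above reduces to two conditions: the annual notice may be skipped only when sharing practices are unchanged since the last notice and the institution's sharing triggers no opt-out rights. A minimal sketch of that logic, offered as an illustration rather than a compliance determination:

```python
def annual_notice_required(practices_changed: bool, opt_out_rights_exist: bool) -> bool:
    """FAST Act exception to GLBA's annual privacy notice: the notice is
    still required if either sharing practices changed or customers hold
    opt-out rights over the institution's sharing."""
    return practices_changed or opt_out_rights_exist

print(annual_notice_required(False, False))  # False: exception applies
print(annual_notice_required(True, False))   # True: practices changed
print(annual_notice_required(False, True))   # True: opt-out rights exist
```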
Option B is incorrect because the Fair Credit Reporting Act regulates credit reporting agencies and use of consumer reports, not privacy notices from financial institutions about their own practices. FCRA requires adverse action notices when credit is denied based on credit reports.
Option C is incorrect because the Equal Credit Opportunity Act prohibits discrimination in credit transactions based on protected characteristics but doesn’t establish privacy notice requirements. ECOA focuses on fair lending rather than privacy disclosures.
Option D is incorrect because the Truth in Lending Act requires disclosures about credit terms and costs, promoting informed credit decisions through standardized disclosures, but doesn’t mandate privacy notices about information practices.
Question 127:
Under the Video Privacy Protection Act (VPPA), what information is protected from disclosure without consumer consent?
A) Personally identifiable information about video materials rented or purchased
B) Credit card information used for video purchases
C) IP addresses accessing video content
D) All personal information collected by video providers
Answer: A
The correct answer is option A. The Video Privacy Protection Act of 1988 prohibits video tape service providers from knowingly disclosing personally identifiable information about consumers’ video materials rented, purchased, or otherwise obtained without written consent. The Act protects the privacy of individuals’ video viewing habits and choices.
The VPPA was enacted following controversy when a newspaper published Supreme Court nominee Robert Bork’s video rental history during confirmation hearings, raising privacy concerns about sensitive information revealed through video choices. The Act requires written consent before disclosing personally identifiable information about video materials, defines “video tape service provider” to include entities engaged in rental, sale, or delivery of prerecorded video materials, and establishes civil liability for violations including actual damages or liquidated damages. Courts have interpreted the VPPA to apply to modern streaming services, not just traditional video rental stores, expanding the Act’s reach to cover Netflix, Hulu, and other digital platforms. The Act permits disclosure to law enforcement pursuant to warrant, court order, or subpoena, for ordinary course of business activities like delivery and billing, when requested by consumers, and when consumers provide informed written consent. Written consent must be obtained separately from other consents, cannot be a condition of service, and is effective for a limited period. Modern litigation has focused on whether disclosure of viewing information with subscriber names to third parties like Facebook (for social viewing features) violates the VPPA, with courts finding such disclosures can constitute violations. Companies should obtain clear, specific consent before sharing viewing information, limit disclosures to necessary business operations, and ensure technical implementations don’t inadvertently disclose protected information through tracking technologies or social features.
Option B is incorrect because the VPPA specifically protects information about video materials obtained by consumers, not payment information used for purchases. Credit card data is protected by other laws like the Fair Credit Billing Act and PCI DSS standards but not by VPPA.
Option C is incorrect because while IP addresses might be considered personally identifiable information in some contexts, the VPPA specifically protects information about video materials obtained, not all technical data like IP addresses unless connected to specific viewing information.
Option D is incorrect because the VPPA’s protection is narrower than all personal information collected by video providers. The Act specifically addresses information about video materials rented, purchased, or obtained rather than comprehensive consumer information.
Question 128:
A company wants to use automated technology to scan employee emails for data loss prevention purposes. Under which legal theory might employees challenge this practice?
A) Reasonable expectation of privacy
B) First Amendment free speech
C) Fourth Amendment search and seizure
D) Due process violations
Answer: A
The correct answer is option A. Employees might challenge email monitoring based on reasonable expectation of privacy theory, arguing they have legitimate privacy expectations in their workplace communications. However, courts generally find employees have diminished or no reasonable privacy expectations in workplace email when employers provide clear notice of monitoring and establish appropriate policies.
The reasonable expectation of privacy test, derived from Fourth Amendment jurisprudence but applied more broadly in privacy torts, examines whether individuals have subjective privacy expectations that society recognizes as reasonable. In workplace contexts, privacy expectations are significantly reduced, particularly for communications on employer-provided systems. Courts consider whether the employer has a legitimate business need for monitoring, whether employees received clear notice that communications may be monitored, whether the employer has written policies regarding monitoring and acceptable use, and whether employees have alternative private communication methods available. Most courts find no reasonable privacy expectation when employers provide explicit notice that email may be monitored for business purposes like security, compliance, or productivity, maintain written policies reserving the right to access workplace communications, and implement monitoring for legitimate business purposes rather than arbitrary intrusion. Best practices for email monitoring include providing clear written policies notifying employees that workplace email may be monitored, requiring employees to acknowledge monitoring policies, limiting monitoring to legitimate business purposes, training managers on appropriate monitoring practices, and maintaining confidentiality of information discovered through monitoring unless business needs or legal obligations require disclosure. Unionized workplaces may face additional requirements to bargain about monitoring practices with employee representatives.
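The scanning technology at issue in this question is typically pattern-based. The sketch below shows a toy DLP scan over an email body; real DLP products use far more sophisticated detection, and the patterns and policy names here are assumptions for demonstration only.

```python
import re

# Illustrative detectors: a US SSN pattern and a loose payment-card pattern.
DLP_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_email(body: str) -> list[str]:
    """Return the names of sensitive-data patterns found in an email body."""
    return [name for name, pattern in DLP_PATTERNS.items() if pattern.search(body)]

hits = scan_email("Customer SSN is 123-45-6789, please update the file.")
print(hits)  # ['ssn']
```

Pairing a scanner like this with the notice and acceptable-use policies described above is what lets employers argue no reasonable expectation of privacy attaches to the monitored messages.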
Option B is incorrect because First Amendment free speech protections apply to government restrictions on speech, not private employer policies. Unless the employer is a government entity, First Amendment challenges to email monitoring generally fail.
Option C is incorrect because Fourth Amendment protections against unreasonable searches apply to government action, not private employers. Private companies don’t conduct “searches” within the Fourth Amendment’s meaning unless acting as government agents.
Option D is incorrect because due process protections require government action depriving individuals of life, liberty, or property interests. Private employer email monitoring doesn’t typically implicate due process rights unless the employer is a government entity or acts under color of state law.
Question 129:
Under the Telephone Consumer Protection Act (TCPA), which type of calls requires prior express written consent?
A) Telemarketing calls using artificial or prerecorded voices
B) Informational calls from businesses with existing relationships
C) Calls to business phone numbers
D) Manually dialed calls from live agents
Answer: A
The correct answer is option A. The Telephone Consumer Protection Act requires prior express written consent before making telemarketing calls or texts to consumers using artificial or prerecorded voices or automatic telephone dialing systems (autodialers). This consent requirement protects consumers from unwanted automated marketing calls.
The TCPA distinguishes between different consent standards based on call purpose and technology used. Prior express written consent requires a written agreement containing specific disclosures, signed by the consumer, authorizing specific parties to deliver specific types of calls using automatic dialing systems or prerecorded voices. The written consent must clearly authorize calls or texts to a specific phone number, disclose that consent isn’t required as a purchase condition, and be provided on paper or electronically in compliance with E-SIGN Act requirements. For non-telemarketing informational calls (like appointment reminders or delivery notifications), prior express consent suffices, which can be given orally or in writing and doesn’t require the same formality as express written consent. Manually dialed telemarketing calls from live agents require only prior express consent rather than written consent, though consumers can still revoke consent. TCPA violations carry strict liability with statutory damages of $500 per violation, trebled to $1,500 for knowing or willful violations, creating significant liability exposure for companies with large call volumes. Class action lawsuits under TCPA can result in substantial settlements given per-call damages and high call volumes. Companies should implement robust consent management systems documenting when and how consent was obtained, provide clear revocation mechanisms, train calling agents on TCPA requirements, and regularly audit calling practices for compliance. Recent litigation has focused on consent scope, revocation procedures, and whether specific technologies constitute autodialers under evolving interpretations.
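The consent tiers and the per-violation damages arithmetic above can be sketched as follows. This is a simplified illustration of the rules as summarized in the explanation, not a complete model of TCPA liability:

```python
def required_consent(telemarketing: bool, autodialer_or_prerecorded: bool) -> str:
    """The stricter written-consent standard applies only to telemarketing
    made with an autodialer or artificial/prerecorded voice."""
    if telemarketing and autodialer_or_prerecorded:
        return "prior express written consent"
    return "prior express consent"

def statutory_damages(violations: int, willful: bool = False) -> int:
    """$500 per violation, trebled to $1,500 for knowing or willful violations."""
    per_call = 1500 if willful else 500
    return violations * per_call

print(required_consent(telemarketing=True, autodialer_or_prerecorded=True))
# prior express written consent

# Why class actions are so costly: 10,000 robocalls without consent.
print(statutory_damages(10_000))                 # 5000000
print(statutory_damages(10_000, willful=True))   # 15000000
```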
Option B is incorrect because informational calls to existing business relationship customers don’t require prior express written consent, though they still require prior express consent. The stricter written consent standard applies to telemarketing calls using restricted technologies.
Option C is incorrect because calls to business phone numbers are generally exempted from TCPA restrictions. The Act primarily protects consumers’ residential and mobile numbers rather than business lines.
Option D is incorrect because manually dialed calls from live agents, even for telemarketing, don’t require prior express written consent under TCPA. The stricter consent requirement applies when autodialers or prerecorded voices are used, not manual dialing by humans.
Question 130:
A data broker wants to sell consumer information for marketing purposes. Under the Fair Credit Reporting Act (FCRA), when does the data broker become subject to FCRA requirements?
A) When the information is used for eligibility decisions like credit, employment, or insurance
B) Whenever any consumer information is sold
C) Only when credit scores are sold
D) When the data broker has more than 10,000 consumer records
Answer: A
The correct answer is option A. Data brokers become consumer reporting agencies subject to FCRA when they assemble or evaluate consumer information for the purpose of furnishing consumer reports to third parties who use the information for eligibility determinations in credit, employment, insurance, or other purposes established in FCRA.
The FCRA regulates consumer reporting agencies (CRAs), which are entities that regularly assemble or evaluate consumer information for the purpose of furnishing consumer reports bearing on creditworthiness, credit standing, credit capacity, character, general reputation, personal characteristics, or mode of living used for credit, employment, insurance, or other eligibility decisions. Whether a data broker is a CRA depends on the purpose for which information is assembled and used, not the type of information collected. Data brokers selling information purely for marketing purposes typically aren’t CRAs because marketing isn’t an FCRA-covered purpose. However, if the same information is used or intended for credit, employment, insurance, or other eligibility decisions, the broker becomes a CRA subject to comprehensive FCRA obligations including ensuring permissible purposes exist before providing reports, maintaining reasonable procedures ensuring maximum possible accuracy, investigating consumer disputes within required timeframes, providing consumers access to their reports and dispute rights, and implementing security measures protecting consumer information. The distinction between marketing data and consumer reports can be subtle – information marketed for “prequalification” or “risk assessment” might constitute consumer reports even when also used for marketing. Recent attention to data brokers has led to proposals for expanded regulation beyond FCRA, recognizing that many data brokers operate outside traditional consumer reporting models but still collect and sell sensitive information affecting consumers.
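The purpose-based test described above — CRA status turns on the use the information is furnished for, not on data type or volume — can be sketched as a membership check. Purpose labels here are illustrative, not statutory categories:

```python
# Eligibility purposes that bring FCRA into play, per the explanation above.
FCRA_COVERED_PURPOSES = {"credit", "employment", "insurance"}

def is_consumer_reporting_agency(purposes_furnished_for: set[str]) -> bool:
    """A broker becomes a CRA when it regularly furnishes information for
    any covered eligibility purpose, regardless of how many records it holds."""
    return bool(purposes_furnished_for & FCRA_COVERED_PURPOSES)

print(is_consumer_reporting_agency({"marketing"}))                # False
print(is_consumer_reporting_agency({"marketing", "employment"}))  # True
```

Note the second example: selling the same data for marketing does not shield a broker once any covered purpose is also served.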
Option B is incorrect because not all consumer information sales trigger FCRA. Data brokers selling information for marketing, research, or other non-FCRA purposes generally aren’t consumer reporting agencies unless the information is used for covered eligibility decisions.
Option C is incorrect because FCRA applies beyond credit scores to various types of consumer information when used for covered purposes. Consumer reports can include information about payment history, employment history, public records, and other data relevant to eligibility decisions.
Option D is incorrect because FCRA doesn’t establish record volume thresholds for CRA determination. A data broker with few records can still be a CRA if it regularly furnishes consumer reports for covered purposes, while large brokers operating outside covered purposes aren’t CRAs.
Question 131:
An employer conducts social media background checks on job applicants using a third-party screening company. Under the Fair Credit Reporting Act, what is REQUIRED?
A) Obtain applicant consent and provide adverse action notices if adverse decisions are made
B) Only check publicly available social media information
C) Notify the social media platforms about the screening
D) Wait 30 days after application before conducting checks
Answer: A
The correct answer is option A. When employers use third-party companies to conduct background checks, including social media screening, for employment decisions, the screening constitutes a consumer report under FCRA, requiring employers to obtain written authorization from applicants before obtaining reports and to provide adverse action notices if they take adverse employment action based on report information.
FCRA’s employment provisions require employers to: obtain clear written authorization from applicants before procuring consumer reports for employment purposes, in a standalone document separate from the application; provide a pre-adverse action notice when considering denying employment based on report contents, including a copy of the report and a summary of FCRA rights; allow applicants reasonable time to dispute report inaccuracies; and provide a post-adverse action notice after making a final denial decision, including information about the screening company and the applicant’s rights to obtain a free report copy and dispute its contents. The pre-adverse action process gives applicants an opportunity to explain or dispute report contents before final decisions, recognizing that background reports sometimes contain errors affecting employment opportunities. Common FCRA violations in employment screening include failing to obtain proper authorization, taking adverse action without required notices, using outdated information beyond FCRA time limits, and failing to conduct individualized assessments of conviction information. Employers should work with screening companies that understand FCRA compliance, train HR staff on adverse action procedures, document authorization and notice requirements, and establish clear policies for evaluating background information fairly. State and local laws may impose additional requirements, particularly regarding criminal history use in employment decisions. The EEOC has also issued guidance on avoiding disparate impact discrimination when using criminal records in hiring.
Option B is incorrect because FCRA requirements don’t limit screening to public information only. The obligations arise from using a third party to prepare reports for employment decisions, regardless of information source. Employers still need authorization and must follow adverse action procedures.
Option C is incorrect because FCRA doesn’t require notifying social media platforms about screening activities. The requirements focus on consumer consent and notification about report use, not informing the platforms where information is obtained.
Option D is incorrect because FCRA doesn’t mandate waiting periods before conducting background checks. While some state laws impose timing requirements for certain types of screening, federal FCRA focuses on authorization and adverse action procedures rather than timing restrictions.
Question 132:
Under the Children’s Online Privacy Protection Act (COPPA), what constitutes “actual knowledge” that a user is under 13 years old?
A) Information indicating the user is a child, even if not directly age-disclosed
B) Only when a child explicitly states their age
C) When a parent reports the child’s use of the service
D) After the FTC notifies the operator
Answer: A
The correct answer is option A. Under COPPA, operators have actual knowledge that a user is under 13 when they have information indicating the user is a child, which courts have interpreted broadly to include not just explicit age disclosures but also contextual information suggesting the user is a child, such as grade levels, age ranges, birthdates, or other content clearly indicating child age.
COPPA requires operators of websites and online services directed to children under 13, or who have actual knowledge they’re collecting personal information from children under 13, to obtain verifiable parental consent before collecting, using, or disclosing children’s personal information. The “actual knowledge” standard creates liability when operators know or should know users are children based on available information. Courts have found actual knowledge when operators receive information in registration forms indicating child age (like selecting “5th grade” or birth dates showing age under 13), when operators review user-generated content revealing children’s ages, when services are designed and marketed to children even if operators claim to prohibit child users, and when operators have mechanisms to identify children but deliberately avoid using them. The FTC has brought enforcement actions against companies that claimed no actual knowledge despite clear signals that users were children. Operators uncertain about user ages should implement age-gating mechanisms asking users their ages before collecting personal information, use neutral age verification methods, and err on the side of treating uncertain users as children requiring parental consent. Age-gating helps establish that operators lack actual knowledge when children misrepresent their ages. However, sites directed to children can’t avoid COPPA obligations through age-gating alone, as they’re covered regardless of actual knowledge.
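The neutral age-gating approach described above can be sketched in a few lines. This is only an illustrative sketch, not an FTC-prescribed mechanism: the function name, the date-comparison logic, and the injectable `today` parameter are all assumptions introduced here for testability.

```python
from datetime import date
from typing import Optional

COPPA_AGE_THRESHOLD = 13  # COPPA covers children under 13

def requires_parental_consent(birthdate: date, today: Optional[date] = None) -> bool:
    """Neutral age gate: compute age from a birthdate collected *before*
    any personal information, and flag users under 13 as requiring
    verifiable parental consent. Illustrative sketch only."""
    today = today or date.today()
    # Subtract 1 if the birthday hasn't occurred yet this year
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    return age < COPPA_AGE_THRESHOLD
```

Note that a gate like this only helps establish lack of actual knowledge when children misrepresent their ages; as the explanation above notes, a site directed to children is covered by COPPA regardless of what the gate reports.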
Option B is incorrect because actual knowledge isn’t limited to explicit age statements. Courts have found operators have actual knowledge based on contextual information indicating child age, even when children don’t directly state “I am 12 years old.”
Option C is incorrect because while parental reports that children are using services would certainly provide actual knowledge, operators can have actual knowledge through other means before parents report child usage. The standard doesn’t require parental notification to establish knowledge.
Option D is incorrect because actual knowledge exists when the operator has information indicating child age, not only after regulatory notification. Waiting for FTC notification would defeat COPPA’s protective purposes, and operators can’t ignore evidence of child users until formally notified.
Question 133:
A company wants to implement a bring-your-own-device (BYOD) program allowing employees to use personal smartphones for work. What is the PRIMARY privacy concern for employees?
A) Employer access to personal information on devices
B) Device storage capacity
C) Cellular plan data limits
D) Device warranty coverage
Answer: A
The correct answer is option A. The primary privacy concern for employees in BYOD programs is employer access to personal information stored on devices that also contain work-related data, including personal photos, messages, contacts, location data, browsing history, personal apps, and other non-work information that employers might access through mobile device management (MDM) software or investigation activities.
BYOD programs blur boundaries between personal and professional device use, creating privacy tensions when employers need to secure work data but employees want to maintain privacy over personal information. Technical solutions like MDM software allow employers to enforce security policies, remotely wipe devices, access data for investigations, monitor device locations, view installed apps, and access device contents including personal information. Employees worry that employers will access personal photos, messages, or other private data, monitor personal communications and activities, track locations during non-work hours, and remotely wipe devices including personal data if security concerns arise or employment ends. Best practices for balancing these concerns include containerization approaches separating work and personal data, clear written BYOD policies explaining employer access rights and circumstances, transparent communication about what monitoring occurs, minimum necessary access principles limiting employer access to work-related data, procedures requiring employee notice before device searches, and alternatives like corporate-owned personally enabled (COPE) devices providing employer control over work functions while preserving employee personal device privacy. Employees should understand BYOD policies before enrolling, back up personal data regularly, and recognize that device use for work may reduce personal privacy expectations. Employment lawyers and privacy professionals should collaborate on policies balancing organizational security needs with employee privacy rights.
Option B is incorrect because device storage capacity is a technical consideration rather than a privacy concern. While storage management might be relevant to BYOD programs, it doesn’t implicate employee privacy interests like employer access to personal information does.
Option C is incorrect because cellular plan data limits are financial and technical matters for employees to consider when using personal devices for work, but they’re not privacy concerns about information access or monitoring.
Option D is incorrect because device warranty coverage is a property and maintenance issue rather than a privacy concern. While employees might worry about warranty impacts of work use, this doesn’t relate to privacy or information protection.
Question 134:
A retailer experiences a data breach exposing customer payment card information. Which law specifically governs payment card data security standards the retailer must follow?
A) Payment Card Industry Data Security Standard (PCI DSS) – contractual obligation
B) Gramm-Leach-Bliley Act
C) Federal Trade Commission Act
D) Fair and Accurate Credit Transactions Act
Answer: A
The correct answer is option A. PCI DSS is not a law but rather a set of contractual security standards established by the Payment Card Industry Security Standards Council (PCI SSC), which merchants agree to follow through their contracts with payment card brands (Visa, Mastercard, American Express, Discover) and acquiring banks.
While PCI DSS isn’t federal legislation, it functions as mandatory security requirements for entities that store, process, or transmit payment card data through contractual obligations in merchant agreements. PCI DSS establishes comprehensive security requirements including maintaining secure network infrastructure with firewalls and encryption, protecting stored cardholder data through encryption and access controls, implementing vulnerability management through anti-virus and secure systems, enforcing strong access control measures limiting data access on need-to-know basis, regularly monitoring and testing networks for vulnerabilities and intrusions, and maintaining information security policies addressing security requirements. Compliance is verified through self-assessment questionnaires for smaller merchants or external audits for larger processors. Non-compliance can result in fines from payment card brands, increased transaction fees, termination of ability to process card payments, and liability for breach costs. Following breaches exposing payment card data, merchants face card brand assessments and fines, forensic investigation requirements, remediation mandates, and potential lawsuits from issuing banks and consumers. While PCI DSS is contractual rather than regulatory, state breach notification laws may reference PCI DSS compliance when determining breach notification obligations. The FTC has brought enforcement actions under Section 5 against companies with inadequate payment card security, using PCI DSS as an industry standard even though it’s not an FTC regulation. Retailers should understand that while PCI DSS compliance doesn’t guarantee breach immunity or eliminate legal liability, non-compliance creates significant business and legal risks.
Option B is incorrect because the Gramm-Leach-Bliley Act applies to financial institutions, not retailers. While GLBA requires financial institutions to protect customer information, it doesn’t specifically govern retailers’ payment card security.
Option C is incorrect because while the FTC can bring enforcement actions against companies with inadequate security under Section 5’s unfairness authority, the FTC Act itself doesn’t establish specific payment card security standards like PCI DSS provides.
Option D is incorrect because the Fair and Accurate Credit Transactions Act (FACTA) addresses identity theft prevention and credit report accuracy, including requirements for credit card receipt truncation and identity theft red flags, but doesn’t establish comprehensive payment card security standards like PCI DSS.
Question 135:
Under state data breach notification laws, what is the MOST common standard for determining whether notification is required?
A) Likelihood that breach involves risk of harm to individuals
B) Notification required for any unauthorized access to data
C) Only when more than 10,000 records are compromised
D) Notification required only for intentional breaches, not accidental
Answer: A
The correct answer is option A. Most state data breach notification laws require notification when a breach creates a reasonable likelihood of harm or risk to individuals whose information was compromised, rather than requiring notification for every unauthorized access regardless of risk. This risk-based approach allows organizations to conduct assessments before determining notification obligations.
State breach notification laws vary significantly but generally share common elements including defining what constitutes a “breach” (unauthorized acquisition of personal information), specifying what types of data trigger notification (personal information like SSN, driver’s license, financial account information), establishing who must provide notification (entities maintaining or owning data), determining when notification is required based on risk assessment, and setting timeframes for notification (ranging from “without unreasonable delay” to specific timeframes like 30-90 days). The risk assessment standard means organizations must evaluate whether compromised information could be used for identity theft or fraud, whether data was encrypted or otherwise secured, whether perpetrators obtained usable information, and what harm individuals might suffer from the breach. Many states provide safe harbors when data was encrypted or otherwise rendered unusable, recognizing that encrypted data breaches present lower risk. However, some states require notification for any unauthorized access to unencrypted personal information regardless of risk assessment. Organizations operating nationally must comply with all applicable state laws, creating complex multi-jurisdiction compliance challenges. Best practices include conducting thorough breach risk assessments using consistent methodologies, documenting assessment rationale for regulatory accountability, erring toward notification when risk is uncertain, engaging forensic experts and legal counsel for breach evaluation, and preparing notification processes for rapid deployment when required. Recent trends show states moving toward stricter notification requirements with shorter timeframes and broader definitions of regulated data.
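The risk-of-harm analysis sketched above can be expressed as a simple decision helper. This is a hypothetical sketch, not the text of any state statute: the field names, the encryption safe-harbor shortcut, and the single-boolean outcome are simplifying assumptions, and a real assessment must track each applicable state's law.

```python
from dataclasses import dataclass

@dataclass
class BreachFacts:
    """Facts gathered during a breach investigation (illustrative fields)."""
    data_encrypted: bool          # was the compromised data encrypted?
    key_also_compromised: bool    # was the encryption key also taken?
    data_usable_for_fraud: bool   # could the data enable identity theft or fraud?

def notification_likely_required(facts: BreachFacts) -> bool:
    """Rough sketch of the common risk-based standard: encrypted data
    (with keys intact) generally falls within a safe harbor in many
    states; otherwise notification turns on likelihood of harm."""
    if facts.data_encrypted and not facts.key_also_compromised:
        return False  # many states provide an encryption safe harbor
    return facts.data_usable_for_fraud
```

Consistent with the best practices above, the output of any such helper should be documented alongside the assessment rationale, and organizations should err toward notification when the inputs are uncertain.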
Option B is incorrect because most state laws don’t require notification for every unauthorized access regardless of risk. The common standard involves risk assessment determining whether access creates harm likelihood, allowing organizations to avoid unnecessary notifications for low-risk incidents.
Option C is incorrect because record volume thresholds typically relate to specific additional notification obligations (like notifying state attorneys general or credit bureaus) rather than determining whether notification is required at all. States don’t generally exempt breaches affecting fewer than 10,000 people from all notification requirements.
Option D is incorrect because breach notification requirements apply to both intentional breaches (hacking, insider theft) and accidental breaches (lost laptops, misdirected emails), as long as unauthorized access to personal information occurred. Intent doesn’t determine notification obligations under most state laws.
Question 136:
A company wants to track website visitors across multiple websites using cookies for behavioral advertising. Under which self-regulatory program would the company display an advertising option icon?
A) Digital Advertising Alliance (DAA) Self-Regulatory Principles
B) Network Advertising Initiative (NAI) Code of Conduct
C) Interactive Advertising Bureau (IAB) Standards
D) Federal Trade Commission Guidelines
Answer: A
The correct answer is option A. The Digital Advertising Alliance’s Self-Regulatory Principles for Online Behavioral Advertising include requirements for enhanced notice through the Advertising Option Icon (also called “AdChoices”), a clickable icon appearing in or near online advertisements that provides transparency about data collection and use for behavioral advertising and offers consumers choices about targeted advertising.
The DAA self-regulatory program was developed by advertising industry organizations to address consumer privacy concerns about online behavioral advertising (OBA), which involves tracking consumers’ online activities across websites and over time to deliver targeted advertisements. The program requires participating companies to provide enhanced notice through the AdChoices icon linking to information about data collection and use for advertising, meaningful choice allowing consumers to opt out of behavioral advertising, reasonable security protections for collected data, accountability through enforcement mechanisms, and special protections for sensitive data categories. The blue triangle AdChoices icon appears in or near advertisements; clicking it takes users to an explanation of why they saw the ad and to tools for opting out of interest-based advertising from participating companies. While self-regulatory, the program has FTC support as an industry best practice and non-compliance can trigger FTC enforcement under Section 5 unfairness or deception authority. Companies participating in online behavioral advertising should register with DAA, implement the AdChoices icon on ads, honor consumer opt-out choices, and maintain compliance with DAA principles. Limitations of self-regulation include voluntary participation, potential gaps in enforcement, and limited consumer awareness of the program. Privacy advocates argue self-regulation doesn’t provide sufficient protection, pointing to complexity of opt-out processes and continued tracking after opt-out for some purposes. Nevertheless, DAA remains the primary industry self-regulatory framework for behavioral advertising transparency in the U.S.
Option B is incorrect because while the NAI Code of Conduct addresses behavioral advertising practices for NAI members, it doesn’t specifically require the Advertising Option Icon. NAI has its own opt-out mechanisms but the AdChoices icon is a DAA initiative.
Option C is incorrect because the IAB develops technical standards and best practices for digital advertising but doesn’t operate the Advertising Option Icon program. IAB collaborates with DAA but the icon program is specifically a DAA initiative.
Option D is incorrect because the FTC provides guidance and enforces against deceptive practices but doesn’t operate the AdChoices icon program. The FTC supports industry self-regulation like DAA but doesn’t directly manage the icon program.
Question 137:
An organization implements a privacy program and wants to measure its effectiveness. Which metric provides the BEST indication of program maturity?
A) Percentage of employees completing privacy training annually
B) Number of privacy policies published
C) Privacy team budget size
D) Number of third-party vendors
Answer: A
The correct answer is option A. The percentage of employees completing privacy training annually provides a meaningful indicator of privacy program maturity because it demonstrates organizational commitment to privacy awareness, cultural integration of privacy principles, consistent application of privacy practices across the workforce, and ongoing education keeping pace with evolving privacy requirements and risks.
Effective privacy programs require organization-wide awareness and accountability, making comprehensive privacy training critical to program success. Training completion rates indicate whether privacy education reaches the entire workforce rather than remaining isolated in compliance departments, whether the organization prioritizes privacy through mandatory training requirements, and whether privacy principles are integrated into daily operations through employee education. High completion rates suggest management support for privacy initiatives, effective training delivery mechanisms reaching dispersed workforces, and organizational culture valuing privacy protection. However, training completion alone doesn’t guarantee program effectiveness – training quality, behavior change resulting from training, and practical application of privacy principles matter more than mere completion statistics. Comprehensive privacy metrics should include multiple indicators such as privacy impact assessment completion rates for new projects, data subject request response timeliness, incident response effectiveness, vendor privacy compliance levels, and privacy-by-design implementation in product development. Training completion should be measured alongside behavioral indicators like privacy violation rates, employee reporting of privacy concerns, and audit findings to provide complete program assessment. Organizations should track training completion by department, role, and time period, investigate barriers to completion, ensure training content remains current and relevant, and measure knowledge retention and behavior change beyond simple completion.
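The per-department tracking recommended above is straightforward to compute. A minimal sketch, assuming training records arrive as (department, completed) pairs — the function name and record shape are illustrative, not drawn from any framework:

```python
from collections import defaultdict

def completion_rates(records: list) -> dict:
    """Compute per-department privacy training completion rates from
    (department, completed) records. One maturity indicator among many,
    meant to be read alongside behavioral metrics, not in isolation."""
    totals = defaultdict(int)
    done = defaultdict(int)
    for dept, completed in records:
        totals[dept] += 1
        if completed:
            done[dept] += 1
    return {dept: done[dept] / totals[dept] for dept in totals}
```

As the explanation above stresses, a high completion rate is only meaningful when paired with indicators of behavior change, such as violation rates and audit findings.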
Option B is incorrect because the number of privacy policies published doesn’t indicate program effectiveness or maturity. Organizations might have numerous policies that aren’t followed, while effective programs might have fewer but better-implemented policies. Policy quantity doesn’t equal program quality.
Option C is incorrect because privacy team budget size might enable program activities but doesn’t directly measure program maturity or effectiveness. Well-funded programs can still be ineffective, while resource-constrained programs might achieve strong results through strategic priorities and organizational commitment.
Option D is incorrect because the number of third-party vendors is a risk factor requiring management rather than a program maturity metric. More vendors typically increase privacy risk and complexity without indicating program effectiveness. Vendor count should be monitored but doesn’t measure program maturity.
Question 138:
A company uses algorithmic decision-making for employment hiring decisions. Under the FTC Act Section 5, what is a key concern regarding this practice?
A) Whether the algorithm produces discriminatory outcomes that harm consumers
B) Whether the algorithm is proprietary or open source
C) Whether the algorithm runs on cloud or local servers
D) Whether employees understand how the algorithm works
Answer: A
The correct answer is option A. Under FTC Act Section 5’s prohibition on unfair or deceptive practices, algorithmic decision-making that produces discriminatory outcomes causing substantial injury to consumers (including job applicants) could constitute an unfair practice, particularly when the harm isn’t reasonably avoidable and isn’t outweighed by benefits to consumers or competition.
The FTC has increasing focus on algorithmic fairness and artificial intelligence, recognizing that automated decision systems can embed and amplify bias, producing discriminatory outcomes based on race, gender, age, or other protected characteristics. Section 5 enforcement concerns include whether algorithms produce outcomes discriminating against protected classes, whether companies make deceptive claims about algorithm accuracy or fairness, whether inadequate testing or monitoring allowed biased outcomes to persist, whether consumers understand how decisions affecting them are made, and whether companies have reasonable security protecting algorithm inputs and outputs. The FTC has stated that use of AI and algorithms doesn’t exempt companies from existing consumer protection laws, and claims that “the algorithm did it” don’t absolve responsibility. Companies using algorithmic decision-making should conduct fairness testing assessing whether algorithms produce disparate impacts, implement monitoring detecting bias in real-world outcomes, maintain transparency about automated decision-making use, provide human review mechanisms for consequential decisions, and ensure reasonable data security. The employment context is particularly sensitive because hiring decisions significantly impact individuals’ economic opportunities. While the EEOC primarily enforces employment discrimination laws, the FTC could address algorithmic hiring practices under Section 5 when they produce unfair consumer harm or involve deceptive claims about algorithm accuracy. Organizations should collaborate with legal, HR, data science, and ethics teams ensuring algorithms used for consequential decisions are fair, transparent, and regularly audited.
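One common screen for the disparate impacts mentioned above is the EEOC’s four-fifths (80%) rule: a group whose selection rate falls below 80% of the highest group’s rate is flagged for potential adverse impact. The sketch below is a screening heuristic only, not a legal determination, and the group labels are placeholders:

```python
def four_fifths_check(selection_rates: dict) -> dict:
    """Flag groups whose selection rate is below 80% of the highest
    group's rate (the four-fifths rule). Inputs are rates in [0, 1]
    keyed by group label; a True flag means potential adverse impact
    warranting closer statistical and legal review."""
    if not selection_rates:
        return {}
    top = max(selection_rates.values())
    return {group: rate < 0.8 * top for group, rate in selection_rates.items()}
```

For example, if one group is hired at a 50% rate and another at 30%, the second group’s rate (60% of the first) falls below the four-fifths threshold and would be flagged for the kind of fairness testing and monitoring described above.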
Option B is incorrect because whether algorithms are proprietary or open source doesn’t determine legality under Section 5. Both proprietary and open source algorithms can produce discriminatory outcomes, and companies are responsible for ensuring fairness regardless of algorithm source.
Option C is incorrect because the computational infrastructure (cloud versus local servers) isn’t the FTC’s primary concern regarding algorithmic fairness. While security considerations might vary by infrastructure, the key issue is whether algorithms produce fair, non-discriminatory outcomes.
Option D is incorrect because while transparency and explainability are important considerations, the primary concern is whether algorithms produce discriminatory or harmful outcomes. Even if employees understand algorithms, discriminatory results would still violate Section 5 principles.
Question 139:
A fitness tracking app collects users’ health and location data. Under which FTC framework should the app company evaluate its data practices?
A) FTC’s Fair Information Practice Principles (FIPPs)
B) HIPAA Privacy Rule
C) FDA medical device regulations
D) OSHA workplace safety standards
Answer: A
The correct answer is option A. The FTC’s Fair Information Practice Principles provide the framework for evaluating commercial entities’ data practices when sector-specific laws don’t apply. FIPPs establish widely recognized privacy principles that the FTC uses when assessing whether data practices are unfair or deceptive under Section 5 authority.
FIPPs traditionally include notice/awareness (providing clear information about data practices), choice/consent (giving individuals control over information use), access/participation (allowing individuals to view and correct their information), integrity/security (maintaining accurate and secure data), and enforcement/redress (providing mechanisms to enforce principles and address violations). The FTC applies FIPPs when evaluating whether companies adequately protect consumer privacy, provide truthful privacy disclosures, maintain reasonable data security, and honor privacy commitments. For fitness tracking apps collecting health and location data, FTC would assess whether privacy policies clearly disclose data collection and use, users have meaningful choices about data sharing, reasonable security protects sensitive health information, data accuracy is maintained, and privacy promises are kept. Health data from consumer fitness apps typically isn’t covered by HIPAA because apps aren’t healthcare providers, health plans, or healthcare clearinghouses, making them subject to FTC oversight rather than HIPAA requirements. The FTC has brought numerous enforcement actions against health and fitness apps for inadequate security, deceptive privacy claims, and unfair data practices. Companies should implement FIPPs through comprehensive privacy programs including clear privacy policies and consent mechanisms, data minimization collecting only necessary information, purpose limitation using data only for disclosed purposes, reasonable security appropriate to data sensitivity, breach response procedures, and regular privacy assessments. While FIPPs provide guidance, they’re principles rather than specific legal requirements, and implementation details vary by context and business model.
Option B is incorrect because HIPAA applies to covered entities (healthcare providers, health plans, healthcare clearinghouses) and their business associates, not consumer fitness apps that aren’t providing healthcare services. Most health and fitness apps fall outside HIPAA’s scope.
Option C is incorrect because FDA medical device regulations apply to devices intended for medical purposes like diagnosis or treatment, not general wellness apps. While some sophisticated health apps might qualify as medical devices, typical fitness trackers are outside FDA regulation.
Option D is incorrect because OSHA workplace safety standards address employer obligations for safe working conditions, not consumer app data practices. OSHA wouldn’t govern a consumer fitness app’s data collection and use.
Question 140:
A social media company wants to use facial recognition technology to automatically tag users in photos. Under Illinois’ Biometric Information Privacy Act (BIPA), what is REQUIRED?
A) Obtain written consent and provide specific disclosures before collecting biometric data
B) Only notify users in the privacy policy
C) Register with the state privacy authority
D) Conduct annual security audits
Answer: A
The correct answer is option A. Illinois’ Biometric Information Privacy Act requires private entities collecting biometric identifiers (including facial recognition data) to obtain informed written consent from individuals before collecting their biometric information, after providing specific disclosures about the collection, purpose, and retention period.
BIPA establishes strict requirements for biometric data collection including written policies establishing retention schedules and destruction guidelines for biometric data, informed written consent before collecting biometric identifiers or information, disclosure of the specific purpose and length of time biometric data will be collected, stored, and used, prohibition on profiting from biometric data without consent, reasonable security measures protecting biometric information, and prohibition on selling, leasing, or trading biometric data. BIPA defines biometric identifiers broadly to include retina or iris scans, fingerprints, voiceprints, hand or face geometry, and other biological characteristics used to identify individuals. The Act provides private right of action allowing individuals to sue for violations, with liquidated damages of $1,000 per negligent violation or $5,000 per intentional or reckless violation, plus attorneys’ fees. This private enforcement mechanism has resulted in substantial class action settlements, including a $650 million settlement with Facebook for photo tagging features using facial recognition without proper BIPA compliance. Companies operating in Illinois or serving Illinois residents should carefully evaluate whether technologies collect biometric information under BIPA’s definition, obtain compliant written consent before collection with specific required disclosures, implement biometric data retention and destruction policies, maintain reasonable security for biometric information, and avoid selling or profiting from biometric data. BIPA’s strict requirements and private right of action make it one of the strongest biometric privacy laws in the U.S., and several other states have enacted or proposed similar legislation.
Option B is incorrect because BIPA requires more than privacy policy notification. Companies must obtain informed written consent after providing specific disclosures before collecting biometric data, not simply mention collection in general privacy policies.
Option C is incorrect because BIPA doesn’t require registration with a state privacy authority. The Act focuses on consent, disclosure, and security requirements rather than regulatory registration or licensing systems.
Option D is incorrect because while BIPA requires reasonable security measures protecting biometric information, it doesn’t specifically mandate annual security audits. Companies must implement appropriate security but audit frequency isn’t prescribed by statute.