106. Section 1.9 Starts
Great. So now we move to subsection 1.9, where we have to understand and implement infrastructure monitoring protocols such as NetFlow and SPAN. For this particular subsection, I’ve included five upcoming videos. In those videos, you’ll first learn about NetFlow, then about NetFlow configuration, and then about flexible NetFlow. The following videos will explain SPAN, RSPAN, and ERSPAN, as well as a lab on SPAN. Okay? So please go and watch these five upcoming videos, and then we will have completed subsection 1.9. After those five videos, we’ll start subsection 1.10, which will be the last section in the first module, okay?
107. Understand NetFlow
Now, as technology has advanced and we need to analyse more and more data for security, troubleshooting, and network analysis, we must learn and understand the flow, or flow record. For that reason, we have NetFlow. If you look at the NetFlow versions, you’ll notice that we have NetFlow from version 1 to version 9, with version 5 being the most important and widely used. Now, what’s the difference? The main difference is that version 1 gives us less capability to analyse the fields inside a traffic flow, while version 9 gives us much more capability, which means many more options to see inside the flow and the types of headers and fields it contains. We’ll see how many fields we can analyse in the different NetFlow versions. Now suppose, for example, you have a source and a destination, and you are using NetFlow.
So, what are the things you can look into? Again, we’ll see that we have multiple options to verify, but at minimum we can check the source and destination IP addresses, the source and destination port numbers, the protocol, packets sent, packets received, any tagging related to source and destination, and the various TCP flags. This information is used to answer the five Ws. Who is sending: what is the source IP? Where is it sending: what is the destination IP? So "who" and "where" relate to the addresses. What are they sending: which application are they using, and what are their port numbers? How much: you can see how many packets or bytes they are sending, via the packet counter and byte counter. And when: the timestamp for when the transaction occurred. So we are getting five answers: who, where, what, how, and when. That’s important for doing analysis, whether for security or general network analysis. If you compare this to a phone book, you’ll notice that NetFlow gives you the same kind of record about a flow: who, what, where, when, and all the other information. Now the next important question is: which version should we use? Again, we have versions 1 through 9, and version 5 is the most popular.
So here you can see, for example, what the content of the popular version, version 5, is, along with the description. We can check the source and destination addresses; the next hop; the SNMP interface indexes; the packet and byte counts for the flow; the system uptime at the start and end of the flow; the TCP/UDP source and destination port numbers; and the TCP flags. So you can see this key information. But the argument is still there: version 5 is not giving us everything inside the flow. What version should we use if we want to see more fields? You can see in this chart that version 5 only provides 18 exportable fields, it is restricted to IPv4, and it uses a fixed-length, single flow cache. Okay, so version 5 has some advantages, but it also has disadvantages. Then we have version 9. Version 9 is template-based and much more robust. It also includes MPLS and BGP support fields, and it supports around 104 exportable fields. So we have 18 versus roughly 104. However, it has limitations as well: IPv6 flows are exported within IPv4 packets, it uses more memory, it has slower performance, and it still uses a single flow cache.
So you can see the trade-off: version 5 gives you ease of use, but version 9 consumes much more resources. If it is simple to use, it has fewer fields; if you use the more robust option, you have higher memory requirements, slower performance, and so on. Okay. Then we have flexible NetFlow, which again uses version 9, so you can consider it an extension of NetFlow version 9. It supports flow monitoring, selectable key fields, IPv6, and NBAR, the Network-Based Application Recognition protocol, which Cisco created specifically to analyse applications. I have given a lot of theory related to NBAR in my SD-WAN curriculum, because on a Cisco device, such as an ISR, NBAR can recognise the metadata of applications and can recognise 1400+ applications, more than the other vendors (although they also claim they can recognise more and more). So with NBAR we can recognise the application, and flexible NetFlow supports this.
108. NetFlow Configuration
In this session, we will learn about the terminology and how all of these pieces fit together before configuring NetFlow. So let’s start. First of all, we know that we are going to analyse the traffic flow. Whenever I have a flow, that particular flow has various key and non-key fields. So what do “key” and “non-key” mean? We will go and understand that, but at this point we can think of it as: there are some mandatory things and some optional things that we may want to collect or measure, correct?
So accordingly, we can categorise these as key and non-key fields, as we’ll see. Starting with the flow: it has key fields like source IP, destination IP, port number, protocol, et cetera. Then we have the flow record. For the record, you can think of an office filing system: in a file, you put all the records of all the customers or clients. Similarly, a flow record defines what you want to collect, and it may contain key values, non-key values, or a mix of both, correct? Generally, it is the key fields plus any non-key values. Then we have the flow collector, which is where you want to collect this flow: the destination server whose IP we give as the collector. Nowadays we use template-based configuration; you will see the same thing if you check the configuration inside SD-WAN. So both things are there; we need to create the template as well.
The template is the format in which you enter various parameters. Then the exporter: the exporter defines where you are going to send the collected information for analysis. Finally, the NetFlow generator, which is nothing but the device that generates the flow, the device whose flows we want to analyse. Okay, now let’s go over all of these terms again. When we do the configuration, or at least when we see the configuration, we’ll see where all of these options are used. The traditional NetFlow configuration is very easy and straightforward. What you have to do is, first of all, define the export version, then identify your destination IP and the port number.
So the destination IP and the port number: this IP is your NetFlow collector IP, or any server of your choice. Then, on the interface you want to enable, I can use ip flow ingress, and I can use ip flow egress; both options are there. Let me quickly log into the device and demonstrate how you can perform these configurations. So here I am. I can go to ip flow-export version, and here you can see that we have versions 1, 5, and 9. I can use 5, then ip flow-export destination; I can give the IP of my server in the 192.168.56.0/24 subnet, and then the port number, say 2055. Then I can go to the interfaces. So I have an interface, say 1/5; I can use ip flow, and here you can see that we have the option. First of all, I should make this a routed (IP) interface, because it was an L2 interface by default, and then I can use ip flow ingress and egress if I want.
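Putting these commands together, here is a minimal sketch of the traditional NetFlow configuration. The interface name, server address, and port are illustrative placeholders based on this lab (2055 is a commonly used NetFlow collector port):

```
! Traditional NetFlow - IOS sketch (addresses/interfaces are placeholders)
ip flow-export version 5
ip flow-export destination 192.168.56.100 2055   ! collector IP and port

interface FastEthernet1/5
 no switchport            ! make it a routed (L3) interface first
 ip address 192.168.56.1 255.255.255.0
 ip flow ingress          ! account flows entering this interface
 ip flow egress           ! optionally account flows leaving as well
```

You can then verify the locally cached flows with `show ip cache flow`.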
We can go and check the configuration, and if you want to filter, you can use the flow option. So we can see that the configuration for traditional NetFlow is very simple and straightforward. Now the next very important thing we have is flexible NetFlow. Most of the time, you’ll notice that we’re using flexible NetFlow, and the key thing there is that you should understand what your key fields are versus your non-key fields. So we have mandatory and optional items; anything beginning with the match option is a key field. So ToS (type of service), protocol, source and destination address, port number, et cetera. Then, anything being collected begins with the collect option and is a non-key field. So TCP flags, counter bytes long, counter packets long, timestamp absolute first, et cetera. Again, here is how you do the configuration. You can specify the export version you want to use, say ip flow-export version 9, and then we can use the flow commands. Let’s do the configuration for this as well: say flow record my-record, and then inside the record, say match.
So here we can see that with this particular version, the 12.4 image, flexible NetFlow is not supported. So what we’ll do is check this configuration on another version where we have FNF support. So here, you can see that we can go and create the flow record. Again, we learned that NBAR2 is supported, and since we have this network-based application recognition feature, Cisco has nowadays also released the NBAR VM, which you can check inside the SD-WAN software section of Cisco software downloads. So we now have this feature to analyse the application; let’s see how it works. Whenever traffic passes through the router or the device where you have enabled the NBAR capability, suppose someone is surfing a website; the packet will go and hit this device, and NBAR is applied as per this record. So here you can see that the flow record matches the IPv4 source address and destination address.
The key thing here is that it is matching the application name as well. So once the application matching criteria are enabled inside NetFlow, the flow can hit and match that application, and in the flow you can certainly see the source IP, destination IP, everything, and then the matched application name as well. Okay, so that’s the power we have with the integration of flexible NetFlow and NBAR: NBAR gives us much more visibility inside the flow of that particular traffic. All right, so how can we configure it? You can see here that you create the exporter, which is your destination where you want to export, then match the key and non-key fields with the match and collect keywords. Then inside the monitor, you call both of them, the flow exporter and the flow record, and then obviously you should apply it on the interface. So these are the steps we can take to enable NetFlow. The point here is: how many match statements and how many collect statements do we need? That depends on what exactly we want to analyse, and there are some use cases.
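The steps just described can be sketched as the following flexible NetFlow configuration with NBAR2 application matching. All names, the exporter address, and the interface are hypothetical placeholders:

```
! Flexible NetFlow with NBAR2 application visibility (sketch)
flow record MY-RECORD
 match ipv4 source address
 match ipv4 destination address
 match application name          ! key field, resolved via NBAR2
 collect counter bytes long
 collect counter packets long

flow exporter MY-EXPORTER
 destination 192.168.56.100      ! collector IP (placeholder)
 transport udp 2055
 export-protocol netflow-v9

flow monitor MY-MONITOR
 record MY-RECORD
 exporter MY-EXPORTER

interface GigabitEthernet0/1
 ip flow monitor MY-MONITOR input
```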
So, for example, for client-server communication, you can match the DSCP, protocol, source and destination addresses, and source and destination ports, and in the non-key fields collect the output interface, bytes, packets, plus the application name. So here is the application name again; this time NBAR is providing the capability, which gives you visibility to check the application flow as well. Next we have a complete replacement for the IP accounting feature, where you can check which IPs are using how much volume. So you can see here that we match the IPv4 DSCP and collect counter bytes long and counter packets long, and then collect the application name; so we have application visibility, IP accounting, and DSCP matching. Then, if you want to analyse the QoS queue hierarchy, this is the QoS use case. We already know about QoS; we have learned about that. You’ve got the parent policy and the child policy; inside the child you have the class map, and inside the class map you are matching certain DSCPs.
How can we match that within the flow? In the optional (non-key) fields, we have options: we can collect the QoS policy, queue, and drop information. Again, this is just for reference: how the different field protocols map to which type of FNF statement. So we can collect application fields such as the SMTP server and sender, POP3, NNTP, SIP, and HTTP, and we have a long list of application criteria that we can check inside the flow. NBAR is working behind the scenes. You can see that for any of the flows, it will extract the URL, the user agent, the HTTP referer, and the HTTP host; that’s the capability and the visibility we have with NBAR. And finally, we have one example here of why you might want this record option. Here you can see this flow record and what’s in that record that you want to match: the data-link MAC, the VLAN, the TTL, the protocol, the source and destination addresses, and finally the optional fields, that is, the key and non-key options you can match and collect.
109. Flexible NetFlow
Now, let’s see how we can do the configuration for flexible NetFlow; I’m running 15.0. First of all, I can give the export version, and then we can create the flow exporter. So let’s do that: say flow exporter my-exporter, and then I can give the destination, my local server. Then we should create the flow record; for that we have flow record, say my-record, and then I can match various parameters, such as the destination address.
Match: these are the key values we are matching here, like the source address; again, you can see with the match option what things we have. So we have the option to match the application via NBAR as well, and I can use match and then application name, or we can leave it and the system will automatically understand which application is going and match it. You can see that inside match you have application, data-link, flow, interface, IPv4 and IPv6, routing, and transport options. You can see that we can match the BGP-related stats and the traffic index as well. So we have these robust matching options, and then we can use collect; inside collect, we can use counter bytes or counter packets. You can see with the collect option that we can also collect the application name, but that flow record field is already present: we have already given this field as a key, so it’s not required to give it as a non-key. But you can see that we have options related to the non-key fields as well, and some of the options exist in key form too. If you define something as a key, that has higher preference, and it will not be taken as a non-key.
All right, so once we define the flow record, then we create the monitor, and inside the monitor we call all these things we created earlier: we call the exporter, that is my-exporter, and the record, that is my-record. And then finally, you have to apply this monitor on the interface, so I go to my interface and use the flow monitor command with my-monitor in the input direction. It is taking a while because we have enabled application recognition as well; when you enable NBAR, which the application matching uses indirectly, it takes some time to apply. We’ll get the prompt back once it’s finished. So these are the things we should check and enable: define the exporter, define the record, call them inside the monitor, and then apply it. You can apply it in the output direction as well. Then we can verify: show flow monitor, and you can give your monitor’s name if you know it. Okay. Apart from that, we can verify with show flow interface if I know the interface name. Then we have show flow record; if you have multiple records, you can select and check them individually. So this is the way we can enable flexible NetFlow.
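The verification commands mentioned above, collected in one place; the monitor, record, exporter, and interface names are hypothetical placeholders for whatever you configured:

```
show flow monitor                      ! list all configured monitors
show flow monitor MY-MONITOR cache     ! inspect the collected flows
show flow interface FastEthernet1/5    ! which monitors are applied here
show flow record MY-RECORD             ! key/non-key fields of a record
show flow exporter MY-EXPORTER         ! exporter destination and stats
```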
110. Capture the Packet over Data Plane SPAN RSPAN ERSPAN
The next important topic is capturing the packet at the data plane level. For that, we use the popular technique of the Switched Port Analyzer (SPAN).
This is nothing but a technique we can use to mirror a port, that is, to send one copy of the traffic going via the port to a certain destination. So you can see in the diagram that we have a source for the SPAN, which means traffic is coming in, and that traffic is what we want to copy and send to a specific destination. For that, you have to define the source, and then you have to define the destination. At the destination, we can use a popular tool like Wireshark to analyse the traffic, and we’ll verify this in our lab section. Now that we have SPAN in our tool belt, we have other options as well. Assume we want to take a copy of a remote packet, that is, capture a packet from a remote switch over a layer 2 extension. In that case, we can use remote SPAN, the Remote Switched Port Analyzer (RSPAN). Again, you can see the same thing: you have the source and the destination, but you are extending over the layer 2 domain.
Then again, at the destination end, you have the sniffing tool where you can analyse the packet. So we have options related to SPAN and also to RSPAN. Now you can see the configuration, which is quite simple; we will use this configuration for local SPAN. You have to define the monitor session: which interface is your source and which interface is your destination, and that’s it. Then you can open the viewer and check the packets that are going through. In the case of remote SPAN, you have to define a VLAN for RSPAN, and you have two configurations: one related to the source switch and one related to the destination switch. At the RSPAN source, you must configure the monitor session 1 source and the monitor session 1 destination (the RSPAN VLAN). Then at the destination end, again, you define the source (the RSPAN VLAN) and the destination interface. The last flavour we have is the encapsulated remote flavour, ERSPAN. You can see the difference between RSPAN and ERSPAN here: one extends over the layer 2 domain, and the other extends over the layer 3 domain, correct? So in this case, for example, I’m using GRE as the encapsulation. Again, I have the SPAN source and SPAN destination, but the packets that I want to analyse are carried over the GRE session.
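The three flavours side by side, as a hedged sketch; interface numbers, the RSPAN VLAN, the ERSPAN ID, and the IP addresses are placeholders, and exact syntax varies by platform:

```
! Local SPAN
monitor session 1 source interface FastEthernet1/15
monitor session 1 destination interface FastEthernet1/14

! RSPAN - source switch
vlan 999
 remote-span
monitor session 2 source interface FastEthernet1/15
monitor session 2 destination remote vlan 999

! RSPAN - destination switch
monitor session 2 source remote vlan 999
monitor session 2 destination interface FastEthernet1/14

! ERSPAN - source side (IOS-XE style, GRE encapsulated)
monitor session 3 type erspan-source
 source interface GigabitEthernet0/1
 destination
  erspan-id 10
  ip address 10.1.1.2          ! remote analyser end of the tunnel
  origin ip address 10.1.1.1   ! local source of the GRE tunnel
```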
111. SPAN Lab
As we discussed earlier, for ERSPAN, we are encapsulating the traffic between source and destination IPs. So we can see the configuration: on one end, we need to start the monitor session of the ERSPAN source type, where we should define the source interface and the destination parameters. On the other side, where I have the destination and where I have Wireshark behind that router, we should configure the ERSPAN destination: the destination interface, and then the source ERSPAN ID and IP address.
So this way we can capture and then analyse the packets coming from the source, sending them to the destination where we have the sniffer, and there we can analyse the packet. Now in this lab, what I want to show you is how we can do the configuration for SPAN. We want to create a mirror, or copy, of one of the interfaces, and we’ll send those copies to the server. For example, let me share my diagram. Here you can see that I have one switch, and then I have one host connected. What will I do with this particular interface? So let me show you: interface 1/15 will be the source of the traffic, and then I will send this to 1/14, which will become the destination for the traffic. So let me quickly open the CLI.
Meanwhile, you can see the configuration is very straightforward. For the switch, we have a VLAN configured, and the host is actually a router, but we are treating it as a host; here you can see its IP configuration as well. Now if I ping from here to the switch, which is 192.168.56.2, you can see the reachability is there. Then I can go to the switch and enable the VTY lines, say login local. So, if you want, we can check the TCP packets as well, and then I should give username admin password admin; the enable password is also admin. So not only can we check the ping (ICMP) packets, but we can also check the Telnet packets. Now, on the same VTY lines, if you want to check SSH traffic, you can configure transport input telnet ssh, for example, and then configure an IP domain name.
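For reference, the Telnet/SSH preparation on the switch can be sketched like this; the domain name is an assumption I’ve added (any name works, it is just needed for RSA key generation), and the admin/admin credentials are for this lab only:

```
! Enabling Telnet and SSH management access (lab sketch)
username admin password admin
enable password admin
ip domain-name lab.local             ! assumed domain name
crypto key generate rsa modulus 1024 ! RSA keys required for SSH
ip ssh version 2
line vty 0 4
 login local
 transport input telnet ssh          ! allow both Telnet and SSH
```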
And then you have to generate the crypto key; once I have the domain name, the key, and the transport input configured, I can set the SSH version to 2, and then I can verify SSH as well. So let’s go here to switch number one and configure the monitor. Say monitor session 1: what is your source? The source interface is FastEthernet 1/15, and likewise we can give monitor session 1 destination interface FastEthernet 1/14, where you have connected the sniffer. So here you can see the configuration. Again, we have seen that with different platforms your configuration may vary, but the concept is the same: you are mirroring from one place and sending to another. Now I have my Wireshark; I can start my capture, continuing without saving, and the capture has started. Okay, so let’s go here and first do a ping with a larger repeat count, and then I can check show monitor session 1. You can see that this is the configuration.
Now if I go to Wireshark here, you can see this is at the server end as per our diagram. We can see that we are receiving packets from this SPAN session. I can stop it. You’re getting the ICMP requests and responses; if you want to filter, you can filter and analyse this, no problem. Now let’s try to get some packets related to Telnet and SSH. The capture is still going on; let me stop this sequence, and then if I SSH to the switch, let me go back to Wireshark. Here you can see that we have the SSH packets, that we can capture the encrypted traffic, and that everything is correct. So likewise, we can analyse all types of packets. Now we’re inside switch number two, and you can see the SSH session and packet transmission going on. All right, so this is the way we can enable SPAN, and then we can verify it as well.
112. Explain network assurance concepts such as streaming telemetry – software
Hi everyone. Welcome to the last section of this particular module. This section will teach us about network assurance concepts such as streaming telemetry. Again, this is very important, and we’ll see what new innovations Cisco has made inside the switching platforms, inside the Nexus platform. Let’s start. First of all, we know that troubleshooting is actually hard, and often we are raising Cisco TAC cases. When we get on the call, first of all they will ask: do you have a proper network diagram? Do you have access to the devices? Do you have an IP diagram? Do you have these features on? Is this specific device reachable from this one? Do you have this port open? They will ask you basic questions, and we cannot always provide the basic information the TAC engineers request correctly and on time, so an email chain will go on and on. Correct.
Keeping these things in mind, if we have a problem, it is, first and foremost, time-consuming; it will take time to resolve that issue. Why? Because we have a siloed network structure. Assume you have a problem spanning multiple domains: perhaps wired or wireless, plus LAN, plus WAN, et cetera. Devices are specialised, and even the engineers are specialised, so you may have different engineers with different roles, and you have to do some sort of collaboration to solve that issue. Correct. The bottom line is that we don’t have full network visibility, and we don’t have telemetry, or visibility, on the devices, so we don’t have the proper information from the devices to resolve the issue faster.
And you can read all of these bullet points yourself, because the assumption is always that everything is fine and correctly configured: “it was working for the last two or three years, then suddenly stopped working.” Correct. Again, design comes into play as well: how your network scales and other things. But even then, do you have capacity management going on, et cetera? Does that mean you have full visibility? The answer is no; we still don’t have full visibility. But in the modern network, we have that feature, the telemetry feature, on the devices. Now, let’s talk about what this telemetry is doing. You have your network devices, correct? And once you enable the telemetry feature, these devices will send data; obviously, you have some sort of subscriber, some sort of publisher, et cetera.
So, in summary, you’re sending network visibility information to the server and storage. Then, within that entire framework, you have an analytics engine that does the storage and analytics, and then you have a nice GUI that presents you statistical information. That’s the overall cycle of telemetry. Later, in video number two, you’ll find out what telemetry solutions Cisco is providing. Before going there, let’s try to understand telemetry a little more deeply, and let’s see how it is working. The underlying technology that telemetry is using is nothing new. However, things have changed recently, and we will examine why they have changed in the next slide. Times have changed: networks are now dynamic, fast, large, and so on. So to incorporate those things and capture that information, we also need some sort of fast processing software or processing technology as well.
And that is exactly what telemetry will provide us; we will see this in the upcoming slides. You can see that it uses the push method, which makes delivery efficient. Then we have different types of encoding and transport methods, which means we have new types of tools and integration software available on the market. And finally, the data model is changing. I have complete automation courses where we discuss data modeling in much more detail, and in this course also, in the last section, section 5, you will see information about data models. Because we have YANG modules and different types of data models in use, it is actually simple for current hardware and software to send or push that information using various types of programmable architectures. Great. So up to this point, we understood that telemetry will solve your problem, because you are sending the data periodically to the data store and the analytics engine, and then you are getting useful information to resolve the issue quickly. Correct? Now you can see the trend.
I told you that we want real-time statistical data, and we have centralised software for that. We need speed, and we are scaling. Now we’re in a DNA-like (digital network architecture) environment, where there are many network-capable devices. These so-called network-capable devices are not networking devices, but they can understand the network: IoT devices, handheld devices, mobile devices, etc. So you’ll need software that can understand these massive amounts of data. Correct? For data centre visibility, we have use cases like checking CPU, memory, the forwarding table, utilisation, protocol state, events, environment data, et cetera. We want to measure path latency and network performance, like buffer monitoring, microburst detection, et cetera. These are common things; as network engineers, we get tickets related to these categories every day, and if it is high CPU, we do high-CPU-related troubleshooting.
If we find some application dropping or latency issues, we do traceroutes and try to figure those things out, correct? We have the use cases, which you’ll find one by one on the next four or five slides. So let’s look at health-related information: TCAM utilisation, memory, power, temperature, CPU, environmental status, etc. Correct? So this is the environment information. Then protocol state: is my neighbour up or down, what is the failure scenario, is routing working, and so on. Then the buffer-related information: is your buffer full and are packets dropping, are input and output drops happening, et cetera. Correct? Latency: do you have any issues on the path? Do you have a multi-homing or multi-pathing architecture? From server to server, what is your application’s performance? Or from client to server, what’s the performance? Et cetera. SLA: is it being met or not? We have load balancers in between.
What’s the performance of the load balancer? Now, for all of these traditional issues that you see here, we need to get accurate information. We need all the information we can get, so we can figure out where the problem is in minutes or seconds. Ideally, you have a nice dashboard where you can click and see: oh, this is the issue, this is the path, this is the QoS issue, this is the ACL issue, this is the packet drop issue, this is the protocol issue, et cetera. Correct? And the answer is yes: if we enable telemetry on the devices, those devices will send the telemetry information to the software, where I can check all this information. Not only can I check, but some tools are available to help us remediate and resolve the issues. Correct? And in the same way, we have two types of telemetry: one is software, and the other is hardware. First, let’s talk about software.
Software telemetry means your control plane-related telemetry, and while discussing control plane-related telemetry, I just want to make things easy. You need at least two things: you need encoding, how you’re encoding the information, and then, once you have the telemetry information, you need a transport protocol, how you’re sending it. In that regard, the data models come into the picture. We have API-rich data models; we know that today all SDN solutions are embedded with APIs, and the APIs are simple to use. Now, let’s try to understand that. At the very top level, what is happening is that you are getting the telemetry information, then you have the encoding, and then you have the transport. Encoding can be done with JSON, or you can use protobuf (Google Protocol Buffers, GPB), and transport can be done with HTTP, UDP, gRPC, and so on. Correct? Now, I have one complete chart here, and in that chart you’ll see which device is using the DME (Data Management Engine) and which device is supporting the NX-API to do the telemetry. The Nexus 5000 and 6000 are the devices that support neither DME nor the NX-API.
However, you can see that all the Nexus 9K devices support both DME and NX-API. Actually, we should focus more on DME-based telemetry because that is the modern way; NX-API is an older mechanism that Cisco introduced previously. Let’s go back. So, two things here: while you’re collecting telemetry information, you have the encoding method and you have the transport method. Correct? Great. How do we do the configuration? Again, you can see it’s very easy and straightforward. Here, we are using DME as the source, GPB (protobuf) as the encoding, and gRPC as the transport. If you look at the configuration, under telemetry you define a destination-group with the destination IP address, the port (here 50101), the transport protocol, and the encoding. So this portion is your actual transport and encoding; this is the DME with GPB over gRPC configuration. Likewise, here you can see a variant where the transport is HTTP and the encoding is JSON, again just to send the telemetry information to the destination. Now, because we’re talking about software telemetry at the moment, note that software telemetry is structured; it’s not like hardware telemetry, because hardware telemetry comes from the data plane.
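To make the destination-group, transport, and encoding pieces concrete, here is a minimal sketch of the kind of NX-OS model-driven telemetry configuration being described. The IP address, sensor path, and group numbers are placeholder values for illustration, not the exact configuration from the slide:

```
! Hypothetical NX-OS model-driven telemetry configuration sketch:
! DME as the data source, GPB encoding, gRPC transport.
! The destination IP and sensor path are placeholder values.
feature telemetry
telemetry
  destination-group 1
    ip address 10.1.1.100 port 50101 protocol gRPC encoding GPB
  sensor-group 1
    data-source DME
    path sys/intf depth unbounded
  subscription 1
    dst-grp 1
    snsr-grp 1 sample-interval 30000
```

The same structure applies to the HTTP/JSON variant mentioned above: only the `protocol` and `encoding` keywords on the destination line would change.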
With data-plane telemetry, you have a lot of data, so in the next video we’ll talk about how we can parse those data points. Here, software telemetry gives you visibility into the control plane: protocol state information, interface counters, environmental information, and so on. Its data-plane visibility is limited, similar to NetFlow visibility. So the point here is that software telemetry is good and structured, unlike the hardware telemetry we’ll discuss in the next video. But the key idea is that you have your telemetry information, and you use these encoding and transport methods to send it to the destination, where it will be processed and turned into useful information. All right, let’s stop here; in the next session we’ll go and learn about ASIC-specific telemetry, which is nothing but the data-plane type of telemetry.
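To show what "structured" software telemetry means in practice, here is a toy Python sketch of a collector decoding one JSON-encoded telemetry message. The field names (`node_id`, `encoding_path`, `rx_bytes`, and so on) are invented for illustration; real device payloads will differ:

```python
import json

# A hypothetical JSON-encoded software-telemetry message, similar in
# spirit to what a device might push over HTTP to a collector; the
# field names here are invented for illustration.
raw = '''
{
  "node_id": "leaf-101",
  "encoding_path": "sys/intf",
  "collection_time": 1700000000,
  "data": {"eth1/1": {"rx_bytes": 1048576, "tx_bytes": 524288}}
}
'''

msg = json.loads(raw)

# The collector simply decodes the structured payload and extracts the
# counters it cares about before handing them to the processing layer.
total_bytes = sum(
    intf["rx_bytes"] + intf["tx_bytes"] for intf in msg["data"].values()
)
print(msg["node_id"], total_bytes)  # leaf-101 1572864
```

Because the message is structured, the collector needs no custom parser per platform; that is exactly the property that bulk, format-inconsistent hardware telemetry lacks.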
113. Explain network assurance concepts such as streaming telemetry – hardware
This is part two of the telemetry section. In this section, you will understand hardware-specific telemetry and how hardware telemetry is essentially ASIC-related telemetry. We now understand how telemetry works: how information is gathered, then encoded and transported. The same is true of hardware-related telemetry as well. From the ASIC, we get the information, and then we do the encoding and transport to reach the destination, where you have the software from which you analyse the data. Now, in this section we will touch on that software as well, and we will discuss it at the end of this video. Great. First and foremost, the issue with hardware-related or ASIC-related telemetry is that different types of hardware telemetry use different output formats, and the data comes in bulk, so there is a lot of information to process and no common format. That’s the biggest problem. Still, hardware or ASIC-related telemetry is under active development; it is evolving and becoming better and better. That’s one very important thing.
Now, the second thing is what types of hardware telemetry we have. First of all, there is the flow table, and you can see which hardware supports it. From the flow table, you can collect information such as the five-tuple, interface, queue info, flow start/stop time, flow latency, and so on. That’s the kind of output we get from the FX and FX2 hardware. Likewise, from FX and FX2 we can also get flow table events: if a configured trigger fires, you receive an event carrying similar information, such as the five-tuple, interface, buffer, drop, and latency details. It’s event-based information, but we are still getting the data in bulk. Now, if you want more targeted or more unified information, then we have the third option: buffer and queue telemetry. Here, user-defined streaming parameters can be set, and directly from the ASIC the buffer and queue engine produces the telemetry information, which then gets exported to the next handler. Correct? So we have three types: the flow table, flow table events, and buffer and queue telemetry, along with the supported hardware. Again, you can see models such as the 9364C and the 9300-FX2 listed.
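To make the flow-table record concrete, here is a small Python sketch of the kind of fields a hardware flow record carries (five-tuple, interface, queue, start/stop timestamps) and how latency falls out of the timestamps. The class and field names are hypothetical, chosen only to mirror the list above:

```python
from dataclasses import dataclass

# Hypothetical shape of a hardware flow-table record: the ASIC exports
# the five-tuple plus interface, queue, and timing metadata.
@dataclass
class FlowRecord:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: int          # e.g. 6 = TCP
    ingress_intf: str
    queue_id: int
    flow_start_ns: int     # hardware timestamp, nanoseconds
    flow_stop_ns: int

    def latency_ns(self) -> int:
        # Flow duration derived from the two hardware timestamps.
        return self.flow_stop_ns - self.flow_start_ns

rec = FlowRecord("10.0.0.1", "10.0.0.2", 40000, 443, 6,
                 "eth1/1", 3, 1_000_000, 1_250_000)
print(rec.latency_ns())  # 250000
```

A flow table event would carry essentially the same record, but emitted only when a trigger (a drop, a buffer threshold, a timer) fires rather than for every flow.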
So we are exporting directly from the ASIC. Great. Let’s quickly check the differences between software telemetry and hardware telemetry. Software telemetry is concerned with control-plane information: resource utilisation, CPU, memory, and protocol states. Hardware telemetry comes from the ASICs on the line cards and I/O modules, where we get flow telemetry, buffer and queue telemetry, latency, and drop notifications. So far, we have discussed what information telemetry can give you. Who will receive this information, and how will it be processed? Let’s talk a little bit about the architecture as well. What we are doing, with the help of telemetry, is collecting the information, then putting it in the data store, and then we have the applications that can give you visual information, alerts, and some automation. But what are these telemetry destinations? Cisco has its own Network Insights; you can see that, and I will show you on the next slide what the components in the network are. But again, you can see the overall life cycle of the telemetry.
So you have a source of telemetry data, then you’re doing some sort of ingestion, collecting all that information and processing it. Then there are insights to derive: certain algorithms, parameters, and correlation engines will give you the insight information, and then you have the application that presents it. Correct? So, looking at the entire lifecycle, you can see that both the ACI-based solution and the NX-OS-based hardware are capable of collecting the information, storing it in the proper order, deriving some logic from it, and then producing the report. Let’s move on to the next architecture.
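The lifecycle just described (source → ingestion → data store → insight engine → application) can be sketched as a toy pipeline. The record shape, threshold, and rule here are invented purely to illustrate the stages, not any actual Network Insights logic:

```python
# Toy sketch of the telemetry lifecycle: ingest records into a data
# store, run a simple correlation/threshold rule, and emit alerts for
# the application layer. Record fields and thresholds are hypothetical.
data_store: list[dict] = []

def ingest(record: dict) -> None:
    # Ingestion stage: every telemetry record lands in the store first.
    data_store.append(record)

def derive_insights(drop_threshold: int = 100) -> list[str]:
    # Insight stage: a trivial rule flags interfaces whose packet
    # drops exceed the threshold; real engines correlate many signals.
    return [
        f"ALERT: {r['node']}/{r['intf']} drops={r['drops']}"
        for r in data_store if r["drops"] > drop_threshold
    ]

ingest({"node": "leaf-101", "intf": "eth1/1", "drops": 5})
ingest({"node": "leaf-102", "intf": "eth1/7", "drops": 240})
print(derive_insights())  # one alert, for leaf-102
```

The application layer then renders such alerts as dashboards, notifications, or automated remediation actions.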
So you can see that either you manage and monitor a legacy NX-OS network with DCNM, or you have an ACI-based network that you manage and monitor with APIC. In both cases you have access to Network Insights, and you can integrate Network Insights Advisor (NIA) and Network Insights for Resources (NIR). They can be installed as apps, which will provide you with visibility, insights, and proactive troubleshooting. So, how should we proceed? You can obviously check with your Cisco account manager, but you can also go to the DC App Center and install them. The details of how to connect, what the compute requirements are, and so on will be provided either directly by Cisco or by the Cisco account manager. Obviously, knowing your requirements and how you intend to use the telemetry allows you to make use of Network Insights. All right, so these are the details of network assurance with respect to Nexus or data-centre switches, in addition to the telemetry options we have.