Building a Topological Diagram With opCharts

Prerequisites

Please ensure either opCharts or the Opmantek Virtual Machine is installed before using the feature below.

Overview

The Topological Diagram style of Map allows you to dynamically build live, informational diagrams based on the logical Layer 2 connections devices have.

A menu listing of all available Maps can be accessed by selecting Views -> Maps from the opCharts menu bar.

Creating a New Topological Diagram

Join Paul McClendon, an Opmantek Support Engineer, as he demonstrates how to create a topological map in opCharts.

For the letter lovers amongst us

A Topological Diagram must be created before it can be used or added to a Dashboard. To create a new Map, click the blue button with the “+” icon in the top-left corner of the Maps screen (Views -> Maps).

[Image: newmap]
Next, select Topological Diagram from the Map Type drop-down located in the top-left corner.
[Image: Topological Map]

Assign your Topological Map a Map Name – this must be unique; no two maps can have the same Map Name. You can also provide a Description of your Map, which will be displayed on the Maps View page and when adding a Map to a Dashboard.

Options

Title – This is what will be displayed in the Component window’s title bar.

Background Image – Disabled for Topological Diagram style Maps

Layout – Provides auto arrangement of the icons and their connections. Each layout has pros and cons, depending on the network architecture, number of devices, and types of connections found.

Apply Layout – Applies the currently selected layout to the Topological Map

Auto checkbox – When checked, automatically applies the selected Layout option to the map and continues to update the layout as new nodes, neighbours, or subnets are added. Checked by default.

Add Node

The Add Node button allows you to add an individual Node to the Map. You may assign a Display Name, separate from the Node’s internal name, or leave this field blank and no label will be displayed. A specific icon may also be assigned or will be auto-selected from the built-in icon options based on the type of equipment.

Link to Map

If set, the Link to Map option will open a new URL when the link is clicked. You can select either a Map on the current server or, by selecting Custom, use any URL (even to other software/sites). This is especially powerful – allowing you to drill down from a top-level abstract diagram to more in-depth levels of detail.

By default, the Link to Map / Custom option opens the target in the current browser window. However, you can force opCharts to open the link in a new tab/window by enclosing the link URL in double quotes and following it with target=_blank, e.g. "http://someserver.com/en/omk/opCharts/dashboards/myawesomedashboard" target=_blank

[Image: link_to_map]

Once the node is added it may be moved around the Map by left-clicking and dragging it into position.

Add Group

The Add Group button allows you to add all nodes contained within a Group at one time. The Display Name field has no effect on the individual nodes being added.

Add Link

Note: Links may be added manually. However, the true power of a Topological Map is in dynamically drawing the connections between devices and subnets. See Building the Topological Map below.

The Add Link button adds a physical connecting line between 2 Nodes or 2 Groups. You can assign the Link a Link Name, which will be displayed within a bordered box at the centre point of the line between the 2 Nodes. These links are convenient ways to show relationships between components, without linking those relationships to specific interfaces or data patterns. A link can be deleted by right-clicking on the link line and selecting Delete from the pop-up menu.

[Image: The-Link]

Add Interface Link

The Add Interface Link button allows you to add an interactive Link representing an interface’s traffic flow between 2 Node or Group icons. Select your Link Source, the Node providing the Interface, the specific Interface that handles the link, and the Link Destination.

[Image: Interface Link]

The resulting link will be anchored to the 2 Nodes/Groups and display both the inbound and outbound link speeds as a percentage of the available interface speed. The link is also hinged in the middle, allowing some modicum of adjustment for readability.

[Image: link_sample]

Note: The Link Source and the Node providing the Interface are not required to be the same; the GUI fills in the node name as a suggestion because it is the most likely scenario. If required, the link source and/or link destination can be left blank and the endpoint will remain open for moving to a convenient location.

Add Placeholder

The Add Placeholder button allows you to add an icon to the Network Map that is not linked to a specific Node or Group (like “the Cloud”). Similar to both Nodes and Groups you can assign a Display Name, select a Display Icon, and Link the icon to another Dashboard.

Building the Topological Map

While you can manually add links and Interface Links to a Topological Map, the true power lies in using the logical information the network contains to create those connections.

Add Neighbours

Right-click on a node and select Add Neighbours.  Neighbours are direct connections found between devices but can also be virtual machines hosted by a VMware host.

[Image: add-neighbors]

Add Subnets

Right-click on a node and select Add Subnets. Subnets are a logical connection between nodes rather than a direct “physical” connection, but they help to organise and understand logical layouts.

Editing a Node

Nodes on the Topological Map can be edited. Return to edit mode (open the Map by selecting Edit from the Map view, or click the Edit button in the top-right corner of the Component window), then right-click on the Node you want to edit and select Edit from the pop-up menu.


Getting Metrics, Reachability, Availability from your Enterprise Network Monitoring System

Managing a large, complex environment with ever-changing operational states is challenging. To assist, NMIS, a Network Management System which performs performance management and fault management simultaneously, monitors the health and operational status of devices and creates several individual metrics as well as an overall metric for each device. This article explains what those metrics are and what they mean.

Summary

Consider this in the context that a network device offers a service, and the service it offers is connectivity. While a router or switch is up and all its interfaces are available, it is truly up; when it has no CPU load it is healthy. As the interfaces become utilised and the CPU gets busy, it has less capacity remaining. The following statistics are considered part of the health of the device:

  • Reachability – is it up or not;
  • Availability – interface availability of all interfaces which are supposed to be up;
  • Response Time;
  • CPU;
  • Memory;

All of these metrics are weighted and a health metric is created. This metric, when compared over time, should always indicate the relative health of the device. Interfaces which aren’t being used should be shut down so that the health metric remains realistic. The exact calculations can be seen in the runReach subroutine in nmis.pl.

Metric Details

Many people wanted network availability, and many tools generated availability based on ping statistics and claimed success. This, however, was a poor solution. For example, the switch running the management server could be down and the management server would report that the whole network was down, which of course it wasn’t. Or worse, a device would be responding to a PING while many of its interfaces were down, so while it was reachable, it wasn’t really available.

So it was determined that NMIS would use Reachability, Availability and Health to represent the network. Reachability is the pingability of the device. Availability is (in the context of network gear) whether the interfaces which should be up are in fact up; interfaces which are “no shutdown” (ifAdminStatus = up) should be up, so a device with 10 interfaces of ifAdminStatus = up and ifOperStatus = up on 9 of them would be 90% available.

Health is a composite metric, made up of many things depending on the device, e.g. CPU and memory for a router. Something interesting here is that part of the health is made up of an inverse of interface utilisation, so an interface which has no utilisation will have a high health component, while an interface which is highly utilised will reduce that metric. So the health is a reflection of load on the device and will be very dynamic.

The overall metric of a device is a composite metric made up of weighted values of the other metrics being collected. The formula for this is configurable so you can weight Reachability to be higher than it currently is, or lower, your choice.

Availability, ifAdminStatus and ifOperStatus

Availability is the interface availability, which is reflected in the SNMP values ifAdminStatus and ifOperStatus. If an interface is ifAdminStatus = up and ifOperStatus = up, that is 100% for that interface. If a device has 10 interfaces and all are ifAdminStatus = up and ifOperStatus = up, that is 100% for the device.

If a device has 9 interfaces with ifAdminStatus = up and ifOperStatus = up, and 1 interface with ifAdminStatus = up and ifOperStatus = down, that is 90% availability. It is the availability of the network services which the router/switch offers.
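As a rough illustration of that arithmetic (a sketch only, not the actual NMIS code, which lives in the runReach subroutine in nmis.pl), the 10-interface example above could be computed like this in Python:

# Sketch of the availability calculation described above; not the NMIS implementation.
interfaces = (
    [{"ifAdminStatus": "up", "ifOperStatus": "up"}] * 9    # 9 interfaces admin-up and oper-up
    + [{"ifAdminStatus": "up", "ifOperStatus": "down"}]     # 1 interface admin-up but oper-down
)

admin_up = [i for i in interfaces if i["ifAdminStatus"] == "up"]
oper_up = [i for i in admin_up if i["ifOperStatus"] == "up"]

availability = 100.0 * len(oper_up) / len(admin_up)
print(availability)  # 90.0 -- the availability of the services the router/switch offers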

Configuring Metrics Weights

In the NMIS configuration, Config.nmis, there are several configuration items for the metrics. These are as follows:
'metrics' => {
'weight_availability' => '0.1',
'weight_cpu' => '0.2',
'weight_int' => '0.3',
'weight_mem' => '0.1',
'weight_response' => '0.2',
'weight_reachability' => '0.1',
'metric_health' => '0.4',
'metric_availability' => '0.2',
'metric_reachability' => '0.4',
'average_decimals' => '2',
'average_diff' => '0.1',
},

The health metric uses items starting with “weight_” to weight the values into the health metric. The overall metric combines health, availability and reachability into a single metric for each device and for each group and ultimately the entire network.

If more weight should be given to interface utilisation and less to interface availability, these metrics can be tuned; for example, weight_availability could become 0.05 and weight_int could become 0.35. The resulting weights (weight_*) should always add up to 1 (i.e. 100%).
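A minimal sketch of that tuning in Python (illustrative only; in practice you edit the values in Config.nmis), including a check that the weights remain normalised:

# Illustrative only: tuned health weights should still sum to 1 (100%).
weights = {
    "weight_reachability": 0.1,
    "weight_availability": 0.05,  # lowered from 0.1
    "weight_cpu": 0.2,
    "weight_mem": 0.1,
    "weight_int": 0.35,           # raised from 0.3
    "weight_response": 0.2,
}
assert abs(sum(weights.values()) - 1.0) < 1e-9, "weight_* values must add up to 1 (100%)"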

Other Metrics Configuration Options

Introduced in NMIS 8.5.2G are some additional configuration options to tune how this all works and to make it more or less responsive. The first two options are metric_comparison_first_period and metric_comparison_second_period, which default to -8 hours and -16 hours.

These are the two main variables which control the comparisons you see in NMIS, the real-time health baselining. Calculations made from now back to metric_comparison_first_period (8 hours ago) are compared to calculations made from metric_comparison_first_period (8 hours ago) back to metric_comparison_second_period (16 hours ago).

This means NMIS is comparing, in real time, data from the last 8 hours to the 8-hour period before that. You can make these periods shorter or longer. In the lab I am running -4 hours and -8 hours, which makes the metrics a little more responsive to load and change.

The other new configuration option is metric_int_utilisation_above, which is -1 by default. This means that interfaces with 0 (zero) utilisation will be counted in the overall interface utilisation metrics. If you have a switch with 48 active interfaces carrying basically no utilisation and two uplinks with 5 to 10% load, the average utilisation of the 48 interfaces is very low. NMIS now picks the higher of the input and output utilisation and only counts interfaces with utilisation above this configured amount; setting it to 0.5 should produce more dynamic health metrics.
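A simplified Python sketch of that behaviour, based on the description above rather than on the actual nmis.pl code:

# Simplified sketch of metric_int_utilisation_above; not the NMIS implementation.
metric_int_utilisation_above = 0.5   # default is -1, which counts every interface

interface_util = [(0.0, 0.0), (0.1, 0.2), (5.0, 8.0), (10.0, 6.0)]  # (input %, output %) per interface

counted = [max(inp, out) for inp, out in interface_util
           if max(inp, out) > metric_int_utilisation_above]
average_util = sum(counted) / len(counted) if counted else 0.0
print(average_util)  # 9.0 -- only the two busy uplinks contribute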

Metric Calculations Examples

Health Example

At the completion of a poll cycle for a node, the health metrics which have been cached are ready for calculating the health metric of the node. Let’s say the results for a router were:

  • CPU = 20%
  • Availability = 90%
  • All Interface Utilisation = 10%
  • Memory Free = 20%
  • Response Time = 50ms
  • Reachability = 100%

The first step is that the measured values are weighted so that they can be compared correctly. If the CPU load is 20%, its weight for the health calculation becomes 90%; if the response time is 100ms it becomes 100%, but a response time of 500ms would become 60% (there is a subroutine, weightResponseTime, for this calculation).

So the weighted values would become:

  • Weighted CPU = 90%
  • Weighted Availability = 90% (does not require weighting, already in % where 100% is good)
  • Weighted Interface Utilisation = 90% (100 less the actual total interface utilisation)
  • Weighted Memory = 60%
  • Weighted Response Time = 100%
  • Weighted Reachability = 100% (does not require weighting, already in % where 100% is good)

NB. For servers, the interface weight is divided by two and used equally for interface utilisation and disk free.

These values are now dropped into the final calculation:

weight_cpu * 90 + weight_availability * 90 + weight_int * 90 + weight_mem * 60 + weight_response * 100 + weight_reachability * 100

which becomes “0.2 * 90 + 0.1 * 90 + 0.3 * 90 + 0.1 * 60 + 0.2 * 100 + 0.1 * 100”, resulting in 90% for the health metric.
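The same arithmetic as a small Python sketch (a worked example only, not the nmis.pl code):

# Worked example of the health metric, using the default weight_* values from Config.nmis.
weights = {"cpu": 0.2, "availability": 0.1, "int": 0.3,
           "mem": 0.1, "response": 0.2, "reachability": 0.1}
weighted_values = {"cpu": 90, "availability": 90, "int": 90,
                   "mem": 60, "response": 100, "reachability": 100}

health = sum(weights[k] * weighted_values[k] for k in weights)
print(round(health, 2))  # 90.0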

The calculations can be seen in the collect debug, nmis.pl type=collect node=<NODENAME> debug=true
09:08:36 runReach, Starting node meatball, type=router
09:08:36 runReach, Outage for meatball is
09:08:36 runReach, Getting Interface Utilisation Health
09:08:36 runReach, Intf Summary in=0.00 out=0.00 intsumm=200 count=1
09:08:36 runReach, Intf Summary in=0.06 out=0.55 intsumm=399.39 count=2
09:08:36 runReach, Intf Summary in=8.47 out=5.81 intsumm=585.11 count=3
09:08:36 runReach, Intf Summary in=0.00 out=0.00 intsumm=785.11 count=4
09:08:36 runReach, Intf Summary in=0.06 out=0.56 intsumm=984.49 count=5
09:08:36 runReach, Intf Summary in=0.00 out=0.00 intsumm=1184.49 count=6
09:08:36 runReach, Intf Summary in=8.47 out=6.66 intsumm=1369.36 count=7
09:08:36 runReach, Intf Summary in=0.05 out=0.56 intsumm=1568.75 count=8
09:08:36 runReach, Calculation of health=96.11
09:08:36 runReach, Reachability and Metric Stats Summary
09:08:36 runReach, collect=true (Node table)
09:08:36 runReach, ping=100 (normalised)
09:08:36 runReach, cpuWeight=90 (normalised)
09:08:36 runReach, memWeight=100 (normalised)
09:08:36 runReach, intWeight=98.05 (100 less the actual total interface utilisation)
09:08:36 runReach, responseWeight=100 (normalised)
09:08:36 runReach, total number of interfaces=24
09:08:36 runReach, total number of interfaces up=7
09:08:36 runReach, total number of interfaces collected=8
09:08:36 runReach, total number of interfaces coll. up=6
09:08:36 runReach, availability=75
09:08:36 runReach, cpu=13
09:08:36 runReach, disk=0
09:08:36 runReach, health=96.11
09:08:36 runReach, intfColUp=6
09:08:36 runReach, intfCollect=8
09:08:36 runReach, intfTotal=24
09:08:36 runReach, intfUp=7
09:08:36 runReach, loss=0
09:08:36 runReach, mem=61.5342941922784
09:08:36 runReach, operCount=8
09:08:36 runReach, operStatus=600
09:08:36 runReach, reachability=100
09:08:36 runReach, responsetime=1.32

Metric Example

The metric calculations are much more straightforward. They are done in a subroutine called getGroupSummary in NMIS.pm; for each node, the availability, reachability and health are extracted from the node’s “reach” RRD file and then weighted according to the configuration weights.

So based on our example before, the node would have the following values:

  • Health = 90%
  • Availability = 90%
  • Reachability = 100%

The formula becomes “metric_health * 90 + metric_availability * 90 + metric_reachability * 100”, resulting in “0.4 * 90 + 0.2 * 90 + 0.4 * 100 = 94”. So this node has a metric of 94, which is averaged with all the other nodes in its group, and across the whole network, to produce the metric for each group and for the entire network.
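As a quick sketch of this in Python (illustrative, not the getGroupSummary code itself):

# Worked example of the overall node metric, using the default metric_* weights.
metric_weights = {"metric_health": 0.4, "metric_availability": 0.2, "metric_reachability": 0.4}
node = {"metric_health": 90, "metric_availability": 90, "metric_reachability": 100}

node_metric = sum(metric_weights[k] * node[k] for k in metric_weights)
print(round(node_metric, 2))  # 94.0 -- then averaged with the other nodes for the group metric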


Understanding the NMIS KPI interface

What is a KPI and why is it relevant for network monitoring?

Key Performance Indicators (KPIs) were introduced into NMIS to provide insight as to why the health of a node was getting better or worse.  As discussed in the article on NMIS Metrics, Reachability, Availability and Health, NMIS tracks the health of a node and provides a single number which indicates what that health is; this is called the Health Metric.  To make up the Health Metric, NMIS tracks many aspects of a node’s health, including:

  • Reachability – Node availability or pingability
  • Availability – Interface availability
  • Response time
  • CPU Utilisation
  • Memory Utilisation
  • Interface Utilisation
  • Disk Utilisation
  • Swap Utilisation

NOTE: Not all nodes have disk and swap, so for some nodes these values are blank; e.g. a Cisco Router will have no value for the disk and swap KPIs.

NMIS has a history of being a Network Management System; the generation of the Metrics and KPIs is something that makes NMIS more than a Network Monitoring System and helps IT professionals by providing better information about their environment to support their decisions. By giving users more information about devices, troubleshooting or improving the health of devices is much easier.

As of NMIS 8.5G, we started storing the individual KPI scores so that it was possible to see the health metric break down over time.  This is now shown at the top of a node view panel in NMIS8 and looks like the image below.

[Image: KPI Scores]

You can think of the KPI Scores like a report card: the student (node) has received 10/10 for English (reachability), 10/10 for Maths (availability), and so on. The KPI Scores in the screenshot above come from the polled data and are scored out of the weighted value. This weighted value is a percentage, so a configuration value of 0.1 means 10%, or a maximum possible KPI score of 10/10.  The table below shows the configuration value and the resulting KPI Score value.

KPI Item | Configuration Item | Configured Weighting | Maximum KPI Score
Reachability | weight_reachability | 0.1 | 10 (10%)
Availability | weight_availability | 0.1 | 10 (10%)
Response | weight_response | 0.2 | 20 (20%)
CPU | weight_cpu | 0.2 | 20 (20%)
Memory | weight_mem | 0.1 | 10 (10%)
Interface | weight_int | 0.3 | 30 (30%)

Because they are not present in all node types, there are two additional KPI values which overload onto the Memory and Interface KPI values: Swap and Disk. These split the weighting of each in half and track it separately; e.g. the Interface KPI is 30% by default, so when the Disk KPI is present the Interface KPI gets a value of 15% and the Disk KPI gets a value of 15%.  The table would look like this when all 8 KPIs are present, as they are for Linux servers.

KPI Item | Configuration Item | Configured Weighting | Maximum KPI Score
Reachability | weight_reachability | 0.1 | 10 (10%)
Availability | weight_availability | 0.1 | 10 (10%)
Response | weight_response | 0.2 | 20 (20%)
CPU | weight_cpu | 0.2 | 20 (20%)
Memory | weight_mem | 0.1 x 50% | 5 (5%)
Swap | weight_mem | 0.1 x 50% | 5 (5%)
Interface | weight_int | 0.3 x 50% | 15 (15%)
Disk | weight_int | 0.3 x 50% | 15 (15%)

The result is that the maximum KPI Score for a node will be 100 or 100%.
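A small Python sketch of how the maximum KPI scores follow from the configured weights, including the 50/50 split for Swap and Disk (illustrative only, not the NMIS code):

# Illustrative: maximum KPI scores derived from the configured weight_* values.
weights = {"reachability": 0.1, "availability": 0.1, "response": 0.2,
           "cpu": 0.2, "mem": 0.1, "int": 0.3}

max_scores = {
    "Reachability": weights["reachability"] * 100,
    "Availability": weights["availability"] * 100,
    "Response": weights["response"] * 100,
    "CPU": weights["cpu"] * 100,
    "Memory": weights["mem"] * 100 / 2,      # split 50/50 with Swap when present
    "Swap": weights["mem"] * 100 / 2,
    "Interface": weights["int"] * 100 / 2,   # split 50/50 with Disk when present
    "Disk": weights["int"] * 100 / 2,
}
print(round(sum(max_scores.values()), 2))  # 100.0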

Interpreting Health and KPI Values

So you are looking at the main NMIS dashboard and you see that a node has a Health score of 92.2%, as in the example below. There is also a red arrow beside it, the result of the longstanding NMIS auto-baselining feature; this red arrow is pointing down, meaning the health now is lower than in the last period. So WHY is this node less healthy now than it was before? Clicking on the node will reveal the KPI scores and you can start looking at what is changing.

Looking at this KPI summary again, you can see the overall breakdown of the health metric represented in the KPI values, and you can see that the MEM KPI has a red arrow pointing down. The auto-baselining is showing us that the Memory score is lower than previously, with a score of 2.04 out of a possible 5.  If we look at the graph for the last 2 days, we can see that the average value for the MEM KPI is 2.28%, showing us that the memory utilisation has increased a little.
If you want to know WHY the health on the front page is 92.2%, we can look at all the KPI values, such as the Disk KPI of 10.50/15, the CPU KPI of 19.98/20 and the Swap KPI of 4.75/5, and take 100% and subtract the remainders:
KPI Item | KPI Score | Remainder Calculation | Health Remainder
Reachability | 10/10 | 10 – 10 | 0
Availability | 10/10 | 10 – 10 | 0
Response | 20/20 | 20 – 20 | 0
CPU | 19.98/20 | 20 – 19.98 | 0.02
Memory | 2.04/5 | 5 – 2.04 | 2.96
Swap | 4.75/5 | 5 – 4.75 | 0.25
Interface | 15/15 | 15 – 15 | 0
Disk | 10.5/15 | 15 – 10.5 | 4.5

Adding together the Health Remainder results and subtracting from 100 gives us: 100 – (0.02 + 2.96 + 0.25 + 4.5) = 92.27%

The difference between this result and the displayed numbers is rounding precision.
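The same calculation as a Python sketch, using the KPI scores from the table above:

# Reproducing the 92.27% health figure from the KPI scores above.
kpi_scores = {"Reachability": (10, 10), "Availability": (10, 10), "Response": (20, 20),
              "CPU": (19.98, 20), "Memory": (2.04, 5), "Swap": (4.75, 5),
              "Interface": (15, 15), "Disk": (10.5, 15)}

remainders = sum(maximum - score for score, maximum in kpi_scores.values())
health = 100 - remainders
print(round(health, 2))  # 92.27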

Conclusion

NMIS KPI Scores are a powerful way to get to the bottom of the health of your infrastructure; they will help you see where resources are being used and identify operational problems very quickly.


Open Source Software and Chilli Con Carne

I am a big proponent of Open Source Software and all the things it has delivered for individuals, organisations and society. Where would we be today if it wasn’t for GNU, Linux, Apache, MySQL (MariaDB), MongoDB, JavaScript, JQuery, Perl, Python and PHP, not to mention NMIS and Open-AudIT?

 

These and so many earlier Open Source projects were foundational and fundamental to the Internet as it grew, and have been the grandparents, uncles and aunts of the more recent explosion of Open Source projects based around new innovations which could only have been created because of this heritage.

The classic birth of an Open Source project is, “well I really like this (software|language|database) but it does not meet all my requirements, I think I will write one” or “I have this problem and nothing existing really solves this problem the way I need it to, I think I will write one” or even better “this open source (software|language|database) is so good, how could I help to make it better”.

Open Source isn’t all about writing code, people can contribute in all kinds of ways, including testing, documentation, project management, requirements analysis and so much more.

Ultimately for me, Open Source Software is the awesome result of people with diverse backgrounds, skills, experiences and probably most importantly requirements working together to create a solution which embodies the definition of synergy. The result is something which is more generally useful to more people, because of the diversity of this input.

Which brings me to Chilli Con Carne. I love Mexican food; ever since I first went to Montezuma’s Restaurant in Taringa, Queensland as a teenager I have loved it. From travelling to the USA and then living in California for a while, I learnt about the different types of Mexican food and how different Tex-Mex is to Mexican food. On more recent trips to Mexico I have learnt how awesome and diverse Mexican cuisine is.

But Chilli Con Carne is not Mexican, it is really Tex-Mex, and for me it also brings in some of the slow food movement ideas: cooking what you need, using local produce, in a traditional way.

I have been cooking Mexican food for years using meal kits, and finally I decided I could do better by doing something myself. So with the help of YouTube and Jamie Oliver, I found a great recipe, which I adapted to what I had, and it produced an awesome result.

I was talking to my Opmantek colleagues about it, and they contributed some “code changes” to make it better. MarkD suggested smoked paprika instead of paprika, which was an amazing improvement. MarkH sent his Chilli Con Carne recipe and I adopted the brown sugar and chocolate; this added a richness and smoothness to the dish.

Cooking is the ultimate in iterative development: cook, test, taste, improve, repeat. The current iteration of my Chilli Con Carne recipe is included below, and it keeps changing and developing as I get new ideas and input from others.

For me, Chilli Con Carne is just like Open Source Software, the product of synergy.

Open Sauce Chilli Con Carne Recipe

I would call this a mild recipe; my kids have eaten it no problem. Adding more chilli flakes or using hotter chillies would make it as hot as taste prefers.

This batch makes enough to feed 8 with some leftovers; I usually cook a big batch and freeze some for convenient meals later.

Ingredients

Mexi Spice Mix

  • 3 teaspoons smoked paprika
  • 3 teaspoons of cumin
  • 2 teaspoons of dried oregano
  • Pinch salt
  • Pinch pepper
  • Lemon zest
  • Juice from lemon

Vegetables Chopped Roughly

  • 2 rough cut onions
  • 1-2 red capsicums (bell peppers)
  • 1-2 yellow capsicum (bell peppers)
  • 1-2 green capsicum (bell peppers)

Chillies, cut up fine with the seeds removed

(Leave the seeds in if you want some more heat)

  • 1 large Poblano chilli
  • OR 2 Aussie green chilli
  • OR your favourite chillies

Other things to add

  • 2 tins tomatoes
  • 1/2 tin water, use water from beans
  • Coriander (cilantro)
  • 1 cinnamon stick
  • 2 tins black beans including water
  • 2 tins red kidney beans
  • 1 tablespoon light brown sugar (optional)
  • 60 grams unsweetened baking choc pieces (optional)
  • 4 teaspoons hot chilli flakes

Butcher

  • 1.4kg beef chunks

 

Preparation

Marinate the Meat

Make the Mexi Spice Mix, combine it with the meat and make sure it is really spread through all the meat. Leave to marinate in the fridge for as long as you have time for; overnight is good, an hour or so is OK.

Cooking

If you don’t have time to marinate, that is OK; just prepare the same way and go straight into the pan.

I cook using a large electric fry pan, which works well and I can leave it cooking overnight if I have time.

The intense part (10-15 mins)

  • Hi heat
  • Braised beef on the stove top
  • If not already marinated add in Mexi spices
  • Add in veggies, then tomatoes and black beans and chillis
  • Break cinnamon stick

The easy part (~60 mins)

If you want the chilli thicker, cook uncovered, if you want it thinner, keep it covered.

  • Reduce the heat and cook for 15 mins (level 9, 180°C)
  • Stir and cook for another 15 mins
  • Reduce heat to simmer and check after 15 mins
  • Reduce heat as needed and check every 15 mins

Extra flavour as needed

While cooking, check the flavour and add as taste dictates, but add in small doses; stir through and taste again after 10-15 mins.

  • 1 teaspoon hot chilli flakes
  • 1 teaspoon of cumin
  • 1 teaspoon smoked paprika

The relaxing part (as long as you have time for)

  • Cover the dish
  • Reduce heat to a low simmer, probably the lowest setting you have
  • Leave for as long as you can; 2 hours is good, 4 hours is better, leaving it overnight is awesome
  • Keep an eye on total moisture.

Soupy Tip

If it is too soupy, scoop off some of the liquid and keep it as a soup; you can add beans to it and cook it up a little longer, and there is so much flavour in that soup.

Serving

Serve as you like: in a bowl, covered in cheese with some sour cream and accompanied by corn chips is pretty good.

If you prefer a thicker chilli, serve in soft tacos or burrito wraps.

Enjoy.


Testing Open-AudIT’s Discovery System

I have been asked numerous times to list exactly what Open-AudIT can discover; answering “everything” causes people to doubt the ability of the product. It is almost as if we need to limit the discussion of the features to make it a believable product. That is not something I would ever like to do. So instead of talking the product down, I thought I would demonstrate how to test the power of Open-AudIT quickly and easily.

To accomplish this, I imported the Opmantek Virtual Machine into VirtualBox and accessed my install from my browser. (More on that is available here if needed).

[Image: Opmantek Virtual Machine Page]
From here I clicked the Open-AudIT Community button, and by clicking the “Audit this PC” button a shell script was downloaded.
[Image: Open-AudIT Login screen]

I ran this script from my terminal and it produced an .xml file containing the audit results. I logged into Open-AudIT (the default username/password is in the blue banner) and was able to import the results of the audit directly.

[Image: Open-AudIT Import Device Audit]

After this was accomplished I navigated to the device list and could see my computer, which may or may not be named after Shaq, in the list.

[Image: Open-AudIT Audit Results 1]

There is also a great indication of the software that is discovered through this process.

[Image: Open-AudIT Software Discovered]

That was a really quick way to test the functionality of Open-AudIT with your own machine. The best part is that you can do this from any machine if you are using VirtualBox or a similar provider. Test this on your device today and you will see the amazing benefits of using it in your organization.


Introducing opConfig’s Virtual Operator

Introduction

opConfig’s new Virtual Operator can be used to create jobs comprised of command sets that can be run on one or many nodes, to report on job results, and to troubleshoot and diagnose nodes that raise conditions through opConfig’s plug-in system. Quick Actions are templates the Virtual Operator uses to save you from constantly creating commonly run jobs. It also gives operators easy access to run commands on remote systems without giving them full access to the machines.

New Virtual Operator Job

To create a new Virtual Operator job, go to the Virtual Operator menu option and click New Virtual Operator Job. Select the nodes you want to run commands on; these are auto-completed from the list of currently activated nodes in opConfig. Next, select which command sets should be run on the nodes; these are auto-completed from all command sets opConfig has loaded. You can also use tags to select which command sets should be run. You can schedule the job to run now or at a later time; selecting later brings up a time-picker to schedule when the job shall run. A name is auto-generated from the data you have already entered, but this can be amended to anything you desire. The details section is a free-text field for keeping notes about the job. Clicking Schedule adds the job to opConfig’s queue and takes you to the report schedule.

[Image: opConfig New Virtual Operator Job]

Virtual Operator Report

A Virtual Operator Report is an aggregation of all data collected from your Virtual Operator job. On the left panel you have metadata about the job: how it was created, by whom, and when it is going to be run or when it was run. The commands panel is a paginated table of the successful commands which were run for the current job. If the command set uses a plug-in to show derived data or report conditions, these results are shown inline by clicking the expand icon in the derived column. If a condition has a tag, this can be used to help filter command sets when creating linked Virtual Operator jobs off these conditions. All operations for the current job are shown, to help diagnose connection or command issues that may have occurred.

[Image: opConfig Virtual Operator Result]

Virtual Operator Troubleshooting

If you have clicked the troubleshoot button from a report condition (see the green button in the screenshot above), you are taken to the new Virtual Operator job screen, but with a couple of key differences: the node has already been filled out and the command sets have been filtered down using a tag; in this example, we have three command sets with the tag disk. This can help to create workflows where conditions are tagged to limit what the operator can select for the next steps in the troubleshooting process. When this job is created, the parent job’s ID is also recorded and the parent job’s name is shown in the newly created report.

[Image: opConfig Create Linked Job]

Virtual Operator Results & Schedules

There are two final pages that are new: one shows all scheduled Virtual Operator jobs and one shows completed Virtual Operator jobs. Scheduled shows user-created running jobs and ones which are scheduled for the future. Results shows all the completed jobs, whether user-created or CLI-run.

[Image: opConfig Virtual Operator Results View]

Quick Actions

Quick actions are templates for new Virtual Operator jobs; we have shipped four sample jobs, but you can create your own. Clicking a quick action button will take you to a new Virtual Operator screen and fill out the specified fields. Create a new JSON file under
/usr/local/omk/conf/table_schemas/opConfig_action-elements.json
{
  "name": "IOS Hourly Collection",
  "description": "Hourly baseline collection for Cisco IOS.",
  "command_sets": ["IOS_DAILY"],
  "buttonLabel": "Collect Now",
  "buttonClass": "btn-primary"
}

 

Key | Datatype | About
name | string | Name which is shown at the top of the quick action element
description | string | Text shown under the quick action name, useful to describe what the action does
command_sets | array of strings | Command set keys which you wish to be run
nodes | array of strings | Names of nodes which you wish the command sets to be run against
buttonLabel | string | Text of the run button
buttonClass | string | CSS class applied to the button to colour it: btn-default, btn-primary (default), btn-success, btn-warning, btn-danger
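For example, a quick action that also pins the target nodes might look like the following. The name, command set and node names here are hypothetical; the keys are those documented in the table above.

{
  "name": "Core Switch Disk Check",
  "description": "Run the disk troubleshooting command sets against the core switches.",
  "command_sets": ["LINUX_DISK"],
  "nodes": ["core-sw-01", "core-sw-02"],
  "buttonLabel": "Check Disks",
  "buttonClass": "btn-warning"
}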

This is the final result: a dashboard that your organization could use today.

[Image: opConfig Virtual Operator Dashboard Full]