Auditing Your Network, Without Credentials.

Now that I have your attention, how can we possibly audit a network and find all the juicy details about the devices upon it, without having high level credentials to talk to those devices?

Well, it’s a bit of a mistruth. Or a caveat. Or whatever you want to call it. We definitely can do this, but for devices such as routers, printers and switches you will need a minimal set (read only, minimum access level) of SNMP credentials. Computers can be audited without any credentials being stored in Open-AudIT.

“How can you do that?”, “It won’t work on my network, my network and devices are locked down”. Yes, yes, your network is perfectly secure, I understand. In that case you are the perfect candidate to implement network discovery and auditing in this fashion.

So how do we do this? Well, as mentioned, first source a set of SNMP credentials that allow the minimal level of access. Do not worry about credentials for Windows, Linux or any other computer OS.

Next configure Open-AudIT to match devices based on IP address. Note that if you have devices that frequently change IP, you may need to enable this on a per discovery basis to avoid too many false positive device matches. Note that even this can be negated by using a collector per subnet to run discoveries.

Once you have your minimal SNMP credentials and have created and configured a subnet discovery, run it. Naturally devices without credentials will probably be classed as unclassified or even unknown. That is expected – no credentials, remember.

Next use your management software to deploy the audit scripts to the appropriate operating system for each device. For Linux machines (for example), you can use Puppet, Chef or Ansible to push the audit_linux.sh script. Windows domain users also have the option to deploy and run the script at domain login. Then create a cron job (or scheduled task under Windows) to run the audit script on a schedule of your choosing and submit the results to your Open-AudIT server.
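As an illustrative sketch, a nightly system cron entry on a Linux host might look like the following. The script path, server URL and key=value submission options shown here are assumptions, so check the audit script documentation for your Open-AudIT version:

```
# /etc/cron.d/open-audit -- illustrative only; path, URL and options are assumptions
# Run the Linux audit script at 2 a.m. daily and submit results to the Open-AudIT server.
0 2 * * * root /usr/local/open-audit/audit_linux.sh submit_online=y url=http://YOUR-SERVER/open-audit/index.php/input/devices
```

On Windows, the equivalent would be a scheduled task running the audit script with your server's submission URL.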

Then you should check for unclassified or unknown devices within Open-AudIT and work through them, determining what each one is and remediating as necessary.

As the audit script results are submitted, the unclassified or unknown devices should be matched and decrease in number.

Eventually you should have zero unclassified or unknown devices. You have just discovered and audited your network using only a minimal set of SNMP (read only) credentials. You still have all the data Open-AudIT usually collects, but no central store of credentials!

Obviously this will take a lot more effort than using Open-AudIT as designed, but in those cases where you just cannot store sensitive credentials in a central location, Open-AudIT still has you covered.

Uncategorized

SD-WAN: The New Trend in MSP Management

SD-WAN has been a trend throughout 2021 among organizations deploying the technology widely, chiefly managed service providers (MSPs), because it resolves the bandwidth limitations that are now becoming a problem. It is worth noting that this technology improves the user experience by making use of internet services that deliver download speeds higher than today's. "During 2020, Doyle Research expects several marginal SD-WAN providers to exit the market or be acquired." It is therefore essential that IT executives carefully evaluate viability when choosing an SD-WAN brand, since several options are available today, each with advantages and disadvantages that could determine the success or failure of their operation.

"SD-WAN use has grown across multiple infrastructure platforms (IaaS), including Amazon AWS, Microsoft Azure, Google Cloud and Oracle." Most SD-WAN providers take advantage of the closest local access point, allowing users to connect to the internet through their contracted internet service provider (ISP). This helps transfer information faster to the nearest point of presence, significantly reducing latency, which is critical for businesses such as convenience stores, pharmaceutical companies, restaurants and car dealerships, among others.

Since SD-WAN is gradually coming into use, it is important that internet service providers can monitor what happens inside these clouds. They should therefore evaluate how well it integrates with market-leading IT management tools, including connectivity from the manager down to the CPE, branch performance visibility, customizable APIs, performance optimization, georeferenced maps and end-to-end event visibility. This is where Opmantek helps deliver these capabilities for monitoring customer CPEs, and it also makes it possible to monitor devices that live in the cloud, even if they have no ICMP connectivity. Are you interested in this topic? If so, do not hesitate to contact us at latam@opmantek.com, where we have a solution tailored to your needs.

Why Is Opmantek the Most Secure Tool for Managing Your IT Network?

The hot IT topic these days has undoubtedly been the "hack" suffered by software vendor Solarwinds across its products, and no wonder it is the talk of the industry, since it has many of the United States' technology giants, users of this brand, on edge.

This huge hole in cybersecurity, whose full scope is still unknown, as are its authors, has already left a long list of victims, including Cisco, Nvidia, Belkin, VMWare and Intel (Microsoft itself among them).

On the government side, CISA itself is reported as affected, along with the DHS, the NNSA and the Department of Energy.

No one knows for certain what happened, as new details and information are still being uncovered to this day.

Cybersecurity experts agree that this attack was prepared meticulously and conscientiously against the nerve centres of these systems, demonstrating how fragile cybersecurity mechanisms are and how immature they remain in the face of well-planned threats.

It is reported that the "hackers'" entry point was Solarwinds' Orion system: they trojanized it, substituting an update with malware signed with Solarwinds' official seal. The compromise occurs when customers run the update to bring their systems up to date, which would be a normal, routine process for users of this platform.

This was not the work of a novice; it was an expert mind that knows the inner workings of the affected platform very well.

The "hack" cascaded, and one of the responses was to seize the domain used in the attack in order to stop it. But that is not the definitive solution, only one step towards plugging this huge leak. Many hours of analysis still lie ahead to understand how this could happen without anyone at Solarwinds noticing.

Many will wonder what the objective of this attack was, and many will assume it was aimed at the private and government entities that use this software. But could it be that it was really aimed at something as intangible as trust, and the trustworthiness of the world's critical infrastructure, through espionage that keeps the intelligence agencies working overtime? At the end of the day we can only speculate about the motives. What we can do as companies is always look for alternatives such as Opmantek, whose Linux-based system is well known for being secure and less susceptible to this kind of attack. Add to that Opmantek's robust, enterprise-class product design, and your organization becomes a difficult target.

What is The 95th Percentile, And Why Does It Matter?

What is it?

If you run a network, you’ll be interested in the 95th percentile and what it means for network usage and possible spikes in your network pipe. It is a good number to use for planning network usage.

The 95th percentile is a valuable statistic for measuring data throughput. It represents the sustained peak traffic on an interface while discounting transient spikes, making it different from both the average and the absolute maximum.

In general terms, the 95th percentile tells you that 95 per cent of the time your network usage will be below a particular amount. You can use this figure to calculate network billing for metered usage.

What information do you need to collect?

There are three things you'll need to know to perform a 95th percentile calculation:

  1. The percentile number. The 95th percentile basically says that 95 per cent of the time your usage is below this number, and the other 5 per cent of the time it exceeds that number.
  2. Data points. These are the pieces of data you have collected. In the case of network usage, they would be based on network use for a set period, perhaps a day, a week or a month. The data would be collected regularly, and then collated. The more data points you use, the more certain you can be of your final 95th percentile calculation.
  3. Data set size. This is the total number of data point values you have collected over a period of time. Statistically, the greater the size of the data set, the more reliable your calculation will be.

How is it calculated?

Once you have all your data points, it’s fairly easy to calculate the 95th percentile.

Here’s an example that might help to explain it better:

The data points that have been collected for network usage are 3, 2, 5, 1, 4.

The total number of entries K = 5.

To calculate the 95th percentile, multiply the number of entries (K) by 0.95:

0.95 x 5 = 4.75 (let’s call this result N).

Now arrange the data points in ascending order.

The list will now be 1, 2, 3, 4, 5.

By discarding the entries ranked above 4.75 (the highest 5 per cent of the data), you can see that the highest remaining value is the 95th percentile, which in this case is 4.

This means that you would expect 95 per cent of all data measurements to fall at or below 4.
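As a minimal Python sketch of the method just described (the same discard-the-top-5-per-cent, nearest-rank convention; note that some tools instead round the rank up, which would give 5 for this small sample):

```python
import math

def percentile_95(data_points):
    """Nearest-rank 95th percentile, discarding the highest 5% of entries."""
    ordered = sorted(data_points)    # ascending: 1, 2, 3, 4, 5
    k = len(ordered)                 # total number of entries, K = 5
    n = 0.95 * k                     # N = 4.75
    rank = max(1, math.floor(n))     # keep only the first 4 ranked entries
    return ordered[rank - 1]         # highest remaining value

print(percentile_95([3, 2, 5, 1, 4]))  # -> 4
```

With a realistic data set (thousands of regularly collected samples rather than five), the choice of rounding convention makes almost no difference to the result.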

Why use it?

The reason the 95th percentile standard measure is so useful for data throughput and network usage is that it provides an accurate picture of sustained usage, and therefore of cost. By comparing current traffic against your network's 95th percentile, it's easy to identify spikes in usage. If you are billing clients for network usage, it's common to rely on the 95th percentile as a basis for billing purposes.

For example, if you have a monthly billing period (and you have used data points collected from a monthly usage cycle), the 95th percentile allows a customer to have a short burst in traffic (roughly 36 hours or less over a 30-day month) without being charged for over-usage. This is known as burstable billing.
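The 36-hour figure follows directly from the arithmetic, assuming a 30-day billing month and the common 5-minute sampling interval:

```python
hours_in_month = 30 * 24                       # 720 hours in a 30-day month
burst_allowance_hours = 0.05 * hours_in_month  # top 5% discarded -> 36 hours

samples_per_month = hours_in_month * 12        # 5-minute samples: 8640 per month
discarded_samples = int(0.05 * samples_per_month)  # the 432 highest samples are ignored

print(burst_allowance_hours, discarded_samples)    # -> 36.0 432
```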

Burstable Billing

Burstable billing is a type of billing method used by cloud computing service providers where customers are charged for a minimum guaranteed level of resources (such as CPU, RAM, or bandwidth) with the option to “burst” beyond that level when needed.

Bursting allows customers to use additional resources beyond their guaranteed minimum without incurring additional charges until a certain limit is reached. This can be particularly useful for businesses with fluctuating resource usage patterns.

Additionally, bursting is a cost-effective solution for cloud computing service providers, allowing customers to pay for a minimum level of resources and only pay for additional resources when needed. This approach provides businesses with flexibility and can help them manage their IT budgets more effectively by avoiding unexpected expenses.

Bursting allows customers to easily scale their resources up or down as needed without worrying about cost implications, making it an attractive option for businesses with variable workloads.

Understanding Network Bandwidth Metering

Although 95th percentile bandwidth metering is a critical component of capacity planning, it’s not the only thing to consider. When projecting bandwidth consumption, it’s essential to understand your network’s performance goals, which can be influenced by existing performance, requirements, and budgets. While it may seem straightforward, every network is unique and requires a different approach. However, some fundamental considerations can be useful in most cases.

Here are seven key factors to consider when planning for network bandwidth:

1. Physical network design

When planning for network bandwidth, considering the physical network design is crucial. This involves understanding the physical infrastructure, layout and locations of devices like switches, routers, and servers, which can be visualized with a network map and device inventory. Identifying potential bottlenecks, prioritizing upgrades or replacements, and optimizing device placement can improve network capacity and reduce latency.

2. Logical network design

To plan for network bandwidth, consider both physical and logical network design to optimize traffic flows and identify potential bottlenecks. Analyzing logical data flow through traffic analysis tools helps to identify the most traffic-generating devices, protocols, and applications, and rerouting traffic from overutilized to underutilized links can improve network performance.

3. Current network performance

Bandwidth metering is critical for network capacity planning, and the 95th percentile is an important metric to consider. It captures sustained peak usage by discounting the bandwidth used during the highest 5% of traffic intervals. Baseline metrics such as latency, jitter, packet errors, and packet loss are also crucial for projecting future bandwidth utilization and identifying congestion. High latency and jitter can affect real-time applications, while packet errors and packet loss indicate network issues that need to be addressed by upgrading hardware, rerouting traffic flows, or implementing QoS policies.

4. Types of network traffic

Not all network traffic is created equal, and understanding the different types of traffic on your network is essential for effective capacity planning. By analyzing the types of traffic and their respective importance levels, you can prioritize your efforts towards optimizing the network for the most critical traffic. For example, if your network has a lot of VoIP traffic, it is important to ensure low latency and minimal packet loss to avoid call drops or poor call quality. On the other hand, if email traffic dominates your network, ensuring high bandwidth and low latency might be less critical. Therefore, by having a detailed understanding of the types of traffic on your network, you can optimize your resources and efforts towards the most important areas.

5. SLAs and performance requirements

Service Level Agreements (SLAs) are critical for ensuring that your network meets the needs of your end-users or clients. When planning for network capacity, it is essential to quantify SLAs and other performance requirements to ensure that the network can support the necessary levels of performance. By having a clear understanding of these requirements, you can design the network to meet the specific needs of your users and clients.

Even if you don’t have predefined SLAs with external parties, it’s still important to establish minimum requirements to ensure that your organization’s needs are met. These minimum requirements can serve as a baseline for network performance and help ensure that all stakeholders are on the same page when issues arise. For example, you might establish minimum bandwidth and latency requirements to ensure that your network can support critical applications without performance degradation. Overall, establishing clear performance requirements is essential for effective capacity planning and ensuring that your network meets the needs of your organization.

6. Reducing user pains to boost productivity

When planning for network bandwidth, it’s important to consider the experience of the end-users. After all, the network’s purpose is to enable their productivity. Identifying and resolving any major pain points is critical to ensure that users can perform their tasks efficiently. For example, if teleconferences are experiencing regular connectivity issues, this could be a major pain point for users. Identifying the cause of the issue, such as server backup schedules, can help resolve the issue and boost productivity without any hardware changes.

7. Expected growth

It is crucial to consider expected growth or changes in network utilization over time. This estimation should include factors like the addition of new users or devices, changes in network traffic patterns, and new applications or services that will require network resources. By factoring in expected growth, you can ensure that your network will have enough capacity to support future needs and avoid potential bottlenecks or performance issues. It is important to include a cushion for this growth in the planning stages to account for any unexpected changes or surges in network usage.

By considering these factors, you can better plan for your network’s bandwidth needs, enabling you to optimize network performance, productivity, and end-user experience.

Conclusion

The 95th percentile is an important metric for measuring network usage and planning network capacity. It is used to calculate network billing for metered usage and allows for burstable billing, which is useful for businesses with fluctuating resource usage patterns. Understanding network bandwidth metering is crucial for capacity planning, and considering factors such as physical and logical network design, current network performance, and future growth projections can help optimize network performance and reduce latency. The 95th percentile provides an accurate picture of network usage and helps identify spikes in usage, making it a critical component of network capacity planning.

How To Thrive In A Post-Covid Era: 10 Predictions For Enterprise Network Infrastructures

An enterprise network serves as the foundation for reliably connecting users, devices and applications, providing access to data across local area networks and the cloud, as well as delivering crucial insight into analytics. But in the wake of a year that was no doubt shaped by COVID-19 and the disruption it brought to the industry, how have enterprise networks been impacted, and what are the requirements moving forward?

What were previously technology nice-to-haves and future infrastructure intentions are now swiftly becoming business imperatives.

In this blog, we’ll explore our top 10 predictions for network infrastructure in 2021.

1.   Cloud Application Delivery

The traditional office-based-model has no doubt permanently changed and flexible working arrangements brought forward by the pandemic will continue. A Boston Consulting study from last year found that 63% of employees want a hybrid model whereby they continue to work from home part of the time.

Organizations will further turn to the cloud for application delivery, investing in remote connectivity and new security functionality.

2.   Businesses Turn to Big Data and Analytics

The requirement for businesses to be agile, change and adapt is more prevalent than ever, and decision-makers need to identify trends and ultimately stay ahead of the curve through outcomes-based strategies.

Big data is becoming an imperative tool in every organization's arsenal, though it is of little value without the appropriate means to disseminate and analyse it.

We predict this will drive the recruitment of data professionals and, further, the simplification of data management through self-service tools accessible to non-data professionals.

“It’s really about democratizing analytics. It is really about getting insight in a fraction of the time with less skill than is possible today.” – Rita Sallam, vice president and analyst at Gartner.

3.   The Year of Mass Adoption for Cognitive / Artificial Intelligence

With big data, comes big responsibility and moreover – big processing requirements, which is where AI will be heavily recruited.

2021 will be the year of mass adoption for AI, as businesses of all levels have experienced a paradigm shift into a digital-first model. Corporate networks have been tested through remote working arrangements, uncovering major reliability issues and security threats. IT leaders are looking for set-and-forget solutions that automatically provide optimization and security, which is where software such as Opmantek's NMIS, opEvents, opConfig and Open-AudIT can assist.

“Opmantek software is a key system used by IT operations teams across all industries — it acts as the dashboard of a car and tells them how fast everything is going and lets them know when something is faulty. It even predicts future faults, and that’s a big part of the AI. The longer you run our software, the smarter it gets — it learns about your IT Infrastructure and starts to automatically manage it better and deliver better information to the IT operations team,” said Danny Maher, Chairman of Opmantek.

4.   Hybrid Clouds in High Demand

Agility, speed, security, scalability and compliance are all considerations for IT decision-makers.

Though there’s never a blanket, one-size-fits-all solution for every business use case, and so the demand for hybrid cloud environments will continue to grow. The traditional model of cloud providers is that of a one-stop shop. However, we predict that as demand grows, cloud market leaders will introduce greater interoperability and further allow users to introduce cloud tools across their existing on-campus networks. Collaboration between cloud providers may even be on the cards as users demand greater flexibility.

5.   Networking Virtualization

Network virtualization offers many benefits by automating and simplifying processes, including network configuration flexibility, improved control over segmentation, speed, increased security and cost savings.

According to research by Spiceworks, 30% of businesses currently use network virtualization technology — and an additional 14% plan to use it within the next 2 years.

6.   Unified Communication And Collaboration Tools Are Here To Stay

End-user adoption is often one of the greatest barriers for IT professionals looking to implement new software. However, seemingly overnight, employees were catapulted into a reality where unified communications as a service (UCaaS) was no longer just an occasional collaboration tool, but rather a necessity of employment.

We have changed our habits and the way in which we do business. Even as the workforce begins to transition back to office or hybrid office/work-from-home environments, there’s no doubt that UCaaS is here to stay. Providers will introduce new functionality and continue to diversify their offerings to accommodate hybrid working in 2021.

7.   WiFi Gets an Upgrade

Businesses and consumers alike want things faster, easier and more efficient, and WiFi is no exception. Enter WiFi 6E.

6E not only offers new airwaves for routers to use (the 6 GHz band), it also avoids overlap with existing WiFi signals.

One of the major benefits of 6E is a reduction in network congestion, specifically in areas where users are closely spaced. As the pandemic continues to unfold, rush hour and crowded spaces are less of an issue, so it may be a waiting game as to when in 2021 we realise 6E’s true potential.

8.   IoT (Internet of Things) – More than just Alexa

As digital transformation is on the rise, so is IoT and its use cases. A SecurityToday article forecasted that by 2021 there would be 35 billion IoT devices installed worldwide.

IoT is already revolutionizing the way key industries do business; however, healthcare will double down in 2021. Reduced access to face-to-face medical contact has accelerated the need for remote care, and according to Allied Market Research, the global internet of things in healthcare market is expected to reach $332.672 billion by 2027.

9.   A Focus on Cybersecurity

In light of recent high-profile cybersecurity attacks, which infiltrated private companies and state and federal organizations by inserting malicious code into trusted software, cybersecurity and secure network monitoring will be paramount.

If you have data or services of value, you need to protect it properly. Keith Sinclair – CTO & Co-founder of Opmantek says, “It is critical to business continuity and data security that you have security controls in your environment to mitigate risk.”

10.    Infrastructure Management Software Leveraged

Application demands are continuing to grow and networks must respond. Network professionals must find means of simplifying these increasingly complex systems and environments. Here’s where automated network management software will be leveraged.

Opmantek Software serves to augment a network engineering or system administration role. As well as emulating actions that network engineers take within a network management system, it can also perform advanced maintenance tasks, assist in the interpretation of network data and communicate effectively with other digital systems in order to categorise, resolve and escalate potential network issues.

For more information about Opmantek and the services we provide, get in touch. Our network engineers are available to chat through specific issues you may be facing within your own network environment.

Avoid Risk, Don’t Accept Being Hacked And Switch to Opmantek For Your New Network Management Solution

Opmantek, one of the world’s leading providers of Automated Network Management Software, has advised the Network Management Industry to lift its game. It’s evident to any CTO, IT Manager, Head of Network Operations or Network Engineer that anything with centralized access to the network, or that contains centralized information, is at risk from any actor, foreign or domestic. When your security is breached, the cost to your business will be high. At the very least, you’ll activate your security plan, work through your checklist and fix the problem; at worst, you’re done. You may lose all your customers, revenue will decline, goodwill will be lost, your reputation will be tarnished and you may get sued. In some countries, you may need to answer for privacy breaches. It is all so much worse if you’re a government agency.

 

As Danny Maher, Chairman of Opmantek, said of the Solarwinds Orion Hack, “Imagine being an MSP and having to shut down your business because of this.”

 

If you have data or services of value, you need to protect it properly.

 

The opportunities to harden UNIX and LINUX based systems are well known. These operating systems are secure and harder to attack. Furthermore, backed with a robust network design, secure perimeter, enforced processes and trained staff, you become a difficult target.

 

A lab environment for testing the rollout of any platform, patches, updates or the like is key to understanding nuances, new features, interoperability etc. Confirmation that it does everything the documentation says and nothing more before you deploy should be the standard operating procedure. Only roll out patches and new versions if they offer you something you need, or if it’s recommended by the trusted vendor.

 

Craig Nelson, CEO, says that “Many customers come to us from a Windows environment due to the concerns that they have over the security of their network management platform and how many ways it can be infiltrated. We’re seeing more SaaS customers come to us too for the same reason.”

 

Keith Sinclair says, “The benefits of using Linux is that you have control over everything. It is critical to business continuity and data security that you have security controls in your environment to mitigate risk.”

Book a Demo
