Multi-tenant, SaaS platforms the way of the future for security

Delivering security solutions to customers has progressively become more complex and inefficient for service providers. Because these solutions may comprise a range of point products from different vendors, they force the service provider – and customer – to manage multiple relationships and technologies.

Combining the scalability and flexibility of Software as a Service with the economies of scale of a multi-tenant environment can resolve these problems. A multi-tenant, SaaS security platform can strip out complexity and give service providers the ability to offer white-labelled solutions comprising world-class security technologies to customers.

At FirstWave, we provide a multi-tenant, SaaS platform that service providers can use to deliver solutions that protect enterprises from cyberattacks across the email, web and firewall vectors. Our CyberCision Platform orchestrates and provisions cloud-based SaaS solutions built on virtualised email and web security, as well as firewall products from leading vendors. These include Cisco, Palo Alto Networks, Fortinet and Trend Micro, and we aim to add more in future.

The platform – available on Amazon Web Services – can be accessed by service providers with no upfront costs for access or integration. It exposes APIs and information feeds that service providers can integrate into their order management, customer management, ticketing and subscription billing systems. All FirstWave infrastructure, management and security processes are certified to the ISO 27001 Information Security Management System and ISO 9001 Quality Management System standards.

From a single instance, the platform enables service providers to offer solutions to customers ranging from government agencies, financial institutions and multinationals down to two- or three-person startups.

Service providers can provision and activate solutions for customers within minutes and offer them packages of security policies. They can also manage all customers from a single pane of glass, and the customers themselves can have a single pane of glass view of their services and security policies. Our platform is carrier-grade and offers five nines service performance and strong hierarchical and role-based access controls.

Our platform is also compliant with the requirements of the General Data Protection Regulation, which protects the data and privacy of individuals in Europe.

With a world-class, multi-tenant, SaaS platform, service providers are now well positioned to help customers meet current and forthcoming security challenges.


3 Steps To Increase Your Automated Event Management

Recent advances in Operational Process Automation at Opmantek mean that our MSP customers can deliver exceptional value to their clients, exceeding their SLAs while becoming incredibly sticky.

Are you facing any of the challenges below?

  • Cost pressures as clients try to drive down prices.
  • Difficulty meeting your SLAs due to overworked technical teams.
  • Absolute reliance on one or two technicians to keep your clients happy.
  • Challenges in retaining level 3-4 technical resources.
  • Significant burdens in maintaining accreditation.
  • Managing increasingly complex client networks.
  • Retaining skills associated with client legacy networks.

Resolve these challenges with incredibly rapid ROI and amazingly low TCO

Opmantek has long believed that Operational Process Automation is one of the foundational pillars of a successful network management strategy. A key piece of this is ensuring that actions are undertaken in a consistent manner each time, with no variation from the standard protocol.

This will help you to:

  • Simplify the procedure
  • Reduce cost
  • Deliver consistent outcomes in line with your agreed SLAs

Through the use of “context sensitive event actions”, you can now replicate troubleshooting actions and escalation procedures dynamically.

Example Use Case

1. Issue with Cisco Interface Identified

Here’s the event log for the entire network. Our event management system automatically parses incidents on your client’s networks into Events.

[Screenshot: context-sensitive actions, step 1]
2. Context sensitive action bar initiated

Once a specific event has been identified, “Context Sensitive Actions” are displayed against the event, either running automatically or guiding your NOC team through the steps to remediate.

[Screenshot: context-sensitive actions, step 2]
3. Cisco remediation commands executed

The system automatically creates a ticket, pings the affected nodes and troubleshoots (TS) the Cisco interface. Once those actions conclude, the results are displayed on the event itself. The operator can then take further action or simply close out the ticket.

[Screenshot: context-sensitive actions, step 3]
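To make the flow above concrete, here is a rough Python sketch of what a context-sensitive event action can automate for an interface-down event. It is illustrative only: the helper functions and event fields are hypothetical stand-ins, not Opmantek APIs or configuration.

```python
"""Illustrative sketch of a context-sensitive event action (hypothetical helpers)."""
import subprocess


def create_ticket(event):
    # Placeholder: in practice this would call your ticketing system's API.
    print(f"ticket opened for {event['node']} / {event['element']}")
    return "TICKET-0001"


def ping_node(node):
    # Basic reachability check of the affected node (Linux ping syntax).
    result = subprocess.run(["ping", "-c", "3", node],
                            capture_output=True, text=True)
    return result.returncode == 0, result.stdout


def troubleshoot_interface(node, interface):
    # Placeholder: gather interface diagnostics here, e.g. via SSH or SNMP,
    # depending on your tooling.
    return f"diagnostics collected for {interface} on {node}"


def handle_interface_down(event):
    """Replicates the three-step flow above: open a ticket, ping the node,
    troubleshoot the interface, then attach the results to the event."""
    ticket = create_ticket(event)
    reachable, ping_output = ping_node(event["node"])
    diagnostics = troubleshoot_interface(event["node"], event["element"])
    event["actions"] = {"ticket": ticket, "reachable": reachable,
                        "ping": ping_output, "diagnostics": diagnostics}
    return event


if __name__ == "__main__":
    demo = {"node": "192.0.2.10", "element": "GigabitEthernet0/1",
            "event": "Interface Down"}
    print(handle_interface_down(demo)["actions"]["diagnostics"])
```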

All of this can happen without the NOC or your client knowing there was ever an issue. Save time, save money and increase your clients’ satisfaction. If you’re interested in taking advantage of these incredible capabilities, please reach out.


Auditing Your Network, Without Credentials.

Now that I have your attention, how can we possibly audit a network and find all the juicy details about the devices upon it, without having high level credentials to talk to those devices?

Well, it’s a bit of a mistruth. Or a caveat. Or whatever you want to call it. We definitely can do this, but for devices such as routers, printers and switches you will need a minimal set (read only, minimum access level) of SNMP credentials. Computers can be audited without any credentials being stored in Open-AudIT.

“How can you do that?”, “It won’t work on my network, my network and devices are locked down”. Yes, yes, your network is perfectly secure, I understand. In that case you are the perfect candidate to implement network discovery and auditing in this fashion.

So how do we do this? Well, as mentioned, first source a set of SNMP credentials that allow the minimal level of access. Do not worry about credentials for Windows, Linux or any other computer OS.

Next, configure Open-AudIT to match devices based on IP address. Note that if you have devices that frequently change IP, you may need to enable this on a per-discovery basis to avoid too many false positive device matches; even that concern can be avoided by using a collector per subnet to run discoveries.

Once you have your minimal SNMP credentials and have created and configured a subnet discovery, run it. Naturally devices without credentials will probably be classed as unclassified or even unknown. That is expected – no credentials, remember.

Next use your management software to deploy the audit scripts to the appropriate operating system for each device. For Linux machines (for example), you can use Puppet, Chef or Ansible to push the audit_linux.sh script. Windows domain users also have the option to deploy and run the script at domain login. Then create a cron job (or scheduled task under Windows) to run the audit script on a schedule of your choosing and submit the results to your Open-AudIT server.
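As an illustration of the scheduling step, here is a minimal Python wrapper that a nightly cron job (or Windows scheduled task) could invoke. It is a sketch only: the script path, the server URL and the url/submit_online arguments are assumptions; check the header of your audit_linux.sh for the parameters your Open-AudIT version actually accepts.

```python
#!/usr/bin/env python3
"""Minimal cron wrapper for the Open-AudIT Linux audit script.
A sketch only: the script path, server URL and script arguments are
assumptions -- check the header of audit_linux.sh for the parameters
your Open-AudIT version actually accepts."""
import logging
import subprocess
import sys

AUDIT_SCRIPT = "/usr/local/open-audit/audit_linux.sh"    # assumed install path
SERVER_URL = "http://openaudit.example.com/open-audit/"   # hypothetical server

logging.basicConfig(format="%(asctime)s %(levelname)s %(message)s",
                    level=logging.INFO)


def main() -> int:
    # Run the audit and let the script submit its own results to the server.
    result = subprocess.run(
        ["bash", AUDIT_SCRIPT, f"url={SERVER_URL}", "submit_online=y"],
        capture_output=True, text=True)
    if result.returncode != 0:
        logging.error("audit failed: %s", result.stderr.strip())
        return result.returncode
    logging.info("audit submitted successfully")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Scheduling is then a single crontab entry (or Windows scheduled task) that runs the wrapper at whatever interval suits you.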

Then check for unclassified or unknown devices within Open-AudIT and work through them, determining what each one is and remediating as necessary.

As the audit script results are submitted, the unclassified or unknown devices should be matched and decrease in number.

Eventually you should have zero unclassified or unknown devices. You have just discovered and audited your network using only a minimal set of SNMP (read only) credentials. You still have all the data Open-AudIT usually collects, but no central store of credentials!

Obviously this will take a lot more effort than using Open-AudIT as designed, but in those cases where you just cannot store sensitive credentials in a central location, Open-AudIT still has you covered.


SD-WAN: The New Trend in MSP Management

SD-WAN has been a major trend in 2021 among organisations that are deploying the technology widely, particularly managed service providers (MSPs), because it helps resolve the bandwidth limitations that are now becoming a real problem. It is worth mentioning that this technology also improves the user experience by taking advantage of internet services that deliver higher download speeds than before. “During 2020, Doyle Research expects several marginal SD-WAN providers to exit the market or be acquired.” It is therefore very important that IT executives carefully evaluate viability when choosing an SD-WAN brand, since several options are available today and each brings advantages and disadvantages that can determine the success or failure of the operation.

“SD-WAN usage has grown across multiple infrastructure (IaaS) platforms, such as Amazon AWS, Microsoft Azure, Google Cloud and Oracle.” Most SD-WAN providers take advantage of the nearest local point of access, which lets users connect to the internet through their contracted internet service provider (ISP). This allows information to be transferred more quickly to the nearest point of presence, significantly reducing latency, which is critical for businesses such as convenience stores, pharmaceutical companies, restaurants and car dealerships, among others.

Since SD-WAN adoption is still ramping up gradually, it is important that service providers can monitor what happens inside these clouds. They should therefore evaluate how easily an SD-WAN solution integrates with market-leading IT management tools, including connectivity from the manager to the CPE to observe branch performance, customisable APIs, performance optimisation, georeferenced maps and end-to-end event visibility. This is where Opmantek helps strengthen these capabilities for monitoring customers' CPEs, and it also makes it possible to monitor devices that live in the cloud, even if they have no ICMP connectivity. Are you interested in this topic? If so, do not hesitate to contact us at latam@opmantek.com, where we have a solution tailored to your needs.


Why Is Opmantek the Most Secure Tool for Managing Your IT Network?

The hot IT topic these days has without doubt been the hack suffered by software vendor SolarWinds in its products, and no wonder it is the talk of the industry: it has many of the United States' technology giants that use this brand on edge.

This huge hole in cybersecurity, whose full scope is still unknown and whose authors have not been identified, has already left a long list of victims, including Cisco, Nvidia, Belkin, VMWare and Intel (Microsoft itself is among them).

On the government side, CISA itself is reported as affected, along with the DHS, the NNSA and the Department of Energy.

Nobody knows for certain what happened, as new details and information are still coming to light to this day.

Cybersecurity experts agree that this attack was prepared meticulously and conscientiously against the nerve centres of these systems, demonstrating how fragile cybersecurity mechanisms are and how immature they remain in the face of well-planned threats.

The hackers' entry point is reported to have been SolarWinds' Orion system: they trojanised it, replacing an update with malware signed with SolarWinds' official seal. The compromise takes effect when customers run the update to bring their systems up to date, which would be a normal and routine process for users of this platform.

This was not the work of a novice; it was an expert mind that knows the inner workings of the affected platform very well.

The hack cascaded, and one of the responses was to seize the domain used in the attack in order to stop it, but that is not the definitive solution, only one step towards plugging this enormous leak. Many hours of analysis still lie ahead to understand how this could happen without anyone at SolarWinds noticing what was going on.

Many will wonder what the objective of this attack was, and many will assume it was aimed at the private and government entities that use this software. But could it have been directed at something as intangible as the trust placed in the world's critical infrastructure, through an espionage campaign that keeps intelligence agencies working overtime? At the end of the day we can only speculate about the motives, but what we can do as companies is always look for alternatives such as Opmantek: it is built on Linux, which is well known for being secure and less susceptible to this type of attack, and when you add Opmantek's robust, enterprise-class product design, your organisation becomes a difficult target for such attackers.


What is The 95th Percentile, And Why Does It Matter?

What is it?

If you run a network, you’ll be interested in the 95th percentile and what it means for network usage and possible spikes in your network pipe. It is a good number to use for planning network usage.

The 95th percentile is a valuable statistic for measuring data throughput. It approximates the peak traffic generated on an interface while discounting short transient spikes, which makes it different from both the average and the absolute maximum.

In general terms, the 95th percentile tells you that 95 per cent of the time your network usage will be below a particular amount. You can use this figure to calculate network billing for metered usage.

What information do you need to collect?

There are three things you'll need to know to perform a 95th percentile calculation:

  1. The percentile number. The 95th percentile basically says that 95 per cent of the time your usage is below this number, and the other 5 per cent of the time it exceeds that number.
  2. Data points. These are the pieces of data you have collected. In the case of network usage, they would be based on network use for a set period, perhaps a day, a week or a month. The data would be collected regularly, and then collated (a sketch of deriving one data point from interface counters follows this list). The more data points you use, the more certain you can be of your final 95th percentile calculation.
  3. Data set size. This is the total number of data point values you have collected over the period. Statistically, the greater the size of the data set, the more reliable your calculation will be.
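For illustration, here is a minimal sketch of how a single data point might be derived, assuming each point is the average rate over one polling interval calculated from two readings of an interface's octet counter (how you read the counters, whether via SNMP, an API or flow data, depends on your tooling):

```python
def interval_mbps(prev_octets, curr_octets, interval_seconds):
    """Average throughput over one polling interval, in megabits per second,
    from two readings of an interface's octet counter.
    Counter wraps and resets are ignored in this sketch."""
    delta_bits = (curr_octets - prev_octets) * 8
    return delta_bits / interval_seconds / 1_000_000

# Two readings taken 300 seconds (5 minutes) apart.
print(interval_mbps(1_250_000_000, 1_700_000_000, 300))  # -> 12.0
```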

How is it calculated?

Once you have all your data points, it’s fairly easy to calculate the 95th percentile.

Here’s an example that might help to explain it better:

The data points that have been collected for network usage are 3, 2, 5, 1, 4.

The total number of entries K = 5.

To calculate the 95th percentile, multiply the number of entries (K) by 0.95:

0.95 x 5 = 4.75 (let’s call this result N).

Now arrange the data points in ascending order.

The list will now be 1, 2, 3, 4, 5.

Discard the entries above position 4.75 in the sorted list (the highest 5 per cent of the data); the highest remaining entry is the 95th percentile, which in this case is 4.

This means that you would expect 95 per cent of all data measurements to fall at or below 4.
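Here is the same calculation as a small Python sketch, using the discard-the-top-5-per-cent approach described above (note that statistics libraries such as NumPy interpolate by default and can return a slightly different value):

```python
def percentile_95(samples):
    """95th percentile as described above: sort the samples, drop the top
    5 per cent, and return the highest remaining value."""
    if not samples:
        raise ValueError("no samples collected")
    ordered = sorted(samples)
    # Position of the sample that 95 per cent of values fall at or below.
    rank = max(1, int(0.95 * len(ordered)))
    return ordered[rank - 1]

# The worked example above: five usage samples.
print(percentile_95([3, 2, 5, 1, 4]))  # -> 4
```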

Why use it?

The 95th percentile is so useful for measuring data throughput and network usage because it provides an accurate picture of sustained utilisation, and therefore of cost. By knowing your network's 95th percentile value, it's easy to identify spikes in usage, and if you bill clients for network usage, the 95th percentile is a common basis for billing.

For example, if you have a monthly billing period (and your data points were collected over that monthly usage cycle), the 95th percentile allows a customer to have a short burst in traffic without being charged for over-usage. In this case the burst allowance works out to 36 hours or less, because 5 per cent of a 30-day month is 0.05 x 30 x 24 = 36 hours. This is known as burstable billing.

Burstable Billing

Burstable billing is a type of billing method used by cloud computing service providers where customers are charged for a minimum guaranteed level of resources (such as CPU, RAM, or bandwidth) with the option to “burst” beyond that level when needed.

Bursting allows customers to use additional resources beyond their guaranteed minimum without incurring additional charges until a certain limit is reached. This can be particularly useful for businesses with fluctuating resource usage patterns.

Additionally, bursting is a cost-effective approach: customers pay for a minimum level of resources and only pay for additional resources when they need them. This provides businesses with flexibility and can help them manage their IT budgets more effectively by avoiding unexpected expenses.

Bursting allows customers to easily scale their resources up or down as needed without worrying about cost implications, making it an attractive option for businesses with variable workloads.
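As a simple illustration of how such a bill might be computed (the commitment, rates and the 140 Mbps figure below are made-up numbers, not anyone's price list), the provider charges a flat fee for the committed rate plus an overage fee on whatever the measured 95th percentile exceeds it by:

```python
def burstable_invoice(committed_mbps, p95_mbps, base_fee, overage_per_mbps):
    """Hypothetical burstable bill: a flat fee for the committed rate plus a
    per-Mbps charge for any 95th-percentile usage above it. Traffic spikes
    that fall within the discarded top 5 per cent of samples are never billed."""
    overage = max(0.0, p95_mbps - committed_mbps)
    return base_fee + overage * overage_per_mbps

# Example: 100 Mbps commitment, measured 95th percentile of 140 Mbps.
print(burstable_invoice(100, 140, base_fee=500.0, overage_per_mbps=8.0))  # -> 820.0
```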

Understanding Network Bandwidth Metering

Although 95th percentile bandwidth metering is a critical component of capacity planning, it’s not the only thing to consider. When projecting bandwidth consumption, it’s essential to understand your network’s performance goals, which can be influenced by existing performance, requirements, and budgets. While it may seem straightforward, every network is unique and requires a different approach. However, some fundamental considerations can be useful in most cases.

Here are seven key factors to consider when planning for network bandwidth:

1. Physical network design

When planning for network bandwidth, considering the physical network design is crucial. This involves understanding the physical infrastructure, layout and locations of devices like switches, routers, and servers, which can be visualized with a network map and device inventory. Identifying potential bottlenecks, prioritizing upgrades or replacements, and optimizing device placement can improve network capacity and reduce latency.

2. Logical network design

To plan for network bandwidth, consider both physical and logical network design to optimize traffic flows and identify potential bottlenecks. Analyzing logical data flow through traffic analysis tools helps to identify the most traffic-generating devices, protocols, and applications, and rerouting traffic from overutilized to underutilized links can improve network performance.

3. Current network performance

Bandwidth metering is critical for network capacity planning, and the 95th percentile is an important metric to consider. It helps to identify sustained peak usage by discounting the highest 5% of traffic intervals. Baseline metrics such as latency, jitter, packet errors, and packet loss are also crucial for projecting future bandwidth utilization and identifying congestion. High latency and jitter can affect real-time applications, while packet errors and packet loss indicate network issues that need to be addressed by upgrading hardware, rerouting traffic flows, or implementing QoS policies.

4. Types of network traffic

Not all network traffic is created equal, and understanding the different types of traffic on your network is essential for effective capacity planning. By analyzing the types of traffic and their respective importance levels, you can prioritize your efforts towards optimizing the network for the most critical traffic. For example, if your network has a lot of VoIP traffic, it is important to ensure low latency and minimal packet loss to avoid call drops or poor call quality. On the other hand, if email traffic dominates your network, ensuring high bandwidth and low latency might be less critical. Therefore, by having a detailed understanding of the types of traffic on your network, you can optimize your resources and efforts towards the most important areas.

5. SLAs and performance requirements

Service Level Agreements (SLAs) are critical for ensuring that your network meets the needs of your end-users or clients. When planning for network capacity, it is essential to quantify SLAs and other performance requirements to ensure that the network can support the necessary levels of performance. By having a clear understanding of these requirements, you can design the network to meet the specific needs of your users and clients.

Even if you don’t have predefined SLAs with external parties, it’s still important to establish minimum requirements to ensure that your organization’s needs are met. These minimum requirements can serve as a baseline for network performance and help ensure that all stakeholders are on the same page when issues arise. For example, you might establish minimum bandwidth and latency requirements to ensure that your network can support critical applications without performance degradation. Overall, establishing clear performance requirements is essential for effective capacity planning and ensuring that your network meets the needs of your organization.

6. Reducing user pains to boost productivity

When planning for network bandwidth, it’s important to consider the experience of the end-users. After all, the network’s purpose is to enable their productivity. Identifying and resolving any major pain points is critical to ensure that users can perform their tasks efficiently. For example, if teleconferences are experiencing regular connectivity issues, this could be a major pain point for users. Identifying the cause of the issue, such as server backup schedules, can help resolve the issue and boost productivity without any hardware changes.

7. Expected growth

It is crucial to consider expected growth or changes in network utilization over time. This estimation should include factors like the addition of new users or devices, changes in network traffic patterns, and new applications or services that will require network resources. By factoring in expected growth, you can ensure that your network will have enough capacity to support future needs and avoid potential bottlenecks or performance issues. It is important to include a cushion for this growth in the planning stages to account for any unexpected changes or surges in network usage.

By considering these factors, you can better plan for your network’s bandwidth needs, enabling you to optimize network performance, productivity, and end-user experience.

Conclusion

The 95th percentile is an important metric for measuring network usage and planning network capacity. It is used to calculate network billing for metered usage and allows for burstable billing, which is useful for businesses with fluctuating resource usage patterns. Understanding network bandwidth metering is crucial for capacity planning, and considering factors such as physical and logical network design, current network performance, and future growth projections can help optimize network performance and reduce latency. The 95th percentile provides an accurate picture of network usage and helps identify spikes in usage, making it a critical component of network capacity planning.
