Why IP Address Management Is Important

Whether you’re a small organization or an enterprise, efficient management of IP addresses can be the difference between a functional network and an inaccessible service.

Increasing network complexity, growing device numbers, cloud computing, IoT and BYOD continue to heighten the importance of managing your IP address space.

Relying on manual record keeping for network connectivity and core business functions can prove risky, even for the most organized of spreadsheets.

What’s Needed for an Efficient IP Address Management Strategy

Accuracy

Accurate IP delegation and record keeping, ensuring no conflict or associated service outages.
opAddress can allocate and track IP addresses dynamically. Search, view and manage address information, ensuring a critical information baseline is established.
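
To make the conflict-avoidance point concrete, here is a minimal sketch in Python using only the standard ipaddress module. It illustrates the kind of check an IPAM performs before handing out an address; it is not opAddress’s actual implementation, and the subnet and address values are made up.

```python
import ipaddress

def allocate(subnet: str, requested: str, allocated: set) -> str:
    """Hand out the requested IP only if it is inside the subnet and not already recorded."""
    net = ipaddress.ip_network(subnet)
    ip = ipaddress.ip_address(requested)
    if ip not in net:
        raise ValueError(f"{ip} is outside {net}")
    if str(ip) in allocated:
        raise ValueError(f"{ip} is already allocated: conflict")
    allocated.add(str(ip))
    return str(ip)

# Example: a /24 with two addresses already on record.
in_use = {"10.0.1.10", "10.0.1.11"}
print(allocate("10.0.1.0/24", "10.0.1.12", in_use))   # succeeds
# allocate("10.0.1.0/24", "10.0.1.10", in_use)        # would raise: conflict
```

Checking every request against an authoritative record before an address reaches DHCP or a static configuration is what keeps duplicate-address outages off the network.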

Simplicity

An easy-to-use system that minimizes data entry, making the process for IT teams faster, more efficient and less tedious.
With powerful out-of-the-box capabilities, opAddress requires little or no configuration. Automatically discover the network addressing of production networks and quickly edit or reallocate addresses as needed.

Security

Accurate, up-to-date data to help identify new devices and ensure only those authorized are on your network.
New data is captured and recorded by opAddress every thirty minutes.
Gain full visibility over IP addresses by device and analyze historical information.

Scalability

Future proofing and capacity planning to accommodate increasing device numbers and network complexities.
opAddress is extensible to grow with your business. Handle complex environments such as multiple tenancies, subdomains and overlapping address spaces with ease.
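
For the overlapping-address-space case specifically, the sketch below (again generic Python, not opAddress code, with made-up tenant names) shows why ranges have to be scoped per tenant: two tenants can legitimately reuse the same private subnet, and a flat inventory would report it as a clash.

```python
import ipaddress
from itertools import combinations

# Hypothetical inventory: tenant -> subnets. 10.0.0.0/24 deliberately appears twice.
tenants = {
    "tenant-a": ["10.0.0.0/24", "192.168.10.0/24"],
    "tenant-b": ["10.0.0.0/24", "172.16.0.0/22"],
}

ranges = [(t, ipaddress.ip_network(s)) for t, nets in tenants.items() for s in nets]
for (t1, n1), (t2, n2) in combinations(ranges, 2):
    if t1 != t2 and n1.overlaps(n2):
        print(f"{t1}:{n1} overlaps {t2}:{n2} -- keep these in separate, tenant-scoped spaces")
```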

5 Mistakes Evaluating NMS You Need To Avoid

So, your boss has just set up a blend of different software products or a SaaS product to take care of the network monitoring. Did your boss really do you a favour or just add to your headache? Has the situation truly improved, or do you just have more unresolved problems?

These are the five most common complaints we hear and solve on day one out of the box.

1. Too Many Alerts.

This is probably the most common problem with monitoring tools. Everything is turned on, either out of the box or at the administrator’s choosing, and organizations must rely on the logs to find the information they need. The fear of missing something is understandable, but setting up alerts should be a thoughtful process, standardized across your team. Well-considered integrations with other tools such as email, SMS and ticketing systems are essential, but if you push junk into them, it will be ignored.
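
As a generic illustration of that kind of deliberate alert policy (this is not a feature of any particular product, and the thresholds are arbitrary), a simple gate in front of email, SMS or ticketing might require a minimum severity and suppress repeats of the same node/event pair within a window:

```python
import time

SEVERITY_ORDER = {"info": 0, "warning": 1, "critical": 2}
NOTIFY_AT = "critical"     # team-agreed minimum severity for a notification
SUPPRESS_SECONDS = 900     # do not re-notify the same node/event within 15 minutes

_last_sent = {}

def should_notify(node: str, event: str, severity: str) -> bool:
    """Pass only alerts that clear the severity bar and are not recent repeats."""
    if SEVERITY_ORDER.get(severity, 0) < SEVERITY_ORDER[NOTIFY_AT]:
        return False            # below threshold: log it, don't page anyone
    key = (node, event)
    now = time.time()
    if now - _last_sent.get(key, 0.0) < SUPPRESS_SECONDS:
        return False            # duplicate within the suppression window
    _last_sent[key] = now
    return True

print(should_notify("router-1", "linkDown", "critical"))  # True: first occurrence
print(should_notify("router-1", "linkDown", "critical"))  # False: suppressed repeat
print(should_notify("router-1", "linkDown", "warning"))   # False: below threshold
```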

2. The monitoring tool is a resource hog with a slow database.

Many popular monitoring tools are built on Microsoft technology using multiple on-premises servers. Scaling usually means building a replica of your multi-server setup and paying additional software licensing costs (Microsoft Server, SQL and the monitoring tool) every time you add a server. Then there’s the ongoing operational management of all those servers. With so much data constantly being processed, the user experience is slow and frustrating.

3. One size does not fit all / no access to the API.

Many popular tools are now built in the cloud, and you do not own your data. Your data may be rolled up or removed, or you may only have access to specific periods of it, which is no good for longer-term trending or baseline troubleshooting. You need complete API access to your data so you can integrate it into your business operations.
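
For illustration only, and assuming a purely hypothetical REST endpoint, API key and parameter names (no specific vendor’s API is implied), pulling your own raw history for long-term trending might look like this:

```python
import requests

BASE_URL = "https://monitoring.example.com/api/v2"   # hypothetical endpoint
API_KEY = "REPLACE_ME"                                # hypothetical credential

def fetch_interface_history(node: str, days: int = 365) -> list:
    """Fetch raw, un-rolled-up interface metrics so trending is not limited to recent data."""
    resp = requests.get(
        f"{BASE_URL}/nodes/{node}/metrics",
        params={"metric": "ifOctets", "period_days": days},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# The result can feed your own capacity-planning or baselining jobs.
# history = fetch_interface_history("core-router-1")
```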

4. Security.

Supply chain attacks are becoming more frequent. We all know what happened this year, with many telecommunications carriers, managed service providers, internet service providers and the US Federal Government forced to turn off their monitoring tools. While patches were developed to work around the issue, the depth of what the hackers got is still not well understood. I feel for MSPs as their SLAs are destroyed. Hopefully, those force majeure clauses get interpreted favourably.

With an on-premises platform, you control it 100%. Complete control ensures that the product works within your security parameters.

5. Automation.

If you have installed many different tools, setting up automation between them is extremely difficult. Furthermore, that automation breaks whenever you need to update or reconfigure one of the underlying applications for other reasons (e.g. security). A SaaS solution may offer various actions that it classes as automation; however, these lack the flexibility you need for your environment.

Here at Opmantek, we have a strong belief that monitoring tools should be customizable. We believe this helps the overall flexibility, extensibility, scalability and security posture of your organization, ensuring that in the end you get what you’re really after: less downtime!

Solve these five problems and more: ask us how.

How to Make Cybersecurity Part of your Business Culture

Most businesses and government organisations are now aware that cybersecurity is not merely the responsibility of IT. They recognise that everyone is accountable for protecting systems, people and information from attack. They also know that many attacks occur from within rather than from external parties. So how can they make cybersecurity part of their business culture?

Education is key. An education program should complement and explain robust security policies that detail the assets a business or organisation needs to protect, the threats to those assets and the rules and controls for protecting them.

An effective program makes every worker acutely aware of cyber threats, including emails or text messages designed to trick them into providing personal or financial information, entice them to click links to websites or open attachments containing malware, or deceive them into paying fake invoices that purport to come from a senior executive.

It teaches them how to recognise common threats, the actions to take and the people to inform when they are targeted, and the steps to follow if they do fall victim to a malicious individual or software. In addition, the program should teach workers how to recognise and respond to poor or suspicious cybersecurity behaviour by a colleague.

Cybersecurity education also needs to extend to a business or government organisation’s senior leadership team, who should also visibly support its objectives and model appropriate behaviours. It should also encourage workers and managers to pass on lessons learned to friends and family to help them avoid being compromised by malicious cyber activities.

Perhaps most importantly, it is not good enough to run a cybersecurity education program once and consider it a box ticked. A business or government organisation should run programs regularly and update them as needed to account for changes in policies and the threat landscape. It should also provide ongoing information and direct people to resources such as the Australian Cyber Security Centre for assistance.

Cybersecurity policies and education programs also need to complement the effective use of proven, regularly updated security products to protect systems, people and information from cyber threats.

For more information, contact us at: sales@firstwavecloud.com

Modernising security and enabling digital transformation with zero-trust network access.

Keeping systems and information safe is an increasingly complex, high-stakes activity. Trusting individuals or systems by default may have catastrophic consequences if it leads to malicious parties gaining access to corporate networks or resources. These consequences may include service disruption and the loss or theft of sensitive information, which may in turn lead to reputational damage as customers and partners lose trust in an affected organisation. In addition, regulators may impose financial penalties if a breach results from a failure of systems or processes.

Unsurprisingly, businesses and government organisations are turning away from security models that trust individuals or systems by default. As TechTarget notes, these models are ill-equipped to handle data distributed across multiple locations, applications and cloud services. A zero-trust approach requires strict identity and device verification not just to get past the network perimeter, but to access internal systems and resources. By segmenting network components and systems and imposing access requirements for each segment – as well as using risk management analytics to identify suspicious activity – businesses and government organisations can respond effectively to modern security challenges.
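
As a minimal sketch of that per-segment idea (deliberately simplified, with made-up group and segment names, and not a complete zero-trust implementation), every request is checked against identity, MFA and device posture before a specific segment is reachable, and nothing is trusted merely for being inside the perimeter:

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    groups: set             # group memberships from the identity provider
    device_compliant: bool  # e.g. patched, encrypted, MDM-enrolled
    mfa_passed: bool
    segment: str            # the internal segment or application being accessed

# Hypothetical policy: which groups may reach which segment.
SEGMENT_POLICY = {
    "finance-db": {"finance", "dba"},
    "hr-portal": {"hr"},
}

def allow(req: Request) -> bool:
    """Deny by default; require identity, MFA and device checks for every segment."""
    allowed_groups = SEGMENT_POLICY.get(req.segment, set())
    return req.mfa_passed and req.device_compliant and bool(req.groups & allowed_groups)

print(allow(Request("alice", {"finance"}, True, True, "finance-db")))  # True
print(allow(Request("bob", {"finance"}, False, True, "finance-db")))   # False: device fails posture
```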

Zero-trust network access is continuing to gain traction in business and government – particularly as digital transformation projects take effect. According to a Gartner report, because digital transformation projects “require services APIs, data and processes to be accessible through multiple ecosystems anywhere, anytime, from any device over the internet, [they expand] the surface area for attackers to target.”

Gartner says zero-trust network access “provides adaptive, identity-aware, precision access” and “enables digital ecosystems to work without exposing services directly to the internet.”

The analyst firm predicts that, by 2022, 80% of new digital business applications opened up to ecosystem partners will be accessed through zero-trust network access, while by 2023, 60% of enterprises will phase out most of their remote access virtual private networks in favour of zero-trust network access.

Is your business or government organisation adopting zero-trust network access? What challenges and opportunities is this approach presenting? Please let us know at sales@firstwavecloud.com.

Using a Commercial And Open Source Approach To Tackle Network Assurance

Join Keith Sinclair on the Passionate About OSS Podcast as he talks about how using open source software is a key building block for running your networks. The podcast is also available on Anchor.fm, Spotify, Google Podcasts, RSS, Pocket Casts, Breaker and RadioPublic.

Show Notes

Have you noticed the rise in trust, but also the rise in sophistication, of open-source OSS/BSS in recent years? There are many open-source OSS/BSS tools out there. Some have been built as side-projects by communities that have day jobs, whilst others have many employed developers and contributors. Generally speaking, the latter are able to employ developers because they have a reliable revenue stream to support the wages.

Our guest on this episode, Keith Sinclair, has made the leap from side-project to thriving OSS/BSS vendor whilst retaining an open-source model. His product, NMIS, has been around since the 1990s, building on the legendary work of other open-source developers like Tobias Oetiker. NMIS has since become one of the flagship products for his company, Opmantek.

Keith and the team have succeeded in creating a commercial construct around their open-source roots, offering product support and value-add products. Keith retraces those steps, from the initial discussion that triggered the creation of NMIS, its evolution whilst he simultaneously worked at organisations like Cisco, Macquarie Bank and Anixter, through to the IP buy-out and formation of Opmantek, where he’s been CTO for over 10 years.

He also describes some of the core beliefs that have guided this journey, from open source itself to the importance of automation, scalability and refactoring. The whole conversation is underpinned by a clear passion for helping SysAdmins and Network Admins tackle network assurance challenges at service providers and enterprises alike. Having done these roles himself, he has a powerful empathy for what these people face each day and how tools can help improve their consistency and effectiveness.

For any further questions you may have, Keith can be found at: https://www.linkedin.com/in/kcsinclair

Disclaimer: All the views and opinions shared in this podcast, and others in the series, are solely those of our guest and do not reflect the opinions or beliefs of the organisations discussed.

opEvents Prevents Event Storms During A Snowstorm

I dropped into a quarterly business review that one of the Account Managers was doing with one of our customers last week. I like to do this from time to time to hear it for myself directly from the customer. It helps me understand the customers and gives me an opportunity to discuss our platform post-sale and integration.

This particular customer is a telecommunications carrier out of North America that runs a lot of wireless and fibre and is rapidly expanding.

The Head of Network Operations was on the call, and for the purposes of keeping the identity private, let’s call him Joe.

Joe talked about the snowstorms and how they impact their network and field services team. The way it works is that the NOC team gets an alert, does some diagnosis and decides what needs to happen next. Sending field service staff means the problem is something that cannot be fixed remotely. Sending the team out in bad weather to work on wireless equipment is hard on the people who do the work, but at the end of the day, to quote Joe, “we pride ourselves on great customer service.”

Before this customer had opEvents installed, they would have a high rate of field service calls during snowstorms that resulted in no fault found. This meant that the field services team was not being sent to fix real field-related problems. For any company with a field services team, you know how important it is to send your people to real problems.

The impact for the customer was that clearing all the events would take days, with over 50% being false reports.

During recent snowstorms, opEvents would handle the alerts and find the source problem. The NOC team could then send the field services team out in the snow to investigate and fix the problems. Joe said the level of accuracy in the alerts was fantastic, and the NOC and field services teams rebuilt their trust, confident that they were being sent to a real fault.
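
The sketch below is a generic, heavily simplified illustration of the root-cause idea described here, not opEvents’ actual correlation engine, and the node names and dependency map are invented: when an upstream node is down, alerts from everything behind it are folded into a single actionable event instead of flooding the NOC.

```python
# Hypothetical dependency map: node -> the upstream node it relies on.
UPSTREAM = {
    "tower-12": "backhaul-3",
    "tower-13": "backhaul-3",
    "cpe-901": "tower-12",
}

def correlate(down_nodes: set) -> dict:
    """Group down nodes under the highest upstream node that is itself down."""
    def root(node):
        parent = UPSTREAM.get(node)
        return root(parent) if parent in down_nodes else node

    storms = {}
    for node in sorted(down_nodes):
        storms.setdefault(root(node), []).append(node)
    return storms

# A backhaul failure during a storm: one actionable event, not four.
print(correlate({"backhaul-3", "tower-12", "tower-13", "cpe-901"}))
# {'backhaul-3': ['backhaul-3', 'cpe-901', 'tower-12', 'tower-13']}
```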

When Joe reviewed the impact the FirstWave platform had made, opEvents had reduced event storms to zero, deduplication was no longer a problem, and only real events were pushed to the team. Field service calls were reduced and the network was brought back to normal in half the time.

“We had a lot of competitors’ customers switch to us during the snowstorms. The amount of downtime we suffered was minimal as we were right on top of any faults, we knew where they were and their severity and deployed our field services team accurately. It puts us ahead of the market.”
