The first BalticNOG will take place at LITEXPO in Vilnius, Lithuania, on September 24–25, 2025.
BalticNOG 2025 will bring together around 400 network professionals, engineers, and decision-makers from across the Baltic States, Scandinavia, and Central Europe. This is a unique opportunity to connect with peers from industries including telecommunications, data centers, cloud providers, enterprise networking, academia, and cybersecurity.
Join us for two days of insightful discussions, technical workshops, and industry networking aimed at strengthening regional and global Internet infrastructure. Whether you’re an operator, researcher, or policymaker, BalticNOG 2025 is the place to be!
“Networking in the Dark”
Networks have a unique ability to determine who is communicating with whom, and even the topic of the communication. In using the network we are trusting the network not to betray us and leak this information to others. A decade ago it became apparent that this trust had been betrayed. The reaction from the protocol and application folk in the IETF was to view all forms of network monitoring as an attack on users’ privacy, and to apply encryption to as much of the protocol space as they could. A decade later, what does this look like? Has the application space succeeded in its objective to withhold all information from the network, and to strip networks down to an undistinguished commodity carrier role?
Computer network operators depend on optical transmission everywhere as it is what glues together our interconnected world. But most of the industry is running the same kinds of signals down the optical transceivers.
As part of my need to "Trust, but verify" I wanted to check my assumptions on how the business end of modern optical modules worked, so join me in an adventure of sending weird signals many kilometres, and maybe setting some records for the most wasteful bandwidth utilisation of optical spectrum in 2024!
In this talk we will cover the basics of optical networks, how they fit in with networking, some of the weird things pluggable optics do, the perhaps odd industry de facto standards, and bending the intended use cases of existing tech to make signals that would probably deeply confuse a modest signals intelligence agency.
This presentation investigates how close to the low Signal-to-Noise Ratio (SNR) threshold a link can operate while still maintaining a tolerable Bit Error Rate (BER) in 100G / 400G / 800G network links. Additionally, we account for factors such as temperature and cable length to predict the duration for which a reliable network connection can be sustained between transceivers. The analysis, based on data retrieved using a Flexbox, focuses on comparing the reliability of coherent (16QAM) and non-coherent (PAM4) transceivers, with a detailed discussion of the implications of these technologies for network performance.
Made by FLEXOPTIX Research - Gerhard and Thomas.
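As a rough illustration of the SNR/BER trade-off the abstract above refers to, here is a small sketch using textbook AWGN approximations for Gray-coded PAM4 and 16QAM. The formulas, SNR range, and output format are illustrative assumptions, not the FLEXOPTIX measurement methodology, which also accounts for FEC, temperature, and cable length.

```python
# Rough sketch: textbook AWGN approximations of pre-FEC BER vs SNR for
# Gray-coded PAM4 and 16QAM. Illustrative only; real transceivers add FEC,
# implementation penalties, and temperature/length effects.
import math

def q(x: float) -> float:
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def ber_pam4(snr_db: float) -> float:
    # 4-PAM, Gray mapping: BER ~ (3/4) * Q(sqrt(0.4 * SNR))
    snr = 10 ** (snr_db / 10)
    return 0.75 * q(math.sqrt(0.4 * snr))

def ber_16qam(snr_db: float) -> float:
    # 16-QAM, Gray mapping: BER ~ (3/4) * Q(sqrt(0.2 * SNR))
    snr = 10 ** (snr_db / 10)
    return 0.75 * q(math.sqrt(0.2 * snr))

if __name__ == "__main__":
    for snr_db in range(8, 25, 2):
        print(f"SNR {snr_db:2d} dB  PAM4 BER {ber_pam4(snr_db):.2e}  "
              f"16QAM BER {ber_16qam(snr_db):.2e}")
```

In this simplified per-symbol model 16QAM needs roughly 3 dB more SNR than PAM4 for the same pre-FEC BER, while carrying twice the bits per symbol; the talk's measured comparison of coherent and non-coherent modules is of course far richer than this.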
Discover powerful but lesser-known BGP features in FRRouting that can simplify operations, improve control, and boost performance. Whether you're an operator or developer, you'll walk away with practical tools and ideas you likely haven’t explored yet.
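For readers unfamiliar with the kind of features the talk hints at, here is a hedged FRR configuration sketch showing two of them: BGP dynamic neighbors (bgp listen range) and relaxed multipath across different neighbor ASes. The AS numbers, prefix, and peer-group name are made up for illustration, and the talk's actual feature list may well differ.

```
! Hedged sketch of two lesser-known FRR BGP features.
! Names, prefixes, and AS numbers are illustrative, not taken from the talk.
router bgp 64500
 ! ECMP across paths learned from different neighbor ASes
 bgp bestpath as-path multipath-relax
 ! Dynamic neighbors: accept sessions from a whole range without per-peer config
 neighbor FABRIC peer-group
 neighbor FABRIC remote-as external
 bgp listen range 192.0.2.0/24 peer-group FABRIC
 address-family ipv4 unicast
  neighbor FABRIC activate
 exit-address-family
```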
In an era of geopolitical tension, climate disruption, and rapidly increasing bandwidth demands, network resilience is no longer optional—it's critical infrastructure. This session examines how Europe is digitally connected to the world: the submarine cables and terrestrial routes that form its backbone, the hidden vulnerabilities exposed by recent incidents, and the strategic steps needed to secure global connectivity for the future. From the Red Sea cable disruptions to new risks driven by climate change and supply chain constraints, we’ll explore what’s going wrong—and what can be done better. Attendees will gain practical insights and leave with a renewed focus on smarter, more collaborative infrastructure planning.
Session Topics:
- How Europe connects to the world
- Lessons from recent disruptions (2024–2025)
- Emerging risk factors shaping network resilience
  - Climate change and extreme weather events
  - Geopolitical instability and regulatory fragmentation
  - Post-pandemic supply chain pressures and infrastructure constraints
- What operators can do to improve resilience
  - Rethinking route planning: move beyond redundancy to true diversification
  - Key questions to assess supplier resilience and route transparency
  - Exploring terrestrial alternatives and investing in underdeveloped regions
As networks grow in complexity and scale, manual operations are no longer sustainable. Yet, many organizations struggle to go beyond isolated Command Line Interface (CLI), Graphical User Interface (GUI), or siloed tooling. This talk presents The Network Automation Blueprint, a structured, vendor-neutral approach to designing and executing scalable network automation.
We begin by addressing the foundational questions: Why automate your network? And what can you automate? We highlight real-world examples that demonstrate how even small automation efforts can yield big wins in efficiency, reliability, and agility. From automating routine configurations to streamlining change validation and compliance, the scope of what’s possible is broader than many realize.
However, before automation can be effective, certain prerequisites must be in place. These include both technical and organizational factors. Technical prerequisites include patternization, standardization, and well-defined processes. Organizational prerequisites include leadership buy-in and support, skill development, and adaptable processes. We explore the critical role of process maturity, showing how defined standards and change control mechanisms act as enablers of safe automation.
A common challenge in network automation efforts is the dilemma between ad-hoc "just do it" automation and intentional, design-driven architecture. This talk contrasts both approaches and provides a balanced view: how to encourage grassroots innovation while converging on a maintainable system.
The blueprint outlines the key areas where network automation can be applied to drive improvement, including configuration generation and deployment; IP, BGP, and VLAN allocation; fault remediation; and more.
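To make the configuration-generation area concrete, here is a minimal, hypothetical sketch that renders an interface configuration from a small structured data model with Jinja2. The data model, template, and device syntax are invented for illustration and are not taken from the blueprint itself.

```python
# Minimal sketch of config generation from structured data (illustrative only).
# In a real pipeline the data would come from a Source of Truth and the result
# would be pushed via an API or configuration management tool.
from jinja2 import Template

INTERFACES = [
    {"name": "eth0", "description": "uplink-to-core", "ipv4": "192.0.2.1/31"},
    {"name": "eth1", "description": "peering-lan", "ipv4": "203.0.113.10/24"},
]

TEMPLATE = Template(
    "{% for intf in interfaces %}"
    "interface {{ intf.name }}\n"
    " description {{ intf.description }}\n"
    " ip address {{ intf.ipv4 }}\n"
    "!\n"
    "{% endfor %}"
)

print(TEMPLATE.render(interfaces=INTERFACES))
```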
Attendees will leave with a clear mental model of how to structure their automation journey and tackle network automation adoption challenges. The talk offers clear guidance for those just starting out, while providing equally valuable insights for experienced practitioners looking to refine or scale their efforts.
Artificial Intelligence (AI) sets new demands on current data center infrastructure, which is undergoing a paradigm shift to cope with the demands posed by AI clusters supporting various use cases. AI clusters can be built in various sizes based on specific needs and workloads. This presentation will explore the evolution of data center network infrastructure around the CLOS IP Fabric, the new requirements posed by building infrastructure for AI data center clusters, and the various types of workloads these would host. The basic infrastructure needed comprises a high-speed AI data center network, storage, and compute. Building Large Language Models (LLMs) requires centralized data with demand for large bandwidth, while inferencing is becoming more commonplace, driving a transition to a more distributed computing environment where traffic moves device-to-device and edge-to-core-to-cloud.
TBU
In this presentation, Pavel will share practical, real-world field experience with using BGP Flow Spec for DDoS mitigation, covering the level of vendor support and known implementation issues.
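As background for readers new to Flow Spec, the sketch below models a mitigation rule as a set of match components plus a traffic-rate action, loosely following RFC 8955 terminology. The rule contents, field selection, and class are invented for illustration and are not the configurations Pavel will present.

```python
# Illustrative model of a BGP Flow Spec mitigation rule (RFC 8955 terms).
# This is not a FlowSpec encoder; it only shows which knobs a rule exposes.
from dataclasses import dataclass, field

@dataclass
class FlowSpecRule:
    dst_prefix: str                  # Type 1: destination prefix (the victim)
    src_prefix: str | None = None    # Type 2: source prefix
    ip_protocol: list[int] = field(default_factory=list)  # Type 3: IP protocol
    src_ports: list[int] = field(default_factory=list)    # Type 6: source ports
    rate_limit_bps: float = 0.0      # traffic-rate extended community; 0 = discard

# Example: drop suspected DNS amplification traffic towards a single host.
rule = FlowSpecRule(
    dst_prefix="192.0.2.10/32",
    ip_protocol=[17],       # UDP
    src_ports=[53],
    rate_limit_bps=0.0,     # traffic-rate 0 means discard
)
print(rule)
```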
What’s new on PeeringDB?
Based on practical experience in providing telecommunications services during prolonged power outages, a solution was developed for comprehensive management of power supply to communication nodes. The proposed device enables real-time monitoring and control of power supply units, supports various types of batteries, generators, and solar panels, and uses adaptive charging algorithms that take into account parameters such as voltage, temperature, internal resistance, and more. The device integrates with cloud environments and CRM platforms, is easy to use and scale, and can be implemented in any critical infrastructure sector.
I would like to talk about how the network is managed at Vinted:
- configuration as code and automatic tests (see the sketch after this list)
- basic network functionality to improve HA and throughput
- service integration into the network (Kubernetes, NAT, etc.)
- hardware and NOS overview
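As a taste of the "configuration as code and automatic tests" item above, here is a small, hypothetical pre-change test in pytest style. The rendered config, the assertions, and the helper name are invented and are not Vinted's actual tooling.

```python
# Hypothetical pre-change tests for generated configuration (pytest style).
# Not Vinted's tooling; just the shape such tests often take.

def render_config() -> str:
    # In a real pipeline this would render templates from the Source of Truth.
    return (
        "interface eth0\n"
        " description uplink-to-core\n"
        " mtu 9214\n"
    )

def test_uplinks_use_jumbo_mtu():
    config = render_config()
    assert " mtu 9214" in config, "uplinks must be configured for jumbo frames"

def test_every_interface_has_description():
    config = render_config()
    blocks = [b for b in config.split("interface ") if b.strip()]
    assert all(" description " in b for b in blocks)
```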
As networks evolve with the adoption of edge cloud computing, artificial intelligence, and specialized processors such as Data Processing Units (DPUs), IP resource management must keep pace with technological advancements. Traditional approaches to managing IP addresses and network configurations are increasingly inadequate in dynamic, high-performance environments where automation and scalability are key.
In this presentation, Juha Holkkola, Co-Founder and Chief Technologist at FusionLayer, will explore how emerging technologies are reshaping IP resource management. The talk will provide insights into:
By leveraging real-world case studies and industry trends, this session will provide network engineers and decision-makers with actionable strategies to future-proof their IP management frameworks in a rapidly advancing digital ecosystem.
Speaker Bio:
Juha Holkkola is a recognized expert in network infrastructure automation and IP resource management. As the co-founder of FusionLayer, he has spent over two decades helping enterprises, service providers, and internet operators optimize their network architectures. Juha has spoken at numerous industry events, including RIPE, NANOG, and various regional NOGs, where he shares insights on emerging networking, automation, and cybersecurity trends.
The five Regional Internet Registries (RIRs) provide the critical function of IP address resource delegation and registration. The accuracy of registration data directly impacts Internet operation, management, security, and optimization. In addition, the scarcity of IP addresses has brought into focus conflicts between RIR policy and IP registration ownership and use.
In this presentation we present WHEREIS, a measurement-based approach to geolocate delegated IPv4 and IPv6 prefixes at an RIR-region granularity and systematically study where addresses are used post-allocation and the extent to which registration information is accurate. We define a taxonomy of registration "geo-consistency" that compares a prefix's measured geolocation to the allocating RIR's coverage region as well as the registered organization's location. While in aggregate over 98% of the prefixes we examine are consistent with our geolocation inferences, there is substantial variation across RIRs, and we focus on AFRINIC as a case study. IPv6 registrations are no more consistent than IPv4, suggesting that structural, rather than technical, issues play an important role in allocations. We solicit additional information on inconsistent prefixes from network operators and IP leasing providers, and collaborate with three RIRs to obtain validation. We further show that the inconsistencies we discover manifest in three commercial geolocation databases. By improving the transparency around post-allocation prefix use, we hope to improve applications that use IP registration data and inform ongoing discussions over in-region address use and policy.
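To make the notion of registration geo-consistency concrete, here is a toy classifier in the spirit of the taxonomy described above. The country-to-RIR mapping is deliberately truncated and the category labels paraphrase the idea; they are not the authors' exact definitions.

```python
# Toy sketch of a geo-consistency check: compare a prefix's measured country
# against the allocating RIR's service region and the registrant's country.
# The mapping is tiny and illustrative; labels paraphrase the paper's taxonomy.
RIR_REGION = {
    "RIPE NCC": {"LT", "LV", "EE", "DE", "SE"},
    "ARIN": {"US", "CA"},
    "AFRINIC": {"ZA", "NG", "KE"},
}

def classify(rir: str, registrant_cc: str, measured_cc: str) -> str:
    in_region = measured_cc in RIR_REGION.get(rir, set())
    matches_org = measured_cc == registrant_cc
    if in_region and matches_org:
        return "consistent"
    if in_region:
        return "in-region, but outside the registrant's country"
    return "out-of-region"

print(classify("AFRINIC", "ZA", "US"))   # -> out-of-region
```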
nmaas is an open-source, multi-tenant cloud platform designed for easy, on-demand deployment of applications across distributed Kubernetes clusters (https://docs.nmaas.eu). Built around an extensible catalogue of applications, nmaas allows users to deploy software tools with minimal effort, supporting network operations through process automation.
Originally focused on network management applications, nmaas now serves a broad range of real use cases across academic, research and network operator communities. Users can deploy and customize application instances via a self-service web portal, either on the GÉANT-managed instance or a self-hosted nmaas installation.
nmaas for Virtual Network Operations Center (vNOC) is meant for teams tasked with network or service monitoring responsibility that have limited resources to develop and/or maintain their own NMS, or that plan to migrate their existing monitoring tools to the cloud. With nmaas, open-source software tools are deployed on a cloud infrastructure, potentially distributed across multiple locations. Using a multi-tenant approach and software-based VPNs, each team has private access to their isolated network management toolset without the burden of underlying IT infrastructure maintenance. Using a single nmaas instance, users can deploy applications across multiple remote Kubernetes clusters. This architecture is particularly valuable for managing distributed setups, such as edge nodes or regional clusters. Application data never leaves the environment where the application has been deployed, thus ensuring complete data sovereignty. nmaas’ architecture also allows it to take advantage of hardware accelerators such as GPUs, thus enabling the deployment of AI-driven network analysis and monitoring tools.
The nmaas platform is actively used for infrastructure management and monitoring as part of multiple research projects, including production environments in Poland. This presentation will offer an in-depth look at nmaas, highlighting key features and use cases, and will include a demo of core functionalities such as GitOps-based configuration of representative applications.
Do you have probes installed to monitor your network border? Have you ever struggled to estimate the fallout perimeter when a probe alerts on quality deterioration? We show an example of how to firmly map probes to interconnect points and, secondly, how to identify which other traffic is likely affected by the same problem.
This presentation covers the free tools and services Team Cymru offers. A few more slides will be added or updated as new features come online, but this covers most of them.
Piracy Shield from an operator PoV
Modern network infrastructure management faces a critical challenge: up to 95% of network changes are performed manually, resulting in operational costs 2-3 times higher than network hardware costs. This presentation demonstrates how automated asset discovery platforms eliminate costly human errors while creating seamless integration workflows between monitoring, security, and management systems.
The Human Error Crisis: Manual device configuration accounts for the majority of unplanned outages, with router misconfigurations being the primary cause. Organizations without automation face severe consequences: security misconfigurations lead to data breaches, financial losses, operational disruptions, and regulatory penalties. A single misconfigured firewall rule or outdated system can expose entire networks to sophisticated attacks.
Automated Discovery Architecture: This session showcases passive traffic monitoring through NetFlow, sFlow, and IPFIX protocols for real-time asset identification without network overhead. Unlike active scanning approaches requiring infrastructure modifications, passive monitoring provides comprehensive visibility across hybrid environments including on-premises, cloud, and distributed networks.
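To illustrate what passive discovery from flow records can look like in practice, here is a minimal sketch that folds already-decoded flow records into an asset inventory. The record fields and the "local" prefix are assumptions; decoding NetFlow, sFlow, or IPFIX on the wire is left to a collector and is not shown.

```python
# Minimal sketch: build an asset inventory from already-decoded flow records.
# Field names and the local prefix are illustrative; a real deployment would
# feed this from a NetFlow/sFlow/IPFIX collector.
from ipaddress import ip_address, ip_network

LOCAL = ip_network("10.0.0.0/8")

flows = [
    {"src": "10.1.2.3", "dst": "198.51.100.7", "dst_port": 443, "proto": 6},
    {"src": "10.1.2.3", "dst": "10.9.9.9", "dst_port": 53, "proto": 17},
    {"src": "203.0.113.5", "dst": "10.9.9.9", "dst_port": 53, "proto": 17},
]

inventory: dict[str, set[tuple[int, int]]] = {}
for f in flows:
    if ip_address(f["src"]) in LOCAL:
        inventory.setdefault(f["src"], set())        # local host seen talking
    if ip_address(f["dst"]) in LOCAL:
        # apparent service offered by a local host: (protocol, destination port)
        inventory.setdefault(f["dst"], set()).add((f["proto"], f["dst_port"]))

for ip, services in sorted(inventory.items()):
    print(ip, sorted(services) or "no inbound services observed")
```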
Intelligent System Integration: Live demonstrations illustrate automated workflows connecting Asset Discovery platforms with monitoring systems like Zabbix, Configuration Management Databases (CMDB), and Security Information Event Management (SIEM) solutions. Automated CMDBs can scan networks in near-real-time, providing comprehensive, accurate views of network devices and their dependencies.
Practical Implementation Examples: Zero-touch device onboarding, automated security policy enforcement, intelligent change management with impact analysis, and cloud-hybrid visibility chains. These scenarios demonstrate how automation reduces mean time to detect threats from 34 days to real-time detection, while ensuring continuous compliance monitoring.
Measurable Business Impact: Organizations implementing automated asset discovery report dramatic improvements: incident response times reduced from hours to minutes, configuration errors eliminated by 90%, and security policy violations detected instantly rather than weeks later. The presentation concludes with actionable implementation strategies for immediate deployment in production environments.
The network automation community, for the most part, agrees that to automate the network and higher-level services, we need structured data, ideally stored in a centralized location, often referred to as the Source of Truth (SoT).
We have seen increased advocacy for a better way of governing the data in the SoT, and the relationships between them, using a design-driven approach. The goal is to move away from managing individual data objects in the SoT towards grouping related data together as a single abstracted entity used for configuring or verifying the network state.
To help explain this approach, we want to present an implementation of the idea that provides a full lifecycle of the design-driven approach. The idea is simple: to offer the network engineers (or consumers, in general) an entry point with minimal input data that is expanded according to the design. The design implementation can then be deployed to the network, updated with different input data, adjusted for the new design revision, and, finally, decommissioned. All of this takes into account data interdependencies (i.e., you can’t change the data that a design owns, or decommission a design that another design depends on).
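A minimal sketch of the lifecycle and dependency rules described above, with invented class and method names; the actual implementation presented in the talk will differ.

```python
# Sketch of a design-driven Source of Truth lifecycle with dependency checks.
# Class and method names are invented for illustration.
class Design:
    def __init__(self, name: str, inputs: dict, depends_on: list["Design"] | None = None):
        self.name = name
        self.inputs = inputs                  # minimal user-supplied data
        self.depends_on = depends_on or []
        self.owned_objects: list[str] = []    # data this design expands and owns
        self.deployed = False

    def deploy(self) -> None:
        # Expand minimal inputs into the full set of SoT objects this design owns.
        self.owned_objects = [f"{self.name}:{k}={v}" for k, v in self.inputs.items()]
        self.deployed = True

    def update(self, new_inputs: dict) -> None:
        # Re-expand with new inputs; owned objects are replaced, never edited directly.
        self.inputs = new_inputs
        self.deploy()

    def decommission(self, all_designs: list["Design"]) -> None:
        dependents = [d.name for d in all_designs if self in d.depends_on and d.deployed]
        if dependents:
            raise RuntimeError(f"{self.name} still required by: {dependents}")
        self.owned_objects.clear()
        self.deployed = False

core = Design("core-fabric", {"asn": 64500})
edge = Design("edge-pop", {"site": "VNO1"}, depends_on=[core])
core.deploy(); edge.deploy()
# core.decommission([core, edge])  # would raise: edge-pop still depends on core-fabric
```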
Trust is essential for successful network automation adoption.
When automation platforms exhibit predictable behaviors and transparent processes, teams can confidently delegate critical network operations. Building trustworthy automation doesn't happen by itself; it needs to be baked into the design of every workflow. This technical session examines core principles that build trust, including idempotency, declarative workflows, and robust version control. Using practical examples from production environments, we'll analyze how specific technical decisions affect automation reliability and team confidence. The presentation covers key implementation patterns like state verification, diff-based changes, and failure handling. Attendees will learn concrete approaches for building automation platforms that network teams can trust and rely on daily.
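As a small illustration of the idempotent, diff-based pattern the session covers, here is a hypothetical sketch; the state model and function names are invented rather than taken from any specific platform.

```python
# Sketch of an idempotent, diff-based change: compute the delta between desired
# and actual state, apply only the delta, then verify. Names are illustrative.
from typing import Dict

def diff(desired: Dict[str, str], actual: Dict[str, str]) -> Dict[str, str]:
    """Return only the keys whose desired value differs from the device."""
    return {k: v for k, v in desired.items() if actual.get(k) != v}

def apply_change(device_state: Dict[str, str], desired: Dict[str, str]) -> Dict[str, str]:
    delta = diff(desired, device_state)
    if not delta:
        return {}                            # idempotent: nothing to do, nothing touched
    device_state.update(delta)               # stand-in for a real push (API/NETCONF/CLI)
    failed = diff(desired, device_state)     # post-change verification
    if failed:
        raise RuntimeError(f"verification failed for: {sorted(failed)}")
    return delta

state = {"hostname": "pe1", "ntp": "192.0.2.123"}
print(apply_change(state, {"hostname": "pe1", "ntp": "192.0.2.10"}))  # {'ntp': '192.0.2.10'}
print(apply_change(state, {"hostname": "pe1", "ntp": "192.0.2.10"}))  # {} on the second run
```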
This presentation outlines the essential pillars for successfully launching and scaling an Internet Service Provider (ISP). We begin by defining the core technical infrastructure—network architecture, routing protocols, bandwidth provisioning, peering arrangements, and redundancy strategies—to ensure high availability and optimal performance. Next, we address security imperatives and compliance with data protection regulations to safeguard both network integrity and customer privacy. We then examine financial considerations, from capital expenditures on switching and routing equipment to operational costs such as power, cooling, and staffing. Regulatory compliance is also touched on, including interconnection agreements and evolving net neutrality frameworks. Monitoring and management practices are highlighted, detailing real-time network telemetry, quality-of-service tracking, automated alerting, and capacity planning to preempt congestion and outages. Additional focus areas include customer support and service-level agreements (SLAs), as well as partnerships with content delivery networks (CDNs) and cloud providers. Finally, we discuss balancing quick wins with long-term scalability to guide entrepreneurs and technical leaders in building a resilient, competitive, and compliant ISP.
The main challenges ISPs in Ukraine face every day, and what to do about them.
Recent geopolitical developments have led to severe disruption of crucial systems that society relies on, even in countries that are not part of any ongoing conflict. This includes intentional sabotage of infrastructure or denial of access to critical support systems. Global Navigation Satellite Systems (GNSS) are among the systems that society depends on, not only for positioning but also for time and frequency. Time and frequency from a GNSS receiver are widely utilized, as receivers are inexpensive and easy to install. A number of the systems that are crucial for society are GNSS-dependent in terms of correct and traceable time and frequency. These include systems critical for the functioning of energy, communication, and security systems, as well as health systems, for entire countries.
National time is based on the national time scale of each country and can be distributed via various different routes to end users. The main issue with time and frequency is that while it is a key part of critical infrastructure, with society dependent on its correct functioning, the market is not willing to cover the current costs for delivering correct traceable time. Another issue is that the publicly funded National Metrology Institutes (NMIs) that realize the national timescales have limited resources, which results in end users not having access to correct traceable time and frequency, and instead relying on GNSS without redundancy.
To meet current and future needs, research and development must be carried out regarding time and frequency distribution in existing fibre infrastructure, as well as its coexistence with communication methods.
Here we present the latest work on traceable national distribution of time and frequency from the Swedish national time scale. The distribution takes place from a number of different locations in Sweden, each location having redundant time scales based on atomic clocks.
Segment Routing has replaced the legacy MPLS control plane (LDP/RSVP) in many ISP networks. Traffic Engineering in SR is achieved by the headend router pushing a set of segments (MPLS labels or IPv6 extension headers) onto the packet that instruct the network how to steer it.
The talk is about the algorithms used to generate the segment list; I want to share the experience I gained while developing an SR-TE controller.
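For readers who have not worked with SR-TE, below is a toy version of one family of such algorithms: greedily encoding an explicit path into node SIDs, emitting a new SID whenever the accumulated sub-path stops being an IGP shortest path. The graph, metrics, SID values, and helper names are invented, and real controllers must also handle ECMP, adjacency SIDs, and platform label-stack limits.

```python
# Toy segment-list encoder: compress an explicit path into node SIDs, starting a
# new segment whenever the accumulated sub-path stops being an IGP shortest path.
# Graph, metrics, and SIDs are invented; ECMP and adjacency SIDs are ignored.
import heapq

GRAPH = {          # node -> {neighbor: IGP metric}
    "A": {"B": 10, "D": 5},
    "B": {"A": 10, "C": 10, "D": 30},
    "C": {"B": 10, "D": 5},
    "D": {"A": 5, "C": 5, "B": 30},
}
NODE_SID = {"A": 16001, "B": 16002, "C": 16003, "D": 16004}

def spf(src: str) -> dict[str, int]:
    """Plain Dijkstra over GRAPH from src."""
    dist, heap = {src: 0}, [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in GRAPH[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def encode(path: list[str]) -> list[int]:
    sids, anchor, cost = [], path[0], 0
    for prev, node in zip(path, path[1:]):
        cost += GRAPH[prev][node]
        if spf(anchor).get(node) != cost:     # sub-path no longer a shortest path
            sids.append(NODE_SID[prev])       # close the segment at the previous hop
            anchor, cost = prev, GRAPH[prev][node]
    sids.append(NODE_SID[path[-1]])
    return sids

# Force traffic onto A->B->C even though A->D->C (cost 10) is the IGP shortest path:
print(encode(["A", "B", "C"]))   # -> [16002, 16003]
```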
- SD-WAN network designs - one size fits all
- SD-WAN in MPLS networks - yes or no?
- AI for SD-WAN deployment and maintenance - a short demo with an AI agent helping with NOC operations for SD-WAN network troubleshooting and deployment
The presentation will also include a 5 min Demo
Designing reliable, adaptable network services at scale demands more than just automation scripts. It can be achieved with orchestration systems that are modular by design, aligned with industry standards, and capable of evolving across domains. This principle shaped the development of a production-grade orchestration stack for network service provisioning, delivered for the Polish NREN, PIONIER, under the GÉANT GN5-2 project. Built on TM Forum compliant models, dynamic inventory, and workflow automation, the system integrates seamlessly from service design to device configuration, thus demonstrating how structured orchestration can move from architecture to deployment with clarity and control.
The experience has shown that as network orchestration becomes increasingly complex, traditional coding approaches struggle to keep pace with the demand for rapid, yet standards-aligned service design. Thus, our aims extended to exploring the use of Large Language Models (LLMs) as active copilots in the architecture, design, and implementation of full-stack orchestration solutions. Rooted in the idea of vibe coding, we guide the LLM to code every piece, such as workflow tasks and operators for the Source of Truth, as well as schema-compliant payloads and retry-aware logic.
The talk shares insights from both phases: the architectural choices behind the working system, and the experimental use of AI to reproduce or extend parts of it. We discuss practical prompt strategies for generating modular logic, enforcing standards such as the use of TM Forum Open APIs, and guiding newcomers through complex orchestration patterns.
Findings show that while careful human guidance remains critical, prompt-driven workflows can reduce the time to design and deploy, support maintainability, and offer a viable onboarding path for teams new to network automation. Combined with open APIs and reusable building blocks, this approach enables orchestration that is not only functional, but also teachable, transferable, and ready for broader adoption.
RFC 5549 (obsoleted by RFC 8950) specifies a way of announcing IPv4 routes using IPv6 next hops, allowing networks and IXPs to remove IPv4 link addressing. There is a working group at Euro-IX that aims to test the interoperability of RFC 8950 implementations and lay out the basics of its usage in an IXP environment. There are also a handful of IXPs already testing RFC 8950 route servers.
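For context, here is a minimal FRR-style sketch of what enabling RFC 8950 on a session can look like; the AS numbers and addresses are documentation examples, and the route-server specifics from the Euro-IX work are not shown.

```
! Hedged FRR sketch: carry IPv4 unicast over an IPv6-only session with IPv6
! next hops (RFC 8950). Addresses and AS numbers are illustrative.
router bgp 64500
 neighbor 2001:db8:100::2 remote-as 64501
 neighbor 2001:db8:100::2 capability extended-nexthop
 address-family ipv4 unicast
  neighbor 2001:db8:100::2 activate
 exit-address-family
```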
Despite the growing adoption of Internet Exchange Points (IXPs) across the globe, many stakeholders—including network engineers, enterprise IT teams, and even decision-makers—often harbor misconceptions about the actual role of an IXP within the internet ecosystem. One of the most common misunderstandings is equating IXPs with IP transit providers.
This presentation provides a comprehensive clarification of the fundamental purpose of IXPs. It will elaborate on how IXPs function as neutral Layer 2 platforms that facilitate direct interconnection between Autonomous Systems (AS), enabling efficient traffic exchange through bilateral or multilateral peering relationships. The presentation will emphasize the architectural distinctions between IXPs and transit services, the requirements for participating in an IXP, and the operational implications of peering.
By addressing prevalent myths and illustrating practical use cases—including traffic optimization and cost reduction—this session aims to reinforce best practices in network interconnection. Attendees will leave with a clearer understanding of how to leverage IXPs effectively to enhance network performance, resilience, and scalability, particularly within local and regional contexts.