Zinkworks Participation at O-RAN ALLIANCE Global PlugFest Fall 2024

Zinkworks participated in the recent O-RAN ALLIANCE Global PlugFest Fall 2024, where we collaborated with Ericsson to demonstrate the integration of our respective technologies in line with O-RAN ALLIANCE standards.

The O-RAN Global PlugFests are periodic events organized and co-sponsored by the O-RAN ALLIANCE to enable efficient progress of the O-RAN ecosystem through well-organized testing and integration. PlugFest participants gain early experience in many areas relevant for developing, deploying, and operating O-RAN based solutions.  

The featured use case is a “PowerCell Management rApp”, which helps mobile operators manage power-saving policies.

Below is an overview of the Zinkworks Power Saving rApp components:

  • PowerCell UI: This user interface displays radio sites and cells, providing a comprehensive view of power consumption. It enables users to configure power-saving policies at each cell level and visualizes the power-saving actions and their impact on power consumption. 
  • Data Exposure Service: Provides REST API for the UI, facilitating seamless interaction and data exchange. 
  • PowerCell Manager: Determines if a carrier needs to be enabled or disabled on a specific cell based on traffic predictions. It orchestrates the complete workflow, including data collection, processing, and predictions. 
  • EIC Connector: Responsible for all interactions with the Ericsson SMO/Non-RT RIC Platform, ensuring smooth communication and integration. 
  • NTP Service: The Network Traffic Prediction service predicts network traffic (both downlink and uplink traffic volume), aiding in efficient power management. 
  • Scheduler: Triggers REST endpoints to collect metrics and perform predictions, ensuring timely and accurate data processing.
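
To make the flow between these components concrete, here is a minimal sketch of the kind of periodic trigger the Scheduler performs; the service names, endpoint paths, and interval are illustrative assumptions, not the actual rApp API:

// Illustrative sketch only: service names, paths, and the interval are assumptions.
const COLLECT_URL = 'http://powercell-manager:8080/metrics/collect';
const PREDICT_URL = 'http://ntp-service:8080/predict';

async function runCycle(): Promise<void> {
  // Ask the PowerCell Manager to collect the latest cell metrics.
  await fetch(COLLECT_URL, { method: 'POST' });

  // Ask the NTP service for downlink/uplink traffic-volume predictions.
  const response = await fetch(PREDICT_URL, { method: 'POST' });
  const prediction = await response.json();
  console.log('Predicted traffic volume:', prediction);
}

// Trigger the collection/prediction cycle on a fixed schedule (every 15 minutes here).
setInterval(() => runCycle().catch(console.error), 15 * 60 * 1000);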

You can watch our Power Saving rApp demo here.

During the event, Zinkworks and Ericsson jointly performed an integration to verify the R1 Services using the O-RAN Energy Saving use case, as defined in clause 4.8.1.3.1 of the O-RAN WG2 Use Cases and Requirements specification. Ericsson has developed a Non-RT RIC framework known as the Ericsson Intelligent Controller (EIC), which is fully aligned with O-RAN ALLIANCE standards. This framework serves as a robust foundation for integrating various rApps, including Zinkworks' Power Saving rApp, which is designed according to the O-RAN use case specification (WG2) for Energy Saving requirements.

Looking ahead, plans are in place to expand testing to cover additional R1 APIs, further enhancing the capabilities and interoperability of the solutions.

If you would like to learn more about the Zinkworks Power Saving rApp, please contact our team at sales@zinkworks.com.


Zinkworks Automates Network Management for a Tier 1 UK Communication Service Provider with Google Cloud

In an era where rapid expansion strains operational capacities, how can leading Communication Service Providers (CSPs) maintain efficiency while scaling up?

Zinkworks, in collaboration with Google Cloud, is excited to announce an agreement with a Tier 1 UK CSP to transform their Operational Support Systems (OSS) into a cloud-native environment. This initiative will implement autonomous network management using advanced AI and intent-based automation technologies, aiming to reduce network management costs by up to 70%.

 

Current Challenges in Network Management  

The CSP is embarking on a footprint expansion program to increase service coverage and drive growth. However, using current operational processes, this subscriber growth would significantly increase operational costs due to:

  • A higher volume of tickets handled by operations.
  • An increased number of field visits required by engineers.

These factors could lead to a fourfold increase in support staff over five years, making the current approach unsustainable.

Key challenges in managing an optical network include:

  • Extensive Manual Work: Numerous manual interventions extend cycle times for planning, building, testing, and running the network, increasing the likelihood of errors.
  • Siloed Data: Fragmented data storage leads to consistency issues and hinders timely insights crucial for effective management.
  • Lack of Proactive Monitoring: The absence of comprehensive monitoring and troubleshooting capabilities impedes effective network management.
  • Inconsistent Configurations: Ad hoc CLI changes result in inconsistent and erroneous configurations.
  • Limited Predictive Capabilities: Difficulty in foreseeing potential outages and prescribing solutions increases downtime and disruption.
  • Absence of Standardized Processes: Lack of standardization adds complexity, cost, and time to network management.

The CSP sought a solution to overcome these constraints, aiming to unlock subscriber growth while delivering state-of-the-art automated support.

 

Zinkworks' Vision for an Autonomous Network

To address these challenges, Zinkworks and Google Cloud are working together to enable the CSP to autonomously plan, build, test, and run its optical broadband network. This marks a shift from manual effort to data-driven decision-making, leveraging AI-driven automation and intent-based network models.

Key components of the solution include:

  • Unified Data Lake: Establishing a single data repository in Google Cloud to enable efficient AI use cases through the Vertex AI platform and BigQuery ML, breaking down data silos and fostering seamless data flow.
  • Cloud-Native OSS Transformation: Transitioning OSS systems to Google Cloud, consolidating multiple systems into a streamlined, agile environment.
  • Rapid Innovation: Harnessing AI models to deliver new network use cases in weeks, such as predictive issue detection and enhanced field force efficiency with Generative AI automation.
  • Autonomous Network Creation: Implementing an intent-based automation architecture capable of executing network changes automatically, leading to self-healing networks that resolve issues without human intervention.

 

Benefits of an Autonomous Network

This cloud-native platform delivers substantial benefits:

  • Reduced Lead Time and Manual Effort: Streamlined processes minimize manual interventions and cut down lead times.
  • Real-Time Insights: A unified data fabric provides instant access to insights, enhancing decision-making and operational efficiency.
  • Enhanced Visibility and Control: Comprehensive monitoring with a live network and service topology view enables proactive management.
  • Consistency and Traceability: Centralized configuration management ensures consistent and reliable network environments.
  • Predictive Analytics and Automation: AI/ML-driven analytics minimize the impact of outages, ensuring higher uptime and better service quality.
  • Security: Google Cloud's compliance with security standards ensures data encryption at rest and in transit, maintaining a secure environment.
  • Cost and Complexity Reduction: Standardized, data-driven processes reduce costs, complexity, and time, enhancing efficiency and scalability.
  • Fully Automated Operations: The shift to automated and orchestrated operations prepares the network to handle future demands with agility.

These benefits lead to significant operational efficiencies, enabling aggressive subscriber growth without being constrained by operational limitations.

 

The Future of Network Management

This collaboration marks a significant step towards revolutionizing network management. By leveraging a combination of Zinkworks' expertise in OSS transformation and Google Cloud's capabilities, the CSP can overcome current challenges to create a more efficient, scalable, and automated network. This transformative journey positions the network to meet future demands with ease and agility.

For more information or to discuss how this transformation can benefit your organization, please contact sales@zinkworks.com.


Zinkworks rApp Series: Revolutionising 5G Core Networks with Network Traffic Prediction rApp

Introduction 

In the rapidly evolving world of 5G technology, where network demands are constantly increasing, accurate prediction of network traffic has become a critical element for efficient network management, especially in O-RAN. To address this challenge, Zinkworks has introduced a solution that aims to revolutionise network traffic prediction and management within the 5G core network. This blog showcases the significance of Network Traffic Prediction (NTP) and its profound implications for the telecommunications industry.
 

Use Case Overview 

Network traffic is the primary carrier of data transmission within modern telecommunications networks. It consists of the packets that carry the network's load. With the introduction of 5G technology, there is a growing need for intelligent prediction of cellular traffic loads. Being able to predict the number of packets per second (PPS) or bytes per second is crucial for optimising network operations, especially within the 5G core network.
 

Solution Aim 

Our main goal is to implement a scalable, centralised machine learning model hosted on the RAN Intelligent Controller (RIC). This model will enable us to conduct proactive traffic analysis and load prediction for thousands of connected Open Radio Units (O-RUs) within the cluster. Our key objectives include scalability, centrality, adherence to MLOps principles, and meeting time-constrained inference processes.

 

Methodology Overview 

The NTP model is a cutting-edge solution that combines graph transformation with deep-learning ML models. It captures temporal, spatiotemporal, and dynamic correlations between network elements, providing businesses with unparalleled insights into complex network structures. With the NTP model, network orchestrators can gain a competitive edge and confidently make data-driven decisions, all while maintaining the utmost security and confidentiality.

 

Modules Description 

  1. Graph Transformation

The initial phase involves transforming the network into a graph representation. Site coordinates become graph nodes, while edges are derived from site profiles and the azimuths of local antennas. Historical traffic loads and additional parameters are encoded as node features, with edge weights reflecting geographical distances and handover occurrences. The resulting adjacency matrix is normalised so the features are ready for computation.
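
As a simplified illustration of this idea (the field names and distance-based weighting below are assumptions for the sketch, not the production model), sites can be mapped to nodes and distance-weighted edges as follows:

interface Site {
  id: string;
  lat: number;
  lon: number;
  trafficHistory: number[]; // encoded as node features
}

// Great-circle distance in km between two sites.
function haversineKm(a: Site, b: Site): number {
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLon = toRad(b.lon - a.lon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 6371 * 2 * Math.asin(Math.sqrt(h));
}

// Build a normalised adjacency matrix: closer sites get larger edge weights.
function buildAdjacency(sites: Site[]): number[][] {
  const n = sites.length;
  const adj = Array.from({ length: n }, () => new Array<number>(n).fill(0));
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      if (i !== j) adj[i][j] = 1 / (1 + haversineKm(sites[i], sites[j]));
    }
  }
  // Row-normalise so each node's outgoing weights sum to 1.
  return adj.map((row) => {
    const sum = row.reduce((s, w) => s + w, 0) || 1;
    return row.map((w) => w / sum);
  });
}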

  2. Model Design and Training

NTP integrates a robust deep learning model with automated optimisation processes to improve accuracy. This includes fine-tuning layer sizes, selecting activation functions, and optimising learning rates. The resulting Docker image is seamlessly deployable within the cluster and integrates effortlessly with other services via a REST API.

 

The Results 

Our cutting-edge solution is revolutionising traffic prediction with its adaptive accuracy, providing users with the ability to forecast traffic load with an unparalleled level of precision. The model utilises a threshold set by the user to predict traffic, and our initial assessments show that it can achieve up to 80% accuracy for predicting traffic load one day ahead. With shorter prediction windows, the accuracy potential soars to 92.5%, forecasting just minutes ahead of time. Our solution is the perfect tool for any 5G orchestrator requiring accurate traffic prediction, giving network operators the confidence to make informed decisions based on real-time data. 

 

Conclusion 

Introducing Zinkworks' NTP framework - a network traffic prediction solution for 5G core networks. With cutting-edge machine learning techniques at its core, NTP offers unparalleled scalability, centrality, and real-time insights vital for optimising network operations in the 5G era. NTP's transformative impact on the telecommunications industry is undeniable, ushering in a new era of efficiency and reliability. With NTP, telecommunication service providers can experience a paradigm shift in network traffic prediction, resulting in a superior user experience, reduced downtime, and improved overall network performance.

If you would like to learn more about NTP, please contact our team at marketing@zinkworks.com or arrange to speak with us at MWC.


Demystifying Deployment of Nephio Workload Clusters on Multi-Cloud

Introduction

Welcome back to our technical deep-dive series on Nephio! Following our previous discussion of the capabilities of Nephio, this blog post focuses on how we can extend Nephio to deploy workload clusters on multiple clouds. We will walk through creating and deploying a new workload cluster kpt package for Azure AKS, using the Cluster API Kubernetes operator.

Nephio Tool Chain 

Nephio's fundamental elements comprise Nephio controllers and a collection of open-source tools. The primary tools integrated into the framework are kpt, Porch, Gitea, and ConfigSync. Before delving into the deployment of the workload cluster package, we will go through these core tools and their key functionalities.

Diagram 1: Nephio Deployment

 

1 - Kpt

Kpt is a tool that simplifies the way we manage and customize Kubernetes configurations. It treats these configurations as data, which means they're the master plan that defines how things should work in a Kubernetes cluster. kpt organizes these configurations into packages, which are like folders containing all the instructions Kubernetes needs, written in YAML files.

A kpt package is identified by a special file called a Kptfile which holds information about the package, kind of like a table of contents. Just as you can have folders within folders on your computer, kpt packages can contain subpackages, allowing for complex but organized configurations.

Nephio relies on kpt for these capabilities:

  • Automation: kpt brings the automation needed for managing configurations at scale.
  • Customization: It allows for the necessary customization of configurations, which is a common requirement for real-world deployments. This is also possible with tools like Helm, but those rely on templates and can lead to over-parameterization.
  • Version Control: kpt uses Git for version control, providing a familiar workflow for managing changes.
  • Interoperability: With its function-based approach, kpt ensures that different tools and processes can work together smoothly.

 

2 - Porch

Porch is one of the key components within the Nephio project aimed at simplifying the management of Kubernetes configurations. Its approach to package orchestration underscores Nephio’s intent-driven automation by ensuring that the high-level objectives of network deployments are translated into effective, actionable configurations. It facilitates the creation, tracking, and updating of KRM files as kpt packages. These packages contain configuration data that define how software should run in a Kubernetes cluster. Porch also offers a WYSIWYG (what you see is what you get) experience and backs the main UI for Nephio in the R1 release.

In essence, Porch is designed to handle these major tasks:

  • Package Versioning: It keeps track of different versions of configuration packages, allowing for easy updates and rollbacks.
  • Repository Management: Porch helps manage repositories where these packages are stored, making it easier to organize and access the configurations needed for deploying network services.
  • Package Lifecycle Management: From creating new configurations to proposing changes and publishing final versions, Porch automates and streamlines the entire process.
  • Deployment Readiness: It ensures that once configurations are deemed ready, they can be deployed to the actual Kubernetes environments whether they're in the cloud or at the network edge.

A natural question is why we need both Porch and kpt: in simple terms, kpt is the client-side tool, while Porch is the server-side service that handles packages and their lifecycle.

 

3 - Configsync

Nephio R1 relies on ConfigSync to implement GitOps capability, but this can easily be replaced with any other GitOps tool. It keeps the KRM resources on the cluster in sync with the kpt package revision in the Git repository. Porch manages the lifecycle of the packages, but it is ConfigSync that applies and actuates the Kubernetes resources. Later in this post, we will detail how Argo CD can be used as a GitOps tool to sync packages to a cluster.

 

4 - Gitea

Gitea is the primary Git tool that comes with the R1 release of Nephio. This is where the repositories are created, which will be registered and managed through Porch. There are two types of repositories: a blueprint repository, which holds model packages, and a deployment repository, which contains package instances. Gitea can also be replaced with any other Git service, such as GitHub.

 

Deploying Workload Clusters on Azure

Nephio’s primary capabilities include the ability to provision and configure multi-vendor cloud infrastructure as workload clusters. While it natively supports kind and GCP as the designated cluster platforms, it is adaptable to other cloud providers through the installation of the respective custom resources and the creation of the necessary kpt packages. The upcoming sections will guide you through creating the Nephio blueprint upstream package and provisioning the workload cluster downstream package on Azure.

Pre-requisites:

  • An active Azure account.
  • An existing Kubernetes cluster.
  • Access to a GitHub account for repository management.

In this demo we used Nephio’s install script to install Nephio on a Google Cloud VM. This installs a Kubernetes cluster as well as the Nephio core components. If you do not have access to GCP, you can convert any Kubernetes cluster into a Nephio system by installing these components separately, as shown in the diagram below.

 

Installing Nephio Components (Optional)

Follow the steps below to install the Nephio components. This will convert any Kubernetes cluster into a Nephio management cluster.

 

Install the Cluster API (CAPI) Azure Provider

To allow the Nephio management cluster to provision a workload cluster on Azure, we need to install the Cluster API Azure provider operator. Below is a snippet of an Azure installation.

 

We can generate the Azure AKS CAPI cluster configuration using the clusterctl generate cluster command for AKS and then modify the template to suit our site-specific requirements. The generated configuration will have definitions for all the Azure custom resources, namely the Azure managed control plane, the managed machine pool, and the CAPI cluster. These are used by the provider to create the AKS cluster.

 

KPT Package Structure

We want to maintain blueprints that are common to all Azure CAPI clusters in a single folder. We create an upstream package, "cluster-capi-aks", which contains the YAML configuration to create the Azure cluster and a Kptfile with setters that need to be updated when the final package is created. The nephio-workload-cluster-aks package is the downstream kpt package that will be cloned and deployed as a workload cluster.

 

A high-level overview of the files in the kpt package is shown below.

 

PackageVariant Configuration

The PackageVariant lets you automate the creation and lifecycle management of a specific configuration variant derived from a source package or upstream (cluster-capi-aks in the blueprints-infra-aks repo). Key aspects include:

  • Upstream & Downstream: Specifies the source (upstream) of the configuration and where the modified configuration (downstream) will be stored.
  • Injectors: Uses the ConfigMap named azure-context to inject specific Azure configuration details into the package.
  • Pipeline Mutators: Defines a sequence of KRM functions (set-annotations, apply-replacements, apply-setters) for transforming the package. These functions modify the package based on the provided configuration data.

 

Azure Context (ConfigMap)

The azure-context ConfigMap includes essential Azure-related configuration data such as the subscription ID, client ID, tenant ID, and more. Marked as required for config injection (kpt.dev/config-injection), this data is crucial for tailoring the package to a specific Azure environment.

 

Setters (ConfigMap)

This setters ConfigMap holds key-value pairs for various configuration settings, allowing for dynamic modification of package resources. This includes subscription ID, client ID, tenant ID and all other environment variables that are unique to the deployment and need to be replaced inside the Azure cluster configuration using the setters function.

 

Apply Replacements

The ApplyReplacements configuration specifies how to propagate certain values throughout the package. It ensures consistency across various components by dynamically updating fields based on the values defined in azure-context and setters ConfigMap.

 

Kptfile

The Kptfile is central to managing the package lifecycle with kpt. It describes the package (nephio-workload-cluster-aks) and defines the pipeline of functions to be applied to the package, ensuring the desired transformations are executed.

 

The user-defined values are configured in the Azure context file, which holds this data. Because azure-context.yaml is managed within Git, the annotation config.kubernetes.io/local-config is set to false.

 

Registering Blueprint repo in Porch

To integrate the above blueprint repository with Porch for cloning and deployment, apply the following YAML to create the repository and the associated secret:

 

Once the repo is registered, you should see all the revisions and tags available as remote packages.

 

Create Package Revisions

We need to create package revisions to allow configsync to execute PackageVariant definitions on the management server.

The first step is to clone the nephio-workload-cluster-aks package to the target repository, then propose and approve.  It is not required to modify anything locally as all rendering will be done by package variant based on the pipeline setup.

The usual flow of a package is depicted below.

 

The first time you register the repo, only the main branch and main revision will be visible. When creating package revisions for the first time, use the clone command; for any subsequent revision, use the copy command.

 

In this demo, our target repository is “mgmt”, and we want the package to be executed on the mgmt cluster. Once the package is approved in Porch, ConfigSync, which has a 1:1 mapping with the mgmt repository, will execute the packages as each new revision is made available.

This package revision clones the upstream package, performs kpt fn rendering, and creates the Azure workload cluster.

 

We can use PackageVariantSet to create multiple clusters at the same time based on selectors, if required. It uses a template similar to PackageVariant for actuating upstream packages.

It is important to mention that additional NF-related CRDs and network configurations may be required on the workload cluster to deploy the free5GC core and OAI RAN, which are not covered in this blog.

 

Gitops with argoCD in Nephio

We can also use Argo CD as a GitOps tool in Nephio to sync deployment repositories to workload clusters. However, it is important to note that Argo CD does not support the full feature set of kpt packages, such as functions.
Follow these steps to install and configure Argo CD on your Kubernetes clusters.

 

There are a few steps involved in creating a sync between the workload repo and the newly created workload cluster.

  1. Get the cluster kubeconfig. In this demo, we created an Azure cluster, so we used the Azure CLI to retrieve and save the kubeconfig.
  2. Add the cluster to Argo CD.
  3. Add the repo to Argo CD. For the demo, we placed a simple nginx deployment in the repo.
  4. Create an application that defines the source repo and the destination Kubernetes cluster. More detailed configuration is available for how you want to perform the sync; in this demo we create a simple config without the autosync option enabled.

 

Once the application is created, click Sync to perform the sync. Autosync can be enabled either via declarative configuration or through the UI.


Nephio: A Game-Changer in 5G Network Automation

Deploying and managing 5G Network Functions and edge applications on a massive scale across multiple cloud vendors and edge clusters has been a daunting challenge for many of our customers. This process involves extensive planning, spanning months, to identify and provision the necessary infrastructure, network functions, and their configurations. The management of deployments, including day zero (0), day one (1) & day two (2) configurations, becomes a substantial task, particularly when dealing with systems at scale, each having its own set of automation tools. 

Before any significant deployment, we typically observe product design and customer service teams dispersed across various regions collaborating using numerous spreadsheets, Word documents, and other tools to document configuration values. These values eventually find their way into Helm charts or other YAML files, serving as inputs to standalone scripts or localized pipelines. Coordinating between different teams to ensure the correct configuration is applied to production deployments proves to be a significant challenge.

There is often some coupling between a network function and the cloud platform or workload cluster it can run on. Different network vendors employ unique provisioning mechanisms, each equipped with its own APIs and SDKs. And as service providers expand to additional cloud providers and workload clusters, the complexity multiplies. Despite the adoption of various cloud-native technologies, we have observed customers spending months deploying network functions to production.

So, having a unified automation framework with common components, common workflows, and standardized templates can significantly reduce many of the above problems, and that solution is Nephio!

 

What is Nephio? 

Nephio's goal is to deliver carrier-grade, simple, and open Kubernetes-based cloud-native intent automation. It achieves this by implementing a single unified platform for automation using intent-based, declarative configuration with active reconciliation.

 

Features of Nephio 

  • Kubernetes as underlying platform – Kubernetes as the fundamental underlying platform with its orchestration capabilities.
  • Intent Driven – It is an approach based on high-level goals rather than detailed instructions. For instance, deploy a network function rather than providing step-by-step instructions to provision a NF. Nephio enables users to articulate high-level objectives transitioning away from manual granular configurations. This intent-driven approach simplifies network function automation, making it user-friendly and less error-prone.
  • Declarative & CaD – The automation is declarative with the help of Configuration-as-Data; it understands the user’s intent and helps setup the cloud and edge workloads. Configurations are managed in a standard way by kpt packaging. A declarative system is one which continuously evaluate current state with intended state and reconcile to realize the intended state.
  • Reconciliation – Control loop to ensure the running state always matches with the declaration and avoid any configuration drift. This is achieved by powerful Kubernetes CRD extensions and operator patterns.
  • Distributed Actuation – Distributed management of workloads so that the system is resilient. For example, CRD in edge cluster to manage workloads deployed on edge.
  • Gitops at heart – Nephio enables version control, collaboration and compliance, making network function management both efficient and transparent by embracing the principles of Gitops. 
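
As a conceptual sketch of the reconciliation loop mentioned above (this is illustrative TypeScript, not Nephio's actual implementation, which lives in Kubernetes controllers), the core idea is simply to compare declared and observed state and act on the difference:

interface State { replicas: number; }

// Stand-ins for "read the declared intent" and "read the live cluster state".
let declared: State = { replicas: 3 };
let observed: State = { replicas: 1 };

async function reconcile(): Promise<void> {
  const diff = declared.replicas - observed.replicas;
  if (diff !== 0) {
    // Drift detected: actuate changes until the observed state matches the intent.
    console.log(`Scaling by ${diff} to match the declared intent`);
    observed = { replicas: declared.replicas };
  }
}

// The loop runs continuously, so any later configuration drift is also corrected.
setInterval(() => { reconcile().catch(console.error); }, 30_000);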

 

Nephio’s approach to tackling complexity

Nephio is designed to handle the complexity of multi-vendor, multi-site deployments efficiently. Its Kubernetes-based framework acts as a unified control plane, offering a standardized method to manage different network functions irrespective of the underlying vendor or site-specific peculiarities. This uniform approach eliminates the need for bespoke solutions for each vendor, streamlining the entire process. 

By centralizing the management of network functions across various sites, Nephio enables seamless coordination and deployment, ensuring consistency and reducing the risk of configuration drift. 

One of the key innovations of Nephio is its adoption of machine-manipulable configurations. This approach facilitates automated, programmable, and repeatable configurations, which are crucial in managing complex network environments. 

 

Figure 1: Nephio Architecture for sample 5G Core deployment on multi-vendor cloud platforms

 

How does Nephio fit with existing orchestration solutions?

Nephio primarily focuses on domain and infrastructure orchestration in conformance with O-RAN and 3GPP, with the help of Kubernetes and its CRDs. It will complement many existing open-source projects, such as ONAP, in service orchestration to provide end-to-end automation in telecommunications networking.

Changing every existing automation layer is not feasible. However, integration with Nephio is possible for any Kubernetes-based system that works with KRM. Most cloud-native deployments today use Helm charts, which produce manifests that cannot be altered at runtime, making reconciliation difficult. With Nephio, network vendors can still use their Helm charts to provision network functions by implementing a Helm operator; though this will not reap all the benefits of Nephio, it can certainly help organizations adopt Nephio more swiftly.

 

Zinkworks and Nephio 

Zinkworks is currently engaged in dynamic research efforts focused on Nephio, which involves the implementation and execution of a diverse range of use cases across multiple cloud platforms. Our partnership with Google enables us to provide unparalleled expertise and support for Nephio projects. Through our collaboration with industry leaders, we are well-positioned to deliver cutting-edge solutions and drive innovation in the field of Nephio. Learn how your business can leverage Nephio by speaking with our Zinkworks team, contact marketing@zinkworks.com

 

References  - https://nephio.org/ 


How Generative AI is Shaping the Future of Business

In today's rapidly evolving technological landscape, artificial intelligence (AI) continues to drive innovation across various industries. One of the most exciting and transformative branches of AI is Generative AI. This cutting-edge technology has the potential to reshape the way businesses operate, create, and innovate. In this blog, we'll delve into what Generative AI is, its incredible capabilities, and how it can benefit businesses. We'll also introduce you to Zinkworks and their suite of Generative AI-powered products designed to revolutionize your business.
 

Understanding Generative AI 

Generative artificial intelligence, or Generative AI for short, is a subdivision of AI that focuses on creating content autonomously. This content can take various forms, such as text, images, music, and more. What sets Generative AI apart is its ability to learn from existing data and then generate new content that mimics the patterns and structures found in the training data. 

Generative AI leverages generative models, which are neural networks designed to generate data. These models can be trained on vast datasets, allowing them to understand and replicate the intricate details of the input data. As a result, they can produce remarkably realistic and coherent output. 

Most people's first experience of generative AI was through OpenAI's ChatGPT, which reached 100 million monthly active users in just two months after its launch in late 2022. This makes it the fastest-growing consumer application in history. We are still in the very early stages of this massive technological revolution as seen below. 

 

The History of AI

 

The Power of Generative AI for Businesses 

Generative AI is not just a technological marvel; it's a game-changer for businesses across industries. Here are some key ways in which Generative AI can benefit companies: 

 

  1. Accelerated Innovation

Generative AI can significantly speed up the innovation process. With the help of tools like Zinkworks' rApp Studio, business users can create applications without writing a single line of code. This means that domain experts can focus on their core competencies and rapidly bring their ideas to life, driving innovation at a breathtaking pace. 

 

  2. Enhanced Productivity and Security

Developers can take advantage of solutions like Zinkworks' F.A.S.T. to streamline the integration of open-source software packages. By automating package compatibility checks and suggesting validated alternatives, F.A.S.T. improves developer productivity, reduces security risks, and accelerates time-to-market. 

 

  3. In-Depth Market Insights

Market Analysis Reporter (MAR), another Zinkworks product powered by Generative AI, empowers businesses to make data-driven decisions with ease. MAR can instantly generate comprehensive customer intelligence reports, providing real-time insights into competitors, markets, and customers. This centralized intelligence platform enhances efficiency, accuracy, and collaboration among teams. 

 

The Future of Generative AI 

The future of Generative AI is incredibly promising. As AI algorithms and models continue to advance, we can expect even more impressive feats. Here's a glimpse of what the future may hold: 

 

  1. Personalization at Scale

Generative AI will enable businesses to deliver personalized experiences at scale. From tailored marketing content to individualized product recommendations, AI will make customer interactions more meaningful and effective. 

 

  2. Creative Content Generation

We can anticipate AI-generated content becoming even more sophisticated. Whether it's generating artwork, music, or written content, AI will play a pivotal role in creative industries, potentially blurring the lines between human and AI-generated art. 

 

  3. Increased Automation

Generative AI will automate complex tasks across various domains, from healthcare and finance to manufacturing and logistics. This automation will lead to greater efficiency, reduced costs, and improved accuracy. 

 

Unlocking the Potential with Zinkworks 

If you're eager to harness the power of Generative AI for your business, Zinkworks is here to help. Our suite of products, powered by Generative AI, can revolutionize the way you operate and innovate. Learn more about our Generative AI applications: https://zinkworks.com/gen-ai/  

 

In conclusion, Generative AI is ushering in a new era of possibilities for businesses. It's a technology that empowers innovation, enhances productivity, and offers insights that were once unimaginable. With Zinkworks' Generative AI-powered products, your business can stay ahead of the curve and thrive in this exciting AI-driven future. The time to embrace Generative AI is now. If you would like to speak with one of our team to learn more, email us at marketing@zinkworks.com.



Zinkworks Inaugural Hackermonth: A Month of Innovation

Over the month of October, Zinkworks hosted its very first Hackathon. The Hackathon, termed Hackermonth, launched on the 1st of October 2023. Over the course of the month, participants developed ideas, generated prototypes, and pitched their presentations to the company on Friday, 27th of October.

The Hackathon: An Innovation Marathon

A Hackathon is a company-wide event that anyone can participate in. The organisers propose a theme, and participants develop ideas and prototypes based on that theme. Hackathons can span a single evening, a weekend, or a couple of weeks and are opportunities for people to get together, network and, most importantly, have some fun.

Zinkworks laid out three themes for the Hackermonth. As a services company with many diverse areas of expertise, it was fitting to frame the themes around those areas. Spot & Solve asked participants to identify a problem in their domain and create a solution. Cross-Pollinate took a familiar solution and applied it to a new domain. And Pick’n’Pitch took an idea from a pre-defined list of problem statements we provided and devised a solution to it.

Why Zinkworks Embarked on a Hackathon Adventure 

Firstly, Hackathons are a big part of the software industry. Many Zinkworks employees attended Hackathons in previous companies and saw them as a great opportunity to build new relationships and get together with their colleagues outside of work.

Secondly, Zinkworks recently set up a cross-functional innovation team called The Foundry. This team incorporates Product, Engineering, R&D, and Marketing to generate new business ideas and expand the company's footprint. The best innovation teams are only as good as the people around them, so the team recognised that involving all Zinkworks employees early on was crucial to success. The Hackathon was the first big step to bring people through the idea process and give them a taste for innovation work.

 

Zinkworks’ Hackermonth Highlights 

Sparking Ideas 

The Hackermonth kicked off with an ideation workshop. 22 people joined the online workshop to brainstorm and develop ideas to pitch at the end of the Hackermonth. There were some great (and some wacky) ideas. Here are a few of them:

  • Christopher came up with an AI Market Trend Analyzer that curated articles based on user-defined categories and delivered them straight to your inbox. 
  • Jakub wanted to create a tool that supported database tuning for optimal performance. His initial research indicated that current solutions such as pgtune could be much more effective. 
  • Aleksei's idea was to build a dashboard and analytics tool to predict fuel supply based on information such as weather conditions. 

Crafting Prototypes 

The second workshop guided participants through developing a prototype for their specific idea.

They crafted vision statements by writing down important details such as key users, type of industry, problem-solution fit, and uniqueness. Then they used ChatGPT to develop a quick prototype in their preferred programming language, such as Bash or Python.

The Elevator Pitch 

The final workshop focused on developing everyone's pitches. It explored what not to do when presenting and looked at some best practices for pitching an idea, such as keeping it brief, not looking at the slides, and practicing as much as possible.

Pitch Day Revelations 

On Pitch Day, all participants gathered at Zinkworks’ Athlone office on Friday 27th October to kick off the pitches. We had the privilege of seeing the great ideas that competitors put hours of work into during October. And as usual, pizza capped off the evening.

Clevermiles - James McNamara

James presented a project that monitors a person’s driving habits and records their behaviour in relation to speeding, swerving, braking, and accelerating. Good behaviours earn rewards, which in turn encourages more good behaviour. The idea is largely applicable to the insurance industry, and it's something that James has explored extensively in the past.

Zinkworks Tool Directory - Paul O'Gara 

Paul developed an idea to create a tools directory for the company. This portal displays the core set of tools used in the company and contains features to collaborate and share personal tools and tips with other people. The Zinkworks Tool Directory (ZTD) solves the problem of having lots of tools and ideas but nowhere to share them with colleagues.

Kata Generator - Tristan Gutierrez

Tristan’s Kata Generator helps software developers with end-to-end test generation. Using generative AI, Kata Generator takes the user’s code and generates scenarios to help the developer create effective Karate tests. This lowers the bar for anyone starting with the Karate framework and greatly increases the speed of test generation to build better tests faster. 

JournAI - Team Bounty Hunters 

Team Bounty Hunters are Zinkworks' 2023 apprentice team. They developed an impressive application that uses generative AI to suggest alternative routes based on the user's interests. For example, someone interested in Castles could use JournAI to create a journey that passes by Irish castles and monuments. 

Auto AI - Krishan Ravisanka

Auto AI is a personalised car assistant, using the huge technological potential of the car's software and turning it into a personal device. Auto AI provides end-to-end insights for the user on their journey, from the moment they enter the car to the moment they park. 

Edge Runner - Rohit Raveendran

Rohit's Edge Runner is a concept that uses the collective power of small devices (such as Raspberry Pis) in each location to run Natural Language models. Using devices that are connected to edge networks reduces the need for extensive cloud-based infrastructure. This concept is applicable to smart home applications and IoT device clusters that want to use the full potential of NLP. 

Winners 

The Hackermonth had three categories of winners. Best Concept was awarded to James’ Clevermiles for its originality and problem-solution fit. Best Prototype was awarded to JournAI by Team Bounty Hunters for the degree of functionality and alignment with their concept. Finally, the Overall Winner prize was awarded to Tristan's Kata Generator for the quality of his pitch, his live demo, and his innovative use of the latest AI technologies.


Building Visual Dashboards in the Cloud

Big data is revolutionising how businesses operate, transforming decision-making processes across industries. Cloud technology has emerged as the catalyst that empowers organisations, regardless of size, to effortlessly manage vast volumes of data. This technological advancement not only slashes the maintenance overheads linked to massive datasets but also removes the necessity of recruiting and training specialised IT professionals. Even when dealing with large amounts of data, businesses can leverage cloud platforms' scalable computational and analytical capabilities to create engaging dashboards and uncover valuable new insights. Let's explore the challenges and steps to consider when dealing with large data and machine learning models.

Aggregating Big Data for Dashboards 

As datasets are becoming increasingly complex and challenging to understand, it has become vital for decision-makers to rely on data visualisation as a means of simplifying and interpreting large amounts of valuable information. Utilising this data, businesses can effectively understand difficult concepts, identify emerging patterns, and gain data-driven insights to make informed decisions.

For instance, BigQuery is a serverless and cost-effective enterprise data warehouse that works across clouds and scales with your data, with built-in ML/AI and BI for insights at scale. Once the data resides in BigQuery, SQL queries come to the rescue: you can perform data analysis and get insights from all your business data. This allows companies to make decisions in real time, streamline business reporting, and incorporate machine learning into data analysis to predict future business opportunities.
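
As a rough sketch of how a dashboard backend might pull aggregated data out of BigQuery (the project, dataset, table, and column names are hypothetical; this assumes the official @google-cloud/bigquery Node.js client):

import { BigQuery } from '@google-cloud/bigquery';

const bigquery = new BigQuery();

// Aggregate a hypothetical telemetry table into per-region daily totals
// that a dashboard can plot directly.
async function loadDashboardData() {
  const query = `
    SELECT region, DATE(event_time) AS day, SUM(bytes_sent) AS total_bytes
    FROM \`my-project.telemetry.network_events\`
    GROUP BY region, day
    ORDER BY day DESC
    LIMIT 1000`;
  const [rows] = await bigquery.query({ query });
  return rows;
}

loadDashboardData().then((rows) => console.log(`${rows.length} rows ready for the dashboard`));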

Applying Insights through Visual Best Practices 

Now having aggregated data sets, it’s time to focus on effective and intuitive visualisations. Visual best practices are key to developing informative visualisations that drive the audience to act. Here are some of the best practices for data visualisation.

  1. Know your audience and purpose: The best dashboards are built with their intended audience in mind. Ask the question, “Who am I designing this for?” and continue to check that the dashboard supports their business goals and encourages exploration.
  2. Consider display size: Where will the audience be viewing the dashboard the most? Desktop or mobile? Do some research upfront to inform your design and adjust accordingly for the best experience.
  3. Plan for fast load times: Optimise dashboards for faster load times, which can contribute to better engagement.
  4. Leverage the sweet spot: Always consider how your audience will “read” the dashboard. The dashboard should have a sensible “flow” and a logical layout of different pieces of information.
  5. Limit the number of views and colours: Consider using colour only if it enhances the analysis. Sometimes, too many colours can slow or even prevent analysis.
  6. Select useful information only: Do not overload the user with too much information. It is important to pick only data that will be useful and raise interest to the reader. Simplicity brings clarity.
  7. Add interactivity to encourage exploration: The power of dashboards lies in the author’s ability to queue up specific views for side-by-side analysis. Filters supercharge that analysis and engage your audience.
  8. Test the dashboard for usability: An important element of dashboard design is user testing. After building a prototype, ask your audience how they use the dashboard and if it helps them answer their pressing questions.

An example of a visual dashboard created by Zinkworks:

Scalable Performance on the Cloud 

Scalability is a common issue in data analytics; hosting your dashboards on a cloud platform ensures scalability and responsive user interfaces, even under heavy loads. Serverless platforms like Google Cloud Run can instantly scale dashboards to accommodate thousands of users with configured autoscaling.

Content Delivery Networks (CDNs) play a vital role in optimising dashboard delivery. By caching dashboard images and data close to users, CDNs enhance the user experience. Additionally, integrations with cloud monitoring tools enable you to track dashboard usage patterns and performance, helping you identify and address potential bottlenecks promptly.

Conclusion

Harnessing the scale, flexibility, and integrations offered by cloud platforms, organisations can effectively navigate the complexities of large datasets and deliver valuable insights rapidly through visual dashboards. With the right combination of data warehousing, analytics, and hosting, the cloud serves as the indispensable foundation for businesses seeking to thrive in the era of data-driven decision-making.

If you would like to learn more about building dynamic cloud-based visual dashboards, please contact our Zinkworks team by emailing: marketing@zinkworks.com 

 

Written By: Mahdi Khosravi & Sharmistha Sodhi


A Guide to Working with Polygons in React Using Mapbox

Since my university days, I have been passionate about exploring and working with maps. It all started with my final year project, which involved developing a location-based social network. Recently, I landed a position at Zinkworks where I was presented with an intriguing opportunity to create and analyse polygons in real-time for network statistics using Mapbox in React. Although it was a thrilling and rewarding experience, it also posed some challenges. Specifically, I encountered a dearth of resources and information on the advanced features of working with Polygons in Mapbox with React. So, I decided to share my practical experience by creating an informative blog.

 

What are Polygons?

Polygons are a vital instrument in the field of mapping and geospatial analysis, as they aid in communicating information regarding the spatial relationships among diverse features within a specific area. A polygon is a two-dimensional shape consisting of multiple straight-line segments that form a closed loop. Polygons can represent various geographic features, such as land parcels, bodies of water, and administrative boundaries.

In addition to representing geographical features, polygons in maps can also be used for data visualisation and analysis. They are often used in geographic information systems (GIS) to display and analyse data that is spatially referenced. For example, Polygons have the potential to serve as a visual tool to depict the perimeters of various networks along with their respective coverage areas. Additionally, they can facilitate the analysis of network utilisation in real-time, providing insights into the performance of each network.

 

Polygon Utilisation in Zinkworks NDO

Zinkworks is a cutting-edge technology firm at the forefront of utilising innovative tools and techniques to streamline network monitoring. Zinkworks' product, Network Device Orchestrator (NDO), uses polygons to analyse network usage in real time, enabling the orchestration of connected equipment and predicting and avoiding demand spikes before they impact the network. Moreover, NDO predicts the future position of the connected equipment and its future capacity needs, and with polygons it presents that data in a user-friendly UI. By leveraging advanced polygon features in NDO, Zinkworks has been able to enhance its network utilisation analysis capabilities, empowering enterprise users to align their internal business rules with the available network capacity.

 

 

How to use Polygons with Mapbox in React 

Let's install the dependencies first:

`yarn add mapbox-gl react-map-gl @types/react-map-gl`

And add the following code:

import { GeoJSONSourceOptions } from 'mapbox-gl';
import { Layer, Map, Source } from 'react-map-gl';

const polygons: GeoJSONSourceOptions['data'] = {
  type: 'FeatureCollection',
  features: [
    {
      id: 1,
      type: 'Feature',
      properties: {
        name: 'Area 51',
      },
      geometry: {
        type: 'MultiPolygon',
        coordinates: [
          [
            [
              [-7.950966710379959, 53.43146017152239],
              [-7.950966710379959, 53.42962140120139],
              [-7.947024882898944, 53.42962140120139],
              [-7.947024882898944, 53.43146017152239],
              [-7.950966710379959, 53.43146017152239],
            ],
          ],
        ],
      },
    },
  ],
};

And add the Map component:

<Map
  mapboxAccessToken={'MAPBOX_ACCESS_TOKEN_HERE'}
  mapStyle='mapbox://styles/mapbox/satellite-streets-v9'
  style={{
    width: '100%',
    height: '100%',
  }}
  initialViewState={{
    longitude: -7.949333965946522,
    latitude: 53.4313602035036,
    zoom: 15,
  }}
>
  <Source id='source' type='geojson' data={polygons}>
    <Layer
      {...{
        id: 'data',
        type: 'fill',
        paint: {
          'fill-color': 'rgb(5, 191, 5)',
          'fill-opacity': 0.4,
        },
      }}
    />
  </Source>
</Map>

 

Let’s run the code and check if a polygon is displaying on the map like the one below:

 

 

Let's go through a few essential properties:

id: a unique id to identify each polygon; this can be used to target polygons, for example when handling hover interactions.

properties: can be used to assign custom properties to polygons and access them later, such as the polygon's name, value, and other relevant attributes.

geometry: used to define and establish the type of polygon and specify its coordinates.

The Layer component is the fundamental building block for rendering diverse items, such as polygons and lines, with a range of properties, including fill-color and opacity. For example, you can use Layer to draw an outline for a polygon using the following code:

<Layer
  {...{
    id: 'outline',
    type: 'line',
    paint: {
      'line-color': 'red',
      'line-width': 2,
    },
  }}
/>
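
Building on the id property mentioned earlier, here is a rough sketch of hover handling (assuming react-map-gl v7, the polygons constant defined above, and an example highlight style): track the hovered feature id with onMouseMove and interactiveLayerIds, then add a second Layer whose filter matches that id.

import { useState } from 'react';
import { Layer, Map, Source } from 'react-map-gl';
import type { MapLayerMouseEvent } from 'react-map-gl';

const HoverablePolygonMap = () => {
  // Id of the polygon currently under the cursor, if any.
  const [hoveredId, setHoveredId] = useState<number | null>(null);

  const onMouseMove = (event: MapLayerMouseEvent) => {
    // event.features is only populated for layers listed in interactiveLayerIds.
    const feature = event.features && event.features[0];
    setHoveredId(feature ? (feature.id as number) : null);
  };

  return (
    <Map
      mapboxAccessToken={'MAPBOX_ACCESS_TOKEN_HERE'}
      mapStyle='mapbox://styles/mapbox/satellite-streets-v9'
      initialViewState={{ longitude: -7.949333965946522, latitude: 53.4313602035036, zoom: 15 }}
      interactiveLayerIds={['data']}
      onMouseMove={onMouseMove}
    >
      <Source id='source' type='geojson' data={polygons}>
        <Layer {...{ id: 'data', type: 'fill', paint: { 'fill-color': 'rgb(5, 191, 5)', 'fill-opacity': 0.4 } }} />
        {/* Highlight layer: only the hovered polygon matches this filter. */}
        <Layer
          {...{
            id: 'data-hover',
            type: 'fill',
            paint: { 'fill-color': 'rgb(5, 191, 5)', 'fill-opacity': 0.8 },
            filter: ['==', ['id'], hoveredId ?? -1],
          }}
        />
      </Source>
    </Map>
  );
};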

 

I hope this article was helpful. You can find the source code for the different features in the following repo:

Repo URL: https://github.com/Ahmdrza/mapbox-polygons-react

To learn more about Zinkworks NDO solution, visit: 

https://zinkworks.com/solutions/

Happy Coding!

 

Written By: Ahmad Raza


How can we bridge the chasm of Private 5G?

Private 5G offers many benefits for industries that require low latency, high reliability, and large bandwidth, such as manufacturing, healthcare, and transportation. However, despite the potential, its industry adoption has been slower than expected, mainly limited to early adopters in the industrial and academic innovation sectors. In this blog post, we will discuss some of the reasons for this slow adoption and how they can be overcome.

The Challenge  

One of the main reasons for the slow adoption of private or dedicated 5G networks is the lack of trust in 5G as a new technology. Many enterprises need more evidence of its reliability, especially when it involves sensitive data or critical operations, specifically in an industrial setting. The value of installing or upgrading networks to 5G seems to be mostly appreciated, but still, enterprises are falling back on proven network types for critical use cases.   

Another reason for the slow adoption of private 5G networks is the lack of industrial devices that support 5G connectivity. Although 5G-enabled smartphones and tablets are becoming more common, many industrial devices, such as sensors, cameras, robots, and AMRs, are still using legacy technologies. This limits the use cases and applications that can benefit from private 5G networks.  

A third reason for the slow adoption of private 5G networks is the lack of understanding regarding the benefits of 5G versus the capacities of existing technologies, such as Wi-Fi 6 or LTE. Many enterprises may not see the need to switch to private 5G networks if they are satisfied with their current wireless solutions or do not have demanding requirements. However, many enterprises do not have the tools or experience to understand which network type would best suit their current or future needs. Are Wi-Fi handover issues a challenge for the model of AGV planned? Is 5G needed to support new ML vision systems with onboard computing?   

Private 5G networks have great potential to transform industries and enable new levels of productivity, efficiency, and innovation. However, their adoption in the industry has been slow due to various challenges, such as lack of trust, lack of devices, and lack of understanding. To overcome these challenges, enterprises need to be educated on the opportunities and Industry 4.0 use cases that can be unlocked by not only 5G technology but hybrid deployments of 5G, Wi-Fi and other network technology.   

The Solution  

Often the critical element driving network planning is an accurate projection of the enterprise's future use cases and their impact on the network. It requires a full understanding of both the capabilities of the various network options and demand-side data throughput patterns.

The Zinkworks Networked Device Simulator (NDS) is built to address this challenge. It enables CSPs and connectivity resellers to rapidly, simply, and cheaply model single or multi-network deployments, such as Private 5G and Wi-Fi 6, and showcase their performance and suitability for a client's current or proposed industrial use cases.   

By combining data on both network capabilities and use case demands in a single 3D simulation, Sales Teams can easily drag and drop network infrastructure and networked equipment to visually demonstrate the impact of robots, vision systems, time-sensitive manufacturing systems, safety-critical applications, etc., in a virtual replication of a client's facility.   

Through using a bespoke, relatable, and accurate visual model of an enterprise's network needs, trust can be established that the proposed network solution is the right fit for their needs.   

If you would like to learn more about Zinkworks Networked Device Simulator send us a request on our contact page and our team will get back to you: www.zinkworks.com/contact/ 

Written by James McNamara.
