How Generative AI is Shaping the Future of Business
In today's rapidly evolving technological landscape, artificial intelligence (AI) continues to drive innovation across various industries. One of the most exciting and transformative branches of AI is Generative AI. This cutting-edge technology has the potential to reshape the way businesses operate, create, and innovate. In this blog, we'll delve into what Generative AI is, its incredible capabilities, and how it can benefit businesses. We'll also introduce you to Zinkworks and their suite of Generative AI-powered products designed to revolutionize your business.
Understanding Generative AI
Generative artificial intelligence, or Generative AI for short, is a branch of AI that focuses on creating content autonomously. This content can take various forms, such as text, images, music, and more. What sets Generative AI apart is its ability to learn from existing data and then generate new content that mimics the patterns and structures found in the training data.
Generative AI leverages generative models, which are neural networks designed to generate data. These models can be trained on vast datasets, allowing them to understand and replicate the intricate details of the input data. As a result, they can produce remarkably realistic and coherent output.
Most people's first experience of generative AI was through OpenAI's ChatGPT, which reached 100 million monthly active users just two months after its launch in late 2022, making it the fastest-growing consumer application in history. We are still in the very early stages of this massive technological revolution.
The Power of Generative AI for Businesses
Generative AI is not just a technological marvel; it's a game-changer for businesses across industries. Here are some key ways in which Generative AI can benefit companies:
- Accelerated Innovation
Generative AI can significantly speed up the innovation process. With the help of tools like Zinkworks' rApp Studio, business users can create applications without writing a single line of code. This means that domain experts can focus on their core competencies and rapidly bring their ideas to life, driving innovation at a breathtaking pace.
- Enhanced Productivity and Security
Developers can take advantage of solutions like Zinkworks' F.A.S.T. to streamline the integration of open-source software packages. By automating package compatibility checks and suggesting validated alternatives, F.A.S.T. improves developer productivity, reduces security risks, and accelerates time-to-market.
- In-Depth Market Insights
Market Analysis Reporter (MAR), another Zinkworks product powered by Generative AI, empowers businesses to make data-driven decisions with ease. MAR can instantly generate comprehensive customer intelligence reports, providing real-time insights into competitors, markets, and customers. This centralized intelligence platform enhances efficiency, accuracy, and collaboration among teams.
The Future of Generative AI
The future of Generative AI is incredibly promising. As AI algorithms and models continue to advance, we can expect even more impressive feats. Here's a glimpse of what the future may hold:
- Personalization at Scale
Generative AI will enable businesses to deliver personalized experiences at scale. From tailored marketing content to individualized product recommendations, AI will make customer interactions more meaningful and effective.
- Creative Content Generation
We can anticipate AI-generated content becoming even more sophisticated. Whether it's generating artwork, music, or written content, AI will play a pivotal role in creative industries, potentially blurring the lines between human and AI-generated art.
- Increased Automation
Generative AI will automate complex tasks across various domains, from healthcare and finance to manufacturing and logistics. This automation will lead to greater efficiency, reduced costs, and improved accuracy.
Unlocking the Potential with Zinkworks
If you're eager to harness the power of Generative AI for your business, Zinkworks is here to help. Our suite of products, powered by Generative AI, can revolutionize the way you operate and innovate. Learn more about our Generative AI applications: https://zinkworks.com/gen-ai/
In conclusion, Generative AI is ushering in a new era of possibilities for businesses. It's a technology that empowers innovation, enhances productivity, and offers insights that were once unimaginable. With Zinkworks' Generative AI-powered products, your business can stay ahead of the curve and thrive in this exciting AI-driven future. The time to embrace Generative AI is now. If you would like to speak with one of our team to learn more, email us at marketing@zinkworks.com
Zinkworks Inaugural Hackermonth: A Month of Innovation
Over the month of October, Zinkworks hosted its very first Hackathon. The Hackathon, termed Hackermonth, launched on the 1st of October 2023. Over the course of the month, participants developed ideas, built prototypes, and pitched their presentations to the company on Friday, the 27th of October.
The Hackathon: An Innovation Marathon
A Hackathon is a company-wide event that anyone can participate in. The organisers propose a theme, and participants develop ideas and prototypes based on that theme. Hackathons can span a single evening, a weekend, or a couple of weeks, and are opportunities for people to get together, network and, most importantly, have some fun.
Zinkworks laid out three themes for the Hackermonth. As a services company with many diverse areas of expertise, it was fitting for the themes to reflect those areas. Spot & Solve would identify a problem in a domain and create a solution. Cross-Pollinate would take a familiar solution and apply it to a new domain. And Pick'n'Pitch would take an idea from a pre-defined list of problem statements we provided and devise a solution to it.
Why Zinkworks Embarked on a Hackathon Adventure
Firstly, Hackathons are a big part of the software industry. Many Zinkworks employees attended Hackathons in previous companies and saw them as a great opportunity to build new relationships and get together with their colleagues outside of work.
Secondly, Zinkworks recently set up a cross-functional innovation team called The Foundry. This team incorporates Product, Engineering, R&D, and Marketing to generate new business ideas and expand the company's footprint. The best innovation teams are only as good as the people around them, so the team recognised that involving all Zinkworks employees early on was crucial to success. The Hackathon was the first big step to bring people through the idea process and give them a taste for innovation work.
Zinkworks’ Hackermonth Highlights
Sparking Ideas
The Hackermonth kicked off with an ideation workshop. 22 people joined the online workshop to brainstorm and develop ideas to pitch at the end of the Hackermonth. There were some great (and some wacky) ideas. Here are a few of them:
- Christopher came up with an AI Market Trend Analyzer that curated articles based on user-defined categories and delivered them straight to your inbox.
- Jakub wanted to create a tool that supported database tuning for optimal performance. His initial research indicated that current solutions such as pgtune could be much more effective.
- Aleksei's idea was to build a dashboard and analytics tool to predict fuel supply based on information such as weather conditions.
Crafting Prototypes
The second workshop brought participants through developing a prototype for their specific idea.
They crafted vision statements by writing down important details such as key users, type of industry, problem-solution fit, and uniqueness. They then used ChatGPT to develop a quick prototype in their preferred programming language, such as Bash or Python.
The Elevator Pitch
The final workshop focused on developing everyone's pitches. It explored what not to do when presenting and looked at some best practices when pitching an idea, such as keeping it brief, not looking at the slides, and practicing as much as possible.
Pitch Day Revelations
On Pitch Day, all participants gathered at Zinkworks' Athlone office on Friday the 27th of October for the pitches. We had the privilege of seeing the great ideas that competitors put hours of work into during October. And, as usual, pizza capped off the evening.
Clevermiles - James McNamara
James presented a project that monitors a person's driving habits and records their behaviour in relation to speeding, swerving, braking, and accelerating. Good behaviour earns rewards, which in turn encourages more good behaviour. The idea is largely applicable to the insurance industry, and it's something that James has extensively explored in the past.
Zinkworks Tool Directory - Paul O'Gara
Paul developed an idea to create a tools directory for the company. This portal displays the core set of tools used in the company and contains functions to collaborate and share personal tools and tips with colleagues. The Zinkworks Tool Directory (ZTD) provides a solution to the problem of having lots of tools and ideas but nowhere to share them.
Kata Generator - Tristan Gutierrez
Tristan’s Kata Generator helps software developers with end-to-end test generation. Using generative AI, Kata Generator takes the user’s code and generates scenarios to help the developer create effective Karate tests. This lowers the bar for anyone starting with the Karate framework and greatly increases the speed of test generation to build better tests faster.
JournAI - Team Bounty Hunters
Team Bounty Hunters are Zinkworks' 2023 apprentice team. They developed an impressive application that uses generative AI to suggest alternative routes based on the user's interests. For example, someone interested in Castles could use JournAI to create a journey that passes by Irish castles and monuments.
Auto AI - Krishan Ravisanka
Auto AI is a personalised car assistant that uses the huge technological potential of the car's software and turns it into a personal device. Auto AI provides end-to-end insights for the user on their journey, from the moment they enter the car to the moment they park.
Edge Runner - Rohit Raveendran
Rohit's Edge Runner is a concept that uses the collective power of small devices (such as Raspberry Pis) in each location to run Natural Language models. Using devices that are connected to edge networks reduces the need for extensive cloud-based infrastructure. This concept is applicable to smart home applications and IoT device clusters that want to use the full potential of NLP.
Winners
The Hackermonth had three categories of winners. Best Concept was awarded to James' Clevermiles for its originality and problem-solution fit. Best Prototype was awarded to JournAI by Team Bounty Hunters for the degree of functionality and alignment with their concept. Finally, the Overall Winner prize was awarded to Tristan's Kata Generator for the fantastic quality of his pitch, his live demo, and his innovative use of the latest AI technologies.
Building Visual Dashboards in the Cloud
Big data is revolutionising how businesses operate, transforming decision-making processes across industries. Cloud technology has emerged as the catalyst that empowers organisations, regardless of size, to effortlessly manage vast volumes of data. This technological advancement not only slashes the maintenance overheads linked to massive datasets but also removes the need to recruit and train specialised IT professionals. Even when dealing with large amounts of data, businesses can leverage cloud platforms' scalable computational and analytical capabilities to create engaging dashboards and uncover valuable new insights. Let's explore the challenges and steps to consider when dealing with large data and machine learning models.
Aggregating Big Data for Dashboards
As datasets are becoming increasingly complex and challenging to understand, it has become vital for decision-makers to rely on data visualisation as a means of simplifying and interpreting large amounts of valuable information. Utilising this data, businesses can effectively understand difficult concepts, identify emerging patterns, and gain data-driven insights to make informed decisions.
For instance, BigQuery is a serverless and cost-effective enterprise data warehouse that works across clouds, scales with your data, and offers built-in ML/AI and BI for insights at scale. Once the data resides in BigQuery, SQL queries come to the rescue: you can perform data analysis and get insights from all your business data. This allows companies to make decisions in real-time, streamline business reporting, and incorporate machine learning into data analysis to predict future business opportunities.
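To make this concrete, here is a minimal, hedged sketch of running such an aggregation from a Java backend with Google's google-cloud-bigquery client library. The project, dataset, table, and field names are hypothetical and for illustration only:

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.QueryJobConfiguration;
import com.google.cloud.bigquery.TableResult;

public class DailySalesReport {
    public static void main(String[] args) throws Exception {
        // Authenticates with application-default credentials.
        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

        // Aggregate a hypothetical orders table into daily totals for a dashboard.
        String sql = "SELECT DATE(order_timestamp) AS day, SUM(order_value) AS total_sales "
                + "FROM `my-project.sales_dataset.orders` "
                + "GROUP BY day ORDER BY day DESC LIMIT 30";

        TableResult result = bigquery.query(QueryJobConfiguration.newBuilder(sql).build());
        result.iterateAll().forEach(row ->
                System.out.println(row.get("day").getStringValue()
                        + ": " + row.get("total_sales").getDoubleValue()));
    }
}

A query like this can back a dashboard tile directly, with the heavy aggregation pushed down to BigQuery rather than done in the application.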
Applying Insights through Visual Best Practices
Now that we have aggregated datasets, it's time to focus on effective and intuitive visualisations. Visual best practices are key to developing informative visualisations that drive the audience to act. Here are some of the best practices for data visualisation.
- Know your audience and purpose: The best dashboards are built with their intended audience in mind. Ask the question, “Who am I designing this for?” and continue to check that the dashboard supports their business goals and encourages exploration.
- Consider display size: Where will the audience be viewing the dashboard the most? Desktop or mobile? Do some research upfront to inform your design and adjust accordingly for the best experience.
- Plan for fast load times: Optimise dashboards for faster load times, which can contribute to better engagement.
- Leverage the sweet spot: Always consider how your audience will “read” the dashboard. The dashboard should have a sensible “flow” and a logical layout of different pieces of information.
- Limit the number of views and colours: Consider using colour only if it enhances the analysis. Sometimes, too many colours can slow or even prevent analysis.
- Select useful information only: Do not overload the user with too much information. It is important to pick only data that will be useful and raise interest to the reader. Simplicity brings clarity.
- Add interactivity to encourage exploration: The power of dashboards lies in the author’s ability to queue up specific views for side-by-side analysis. Filters supercharge that analysis and engage your audience.
- Test the dashboard for usability: An important element of dashboard design is user testing. After building a prototype, ask your audience how they use the dashboard and if it helps them answer their pressing questions.
An example of a visual dashboard created by Zinkworks:
Scalable Performance on the Cloud
Scalability is a recurring challenge in data analytics; hosting your dashboards on a cloud platform ensures scalability and responsive user interfaces, even under heavy load. Serverless platforms like Google Cloud Run can instantly scale dashboards to accommodate thousands of users with configured autoscaling.
Content Delivery Networks (CDNs) play a vital role in optimising dashboard delivery. By caching dashboard images and data close to users, CDNs enhance the user experience. Additionally, integrations with cloud monitoring tools enable you to track dashboard usage patterns and performance, helping you identify and address potential bottlenecks promptly.
Conclusion
Harnessing the scale, flexibility, and integrations offered by cloud platforms, organisations can effectively navigate the complexities of large datasets and deliver valuable insights rapidly through visual dashboards. With the right combination of data warehousing, analytics, and hosting, the cloud serves as the indispensable foundation for businesses seeking to thrive in the era of data-driven decision-making.
If you would like to learn more about building dynamic cloud-based visual dashboards, please contact our Zinkworks team by emailing: marketing@zinkworks.com
Written By: Mahdi Khosravi & Sharmistha Sodhi
A Guide to Working with Polygons in React Using Mapbox
Since my university days, I have been passionate about exploring and working with maps. It all started with my final year project, which involved developing a location-based social network. Recently, I landed a position at Zinkworks where I was presented with an intriguing opportunity to create and analyse polygons in real-time for network statistics using Mapbox in React. Although it was a thrilling and rewarding experience, it also posed some challenges. Specifically, I encountered a dearth of resources and information on the advanced features of working with Polygons in Mapbox with React. So, I decided to share my practical experience by creating an informative blog.
What are Polygons?
Polygons are a vital instrument in the field of mapping and geospatial analysis, as they aid in communicating information regarding the spatial relationships among diverse features within a specific area. A polygon is a two-dimensional shape consisting of multiple straight-line segments that form a closed loop. Polygons can represent various geographic features, such as land parcels, bodies of water, and administrative boundaries.
In addition to representing geographical features, polygons in maps can also be used for data visualisation and analysis. They are often used in geographic information systems (GIS) to display and analyse data that is spatially referenced. For example, polygons can serve as a visual tool to depict the perimeters of various networks along with their respective coverage areas. Additionally, they can facilitate the analysis of network utilisation in real-time, providing insights into the performance of each network.
Polygon Utilisation in Zinkworks NDO
Zinkworks is a cutting-edge technology firm at the forefront of utilising innovative tools and techniques to streamline network monitoring. Zinkworks' product, Network Device Orchestrator (NDO), uses polygons to analyse network usage in real-time, enabling the orchestration of connected equipment and predicting and avoiding demand spikes before they impact the network. Moreover, NDO predicts the future position of connected equipment and its future capacity needs, and with polygons it presents that data in a user-friendly UI. By leveraging advanced polygon features in NDO, Zinkworks has been able to enhance its network utilisation analysis capabilities, empowering enterprise users to align their internal business rules to the available network capacity.
How to Use Polygons with Mapbox in React
Let's install the dependencies first:
`yarn add mapbox-gl react-map-gl @types/react-map-gl`
And add the following code:
import { GeoJSONSourceOptions } from 'mapbox-gl';
import { Layer, Map, Source } from 'react-map-gl';

const polygons: GeoJSONSourceOptions['data'] = {
  type: 'FeatureCollection',
  features: [
    {
      id: 1,
      type: 'Feature',
      properties: {
        name: 'Area 51',
      },
      geometry: {
        type: 'MultiPolygon',
        coordinates: [
          [
            [
              [-7.950966710379959, 53.43146017152239],
              [-7.950966710379959, 53.42962140120139],
              [-7.947024882898944, 53.42962140120139],
              [-7.947024882898944, 53.43146017152239],
              [-7.950966710379959, 53.43146017152239],
            ],
          ],
        ],
      },
    },
  ],
};
And add the Map component:
<Map
  mapboxAccessToken={'MAPBOX_ACCESS_TOKEN_HERE'}
  mapStyle='mapbox://styles/mapbox/satellite-streets-v9'
  style={{
    width: '100%',
    height: '100%',
  }}
  initialViewState={{
    longitude: -7.949333965946522,
    latitude: 53.4313602035036,
    zoom: 15,
  }}
>
  <Source id='source' type='geojson' data={polygons}>
    <Layer
      {...{
        id: 'data',
        type: 'fill',
        paint: {
          'fill-color': 'rgb(5, 191, 5)',
          'fill-opacity': 0.4,
        },
      }}
    />
  </Source>
</Map>
Let's run the code and check that a polygon is displayed on the map.
Let's go through a few essential properties:
id: a unique id identifying each polygon; it can be used to target polygons in interactions such as hovering or clicking.
properties: used to assign custom properties to polygons and access them later, such as the polygon's name, value, and other relevant attributes.
geometry: used to define the type of polygon and specify its coordinates.
The Layer component is fundamental for rendering diverse items, such as polygons and lines, with a range of properties, including fill-color and opacity. For example, you can use Layer to draw an outline for a polygon with the following code:
<Layer
  {...{
    id: 'outline',
    type: 'line',
    paint: {
      'line-color': 'red',
      'line-width': 2,
    },
  }}
/>
I hope this article was helpful. You can find the source code for different features in the following repo.
Repo URL: https://github.com/Ahmdrza/mapbox-polygons-react
To learn more about Zinkworks NDO solution, visit:
https://zinkworks.com/solutions/
Happy Coding!
Written By: Ahmad Raza
How can we bridge the chasm of Private 5G?
Private 5G offers many benefits for industries that require low latency, high reliability, and large bandwidth, such as manufacturing, healthcare, and transportation. However, despite the potential, its industry adoption has been slower than expected, mainly limited to early adopters in the industrial and academic innovation sectors. In this blog post, we will discuss some of the reasons for this slow adoption and how they can be overcome.
The Challenge
One of the main reasons for the slow adoption of private or dedicated 5G networks is the lack of trust in 5G as a new technology. Many enterprises need more evidence of its reliability, especially when it involves sensitive data or critical operations, specifically in an industrial setting. The value of installing or upgrading networks to 5G seems to be mostly appreciated, but still, enterprises are falling back on proven network types for critical use cases.
Another reason for the slow adoption of private 5G networks is the lack of industrial devices that support 5G connectivity. Although 5G-enabled smartphones and tablets are becoming more common, many industrial devices, such as sensors, cameras, robots, and autonomous mobile robots (AMRs), still use legacy technologies. This limits the use cases and applications that can benefit from private 5G networks.
A third reason for the slow adoption of private 5G networks is the lack of understanding of the benefits of 5G versus the capabilities of existing technologies, such as Wi-Fi 6 or LTE. Many enterprises may not see the need to switch to private 5G networks if they are satisfied with their current wireless solutions or do not have demanding requirements. However, many enterprises do not have the tools or experience to understand which network type would best suit their current or future needs. Are Wi-Fi handover issues a challenge for the planned model of AGV (automated guided vehicle)? Is 5G needed to support new ML vision systems with onboard computing?
Private 5G networks have great potential to transform industries and enable new levels of productivity, efficiency, and innovation. However, their adoption in the industry has been slow due to various challenges, such as lack of trust, lack of devices, and lack of understanding. To overcome these challenges, enterprises need to be educated on the opportunities and Industry 4.0 use cases that can be unlocked by not only 5G technology but hybrid deployments of 5G, Wi-Fi and other network technology.
The Solution
Often the critical element driving network planning is the accurate projection of the enterprise's future use cases and their impact on the network. This requires a full understanding of both the capabilities of the various network options and the demand-side data throughput patterns.
The Zinkworks Networked Device Simulator (NDS) is built to address this challenge. It enables CSPs and connectivity resellers to rapidly, simply, and cheaply model single or multi-network deployments, such as Private 5G and Wi-Fi 6, and showcase their performance and suitability for a client's current or proposed industrial use cases.
By combining data on both network capabilities and use case demands in a single 3D simulation, Sales Teams can easily drag and drop network infrastructure and networked equipment to visually demonstrate the impact of robots, vision systems, time-sensitive manufacturing systems, safety-critical applications, etc., in a virtual replication of a client's facility.
Through using a bespoke, relatable, and accurate visual model of an enterprise's network needs, trust can be established that the proposed network solution is the right fit for their needs.
If you would like to learn more about Zinkworks Networked Device Simulator send us a request on our contact page and our team will get back to you: www.zinkworks.com/contact/
Written by James McNamara.
Exploring the Intersection of Industry 4.0 and 5G
As you read the title, I'm sure you have heard these words in a blog or talk before. I bet they feel like the latest series of buzzwords. After reading this article, you will be more familiar with these terms and see that they are not just empty buzzwords. Industry 4.0 and 5G are two innovative technological advancements that have the potential to change the way we live and work. Industry 4.0 refers to the fourth industrial revolution and is characterised by integrating advanced technologies such as artificial intelligence, the Internet of Things (IoT), and robotics into traditional manufacturing and industrial processes. 5G, on the other hand, is the fifth generation of mobile networks, which promises to provide faster and more reliable communication than previous generations.
Industry 4.0 has the potential to revolutionize manufacturing and production processes by integrating IoT, advanced sensors, and AI. This leads to improved production speed, reduced costs, and increased competitiveness. In addition, Industry 4.0 enables companies to collect and analyse vast amounts of data, which can be used to improve the quality of their products and services. With the use of robotics and automation in the production process, highly automated factories can operate with minimal human intervention, 24/7, leading to increased productivity, improved quality, and reduced costs.
The integration of IoT in Industry 4.0 allows for the creation of smart factories, where machines and devices are equipped with real-time sensors that collect and transmit data. This data can then be analysed using AI algorithms, which can optimize production processes, improve quality, and reduce downtime. Furthermore, the use of advanced digital technologies such as 3D printing, augmented and virtual reality, and cloud computing can help companies design, test, and produce products more efficiently and cost-effectively. Despite the potential loss of jobs due to automation and the need for workers to adapt to new technologies and skills, Industry 4.0 is expected to significantly impact the global economy and how we live and work.
5G provides faster and more reliable communication, which is crucial for the successful implementation of Industry 4.0. With 5G, manufacturers can communicate with their machines and devices in real-time, allowing for improved monitoring, control, and automation of the production process. This can lead to increased efficiency and enhanced productivity. The key benefits of 5G include increased speed, reduced latency, and improved capacity, allowing for the connection of many devices simultaneously. This enhanced capacity will be crucial for successfully implementing the Internet of Things (IoT), as it will allow for the connection of millions of devices, from smart homes to industrial equipment. It has the potential to revolutionise industries such as healthcare, education, and entertainment, allowing for the creation of new products and services that were previously not possible.
5G is a significant technological advancement that has the potential to impact the way we live and work. However, it also raises some concerns, such as the need for substantial investments in infrastructure and the potential for security and privacy issues. Despite these challenges, 5G is expected to significantly impact the global economy and provide new opportunities for innovation and growth.
In conclusion, Industry 4.0 and 5G are two technological advancements that can change the way we live and work. Industry 4.0 has the potential to revolutionize manufacturing and production processes, leading to increased efficiency, improved quality, and reduced costs. With the help of 5G, manufacturers can communicate with their machines and devices in real-time, leading to improved monitoring, control, and automation of the production process. Despite the potential challenges associated with these advancements, such as the loss of jobs due to automation, Industry 4.0 and 5G are expected to significantly impact the global economy and how we live and work.
At Zinkworks we have created a product called Networked Device Orchestrator (NDO) which is purpose-built for Industry 4.0. It is designed to enable the orchestration of connected equipment, predicting and avoiding demand spikes before they impact the network, and empowering enterprise users to align their internal business rules to the available network capacity. Learn more: www.zinkworks.com/solutions/
Written by Aaron Fortune.
Kubernetes Operators: When not to create one
Over the past few years in the realm of DevOps, and Kubernetes in particular, the Kubernetes Operator pattern has been a trending topic. In my personal experience across different projects and applications, it was evident that some software development teams and organizations were too quick to jump on the Kubernetes Operator bandwagon without analyzing the real-world problem they were trying to solve through the implementation of an operator.
More often than not, a Kubernetes operator was implemented only because it was seen as the trending topic or implementation pattern in the Kubernetes domain at that point. I have seen software development teams go to the extent of implementing wrapper CRDs or controllers around an open-source Kubernetes operator, so that certain organizational practices and standards were encapsulated in these wrapper controllers while giving the organization the ability to deploy and use the open-source operator in its application stack. Simply put, all that cost, time, and effort from the software developers was invested only to gain the ability to use a specific, well-known Kubernetes operator in the application stack, while they had completely forgotten why anyone should really have a Kubernetes operator, or what problems an operator is fundamentally supposed to solve in an application.
The purpose of this article is not to criticize the use of the Kubernetes Operator pattern or portray it as a bad practice. It is in fact quite the opposite. The Kubernetes Operator pattern is a highly useful concept in a Kubernetes stack, which helps DevOps engineers and software developers tackle certain complexities of the application lifecycle when deployed on a Kubernetes platform. What we rather intend to illustrate in this article are the real-world use cases the Kubernetes Operator pattern is supposed to solve, by first discussing the use cases it might not be applicable to.
Let us first look at what is meant by the operator pattern in Kubernetes. There are three key points an operator implementation should satisfy.
- It should be a piece of software that automates a repeatable task of a stateful application and removes the need for a human operator.
- It should be a software extension to the Kubernetes API that makes use of a Custom Resource.
- It should follow Kubernetes principles, mainly the Control Loop.
While all three points above are equally important, we think the most important of the three is the first one.
Now let us jump into our main topic: when should we not create an operator, or, to phrase it more diplomatically, "When should we rethink our decision to write an operator?"
Is my application a stateful application?
Even though the official Kubernetes documentation on the operator pattern does not strictly require the application you are building the operator for to be stateful, what we should understand is that an application which does not have state will not really need a controller to handle its deployment lifecycle. Notice how we have used the term "controller" here instead of "operator". We will discuss this in a moment.
The simple reason is that if a particular application does not have state, you can pretty much use the native features of Kubernetes, such as a Deployment controller, to handle the full lifecycle of that application. But when an application is stateful, there is a possibility that you cannot freely replace a given replica of that application instance with a new replica (as is supposed to happen in the Kubernetes world all the time).
There is a chance that additional work, such as leader election, handling checkpoints, managing quorums, or restoring a backup, must be done when bringing up a new replica. Some of these tasks may not essentially be part of the application code itself; maybe they are manual steps that a human operator must execute. This is where a Kubernetes operator can really pay off: you can write a piece of software to encapsulate all that domain- and application-specific logic and then attach it to the Kubernetes API as an extension.
Where do the CRDs and the Control Loop come into play in this context? It is quite simple: CRDs pave the way for the end user to declare the desired state of the application that the operator has to maintain. The end user creates or updates a CR declaring the spec of the desired state, and through the control loop the operator tries to bring the application to that desired state. It also reports the status of the application back to the same CR.
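To make the control-loop idea concrete, here is a deliberately simplified, framework-free Java sketch. A real operator would watch custom resources through the Kubernetes API (for example, with the Java Operator SDK or Go's controller-runtime); here the desired and observed states are plain in-memory maps, so the toy runs anywhere:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ToyControlLoop {
    // Desired state, as a user would declare it in a CR's spec.
    private final Map<String, Integer> desiredReplicas = new ConcurrentHashMap<>();
    // Observed state, as a real operator would read it from the cluster.
    private final Map<String, Integer> observedReplicas = new ConcurrentHashMap<>();

    // The equivalent of "kubectl apply" on a custom resource.
    public void declare(String app, int replicas) {
        desiredReplicas.put(app, replicas);
    }

    // One pass of the reconcile loop: drive observed state towards desired state.
    public void reconcile() {
        desiredReplicas.forEach((app, desired) -> {
            int observed = observedReplicas.getOrDefault(app, 0);
            if (observed != desired) {
                // Domain-specific work a human operator would otherwise do:
                // scale up/down, run a leader election, restore a backup, ...
                System.out.printf("%s: observed=%d desired=%d -> acting%n", app, observed, desired);
                observedReplicas.put(app, desired); // pretend the action succeeded
            }
        });
    }

    public static void main(String[] args) throws InterruptedException {
        ToyControlLoop loop = new ToyControlLoop();
        loop.declare("postgres", 3); // the user declares desired state via a CR
        for (int i = 0; i < 3; i++) {
            loop.reconcile();
            Thread.sleep(1000);
        }
    }
}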
Does this mean we cannot write a similar operator for a stateless application? You absolutely can. However, in that case it should rather be considered a controller, not an operator. Also, if you are thinking of writing such a controller to manage the lifecycle of a stateless application, it would be overkill, because there should be other means to achieve the same thing using the native Kubernetes API without having to extend it with a whole new set of CRDs. Or it could even mean that your actual problem lies somewhere else and you are misusing the operator pattern, either because you want to use a Kubernetes object to handle a non-Kubernetes resource in your application, or because you are trying to fix a configuration management problem with it.
Any code is a liability
There are many frameworks available now to make the implementation of an operator an easy task. However, it still requires a certain effort from the developers to understand the complexity of the application and the actual business requirement you need to address through the operator’s control loop.
It is also common nowadays for teams and organizations to develop operators for third-party open-source applications that they do not own. The logic written into an operator is an imperative workflow that couples your application's business logic into the Kubernetes control loop. This may not look concerning at first, but the team that develops the operator will eventually be responsible for maintaining the operator codebase to support changes in the underlying application that can impact its deployment lifecycle. Even though writing an operator is not a difficult task now, given the availability of different frameworks, it will still be more complex than writing a piece of configuration that supports the deployment lifecycle using native Kubernetes resources.
Also, unless you have a clear understanding of the changes coming to the application you are writing the operator for, you are putting yourself in a position where you have to invest dedicated time and effort just to keep the operator codebase up to date with those changes as they come.
This is why it is recommended to leave the decision of writing an operator to the application owners, or at least to gain enough understanding of the future changes the application may face before starting to write an operator for it yourself.
In either case, it is better to be liable for a piece of configuration that you can easily modify than for an entire operator codebase. So it is wise to seek alternative ways of handling complexities in the lifecycle of an application rather than writing a piece of code that works as an operator and making yourself liable for it, especially if you do not own that application.
Resource Concerns: Operators are not exactly part of your actual workload
In certain cases, you may have to run your application in a resource-constrained cluster. When you have an operator to manage the lifecycle of this application, the operator itself will require a certain amount of resources (CPU, memory, network) allocated to it from the same cluster where you run the workload.
What we expect from an operator is to maintain the lifecycle of a given number of application instances, by reconciling their state to a desired state, as specified by the user through the custom resource. Therefore, operators are a part of your control plane rather than the workload itself. Now imagine a situation where you must provision a large number of application instances.
This means a few things,
- You will have to create an equal number of custom resource objects.
- The operator, or operator instances, will be running reconciliation loops to handle the state of all the application instances represented by each individual custom object. This could be a resource-intensive operation.
- You are using the Kubernetes etcd store to keep track of all of the custom objects and your operator will be communicating with Kubernetes API quite frequently.
- Whenever your custom objects are already in the desired state, the operator will sit idle, but it still consumes resources from the cluster.
As you can see, having an operator to manage such a large number of custom objects could impact your cluster resource wise, in multiple ways.
Therefore, when you want to run your stateful application in a Kubernetes stack, writing an operator may sound appealing but you must remember the impact it may have on your cluster resources which is primarily meant for running your workload.
Security Concerns: Is it worth running an operator with elevated privileges for the duration of your application?
This is one of the reasons why writing an operator should be your last resort. An operator is a highly privileged entity compared to your actual application.
If we take a step back and consider what your operator does, all it does is maintain the state of your application instances at the desired state specified by the CR. To do so, it requires a certain level of privileges against your Kubernetes API. Depending on the type of resource objects the operator is supposed to manage, you can grant permissions to specific Kubernetes resources in either "namespace" scope or "cluster" scope. In practice, what we have seen in certain existing open-source operators is that they sometimes get deployed with broader RBAC permissions to cluster resources than they require.
Nevertheless, what is important here is that the operator will be running with all those permissions to your Kubernetes cluster for the duration of your application, even though actual reconciliation of the custom resources happens only occasionally. So the amount of time the operator actually requires these permissions to carry out its functionality is only a fraction of the time it will be running.
Considering these aspects, if you are thinking of writing an operator that will mostly perform one-off tasks or tasks that happen infrequently (e.g., backup, restore), it could be worthwhile to analyze the possibility of using the Jobs or CronJobs available in the standard Kubernetes API rather than investing all your effort in building something as complex as a Kubernetes operator.
Configuration Management: Operators should not be a solution to your configuration management problem
Something most operator implementations have in common is that they help end-users manage application configuration through a well-structured object like a custom resource. A custom resource has a predefined schema, managed through a custom resource definition (CRD). This CRD is dedicated to the application, so validating the inputs a user can provide to configure the application state is more controlled and streamlined. The standard ways a user can pass application configuration, by contrast, are a ConfigMap or a Kubernetes Secret, which are more generic approaches.
A frequent pattern we have seen is operators implemented just to take advantage of this structured configuration. These operators mainly aim to expose the application configuration via a CRD, so there is control over user inputs. They should rather be called controllers than operators, because they do not essentially do anything specific to handle application state during the application lifecycle. While anyone is free to write a piece of software that handles an application's configuration through a CRD, it may also be overkill, because much more generic tooling is available in the Kubernetes ecosystem to achieve the same thing. For example, for someone using Helm to manage the deployment and lifecycle of an application, features such as "--verify" or JSON-schema validation are available to validate user inputs, which can eventually be mapped to a generic resource such as a ConfigMap.
The question we should really ask here is: "Is it worth writing an application-specific piece of software to manage configuration, when much more generic tooling is available to specifically address configuration-related issues of applications deployed on Kubernetes?"
These are the key areas we think a DevOps engineer or a software developer should consider before starting to write a Kubernetes operator. We would like to end this article with the following note: "The fact that it is possible to write a Kubernetes operator as a solution to a given problem does not always mean you should write one."
References:
https://kubernetes.io/docs/concepts/extend-kubernetes/operator/
https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/
https://thenewstack.io/kubernetes-when-to-use-and-when-to-avoid-the-operator-pattern/
https://sdk.operatorframework.io/docs/best-practices/best-practices/
Written by Nayanajith Chandradasa.
Introduction to Private 5G Networks
A Private 5G Network is an enterprise-dedicated network tailored to deliver the latest advancements of 4G LTE and 5G technology and drive the digital transformation of businesses and organisations across industries.
Private 5G provides secured communications with high bandwidth, low latency, and reliable coverage to connect people, machines, and devices. Private 5G solutions are ideal for IoT-intensive applications like intelligent manufacturing and sensitive environments like ports and banks.
Private 5G is more secure and efficient than public 5G and LTE networks: it provides connections only to authorised users within the enterprise and processes the generated data locally, in isolation from the public network. It is also easy to deploy, operate, and scale to meet operational needs.
Spectrum, Coverage and Speed
The range of a private 5G network can vary from a few thousand square feet to thousands of square kilometres, depending on the power of the radio transmitter, the band used, and the user's requirements. A typical 5G radio operates on the low band (below 1 GHz, the widest coverage), the mid band (roughly 1–6 GHz, balancing coverage and speed), or the high band (mmWave, above 24 GHz, the highest speeds over short ranges).
Deployment Scenarios
The 5G system is disaggregated into independent components known as "network functions" (NFs) that communicate through standard APIs. These NFs are responsible for the operation of the mobile network, including authentication, IP address allocation, policy control, and user data management. This Service-Based Architecture makes the 5G system very flexible and able to take on new services and applications to meet the needs of any industry.
The Control and User Plane Separation adopted in the 5G architecture allows operators to separate the 5G control functions from the data forwarding function. For example, the control plane can be deployed centrally, whereas the user plane function (UPF) can be deployed flexibly at any location within the network to accommodate various data processing requirements.
The private 5G network architecture can be deployed in different scenarios to meet each customer's needs. We can categorise the deployment based on the level of isolation and integration with the public network into three scenarios as follows:
In the first scenario, the enterprise hosts and operates the complete 5G network (gNB, UPF, 5GC CP, UDM, MEC), and the network is physically isolated from the public network. Despite the high cost associated with this deployment, it guarantees complete data security, reduces the likelihood of a data breach, and provides ultra-low-latency connections.
In the second scenario, a shared private 5G network uses part of an operator's public network to reduce installation costs. Based on the business needs, the customer can choose which components they host and manage locally and which elements are shared with the mobile carrier. MEC and UPF may be deployed on-site on premises like smart factories, stadiums, and cinemas, enabling a private 5G architecture with minimal latency. In addition, the business owner can control the radio access network (RAN) locally to allow quick and reliable connections.
In the third scenario, depending on the application's requirements, a RAN may be installed on-site and connected to the public network via a dedicated slice that provides the private 5G service.
Private 5G Use Cases
Zinkworks provides a Networked OT Orchestration platform purpose-built for Industry 4.0 and private 5G. Customers can use various automation solutions and ML-based prediction models developed by Zinkworks to monitor the network's performance and manage network resources more efficiently and securely. In addition, customers can create policy and service profiles with customised bandwidth, latency, and quality of service (QoS) to meet every application's needs.
Written by Mohamed Ibrahim.
RxJava Reactive Streams
Introduction to Reactive Streams
Before we talk about what RxJava is and fully understand it, we must first comprehend some concepts and principles behind the creation of the API. In reality, RxJava is just part of a broader project called ReactiveX, which applies the concepts explained here not only to Java but also to other platforms such as Python, Go, Groovy, C#, and many others. It is worth mentioning that ReactiveX is not the only project to implement these ideas: the Spring Boot Framework also has its own implementation, called Spring WebFlux (a result of the Spring Project Reactor).
Reactive Streams is an initiative to provide a standard for asynchronous stream processing with non-blocking backpressure.
— reactive-streams.org
RxJava, as well as WebFlux, is an implementation of Reactive Streams. But what exactly does the statement above mean? Traditional methods, when called, normally block until they finish whatever they need to do. If the method is only doing mathematical calculations or checking some logic on its arguments, the non-blocking nature of asynchronous stream processing will not make much difference; but when we start talking about accessing the file system, saving a file to some device, reading information from a service, or communicating with a microservice remotely, that is when things start to get interesting.
A scenario particularly interesting for Reactive Streams is a microservices environment, such as the cloud. In such architectures, many services talk to each other, and every time this communication takes place, the service that initiated it needs to wait for some time before it can take action. On top of that, the agent providing the service usually responds not to a single client but to multiple ones. It is in this sequence of events that Reactive Streams excels!
Reactive Streams solves this problem effectively by using something called an Event Loop. When a new request comes in, the thread handling it does not get blocked: right after it dispatches the request, it goes and does something else; it does not wait. Only when the request is done does the Event Loop add a new event to the queue, to be processed by the next available thread. This means no wasted resources: every thread is used to the fullest.

Fig. 1 – Reactive Event Loop
A non-reactive method needs to instantiate a new thread every time a new request is made, which means that if you have too many simultaneous requests, you can end up with multiple threads sitting there just waiting, doing nothing, and consuming resources.
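To make the contrast concrete, here is a minimal plain-Java sketch (no RxJava yet, and the one-second "service" is simulated): the blocking call parks the calling thread for the whole second, while the CompletableFuture version frees it to do other work while the call is in flight.

import java.util.concurrent.CompletableFuture;

public class BlockingVsNonBlocking {
    // Simulates a remote call that takes one second.
    static String slowService() {
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return "response";
    }

    public static void main(String[] args) {
        // Blocking style: the calling thread sits idle for the whole second.
        String blocked = slowService();
        System.out.println("blocking got: " + blocked);

        // Non-blocking style: the call is dispatched and the calling thread is
        // immediately free to do other work; a callback handles the result.
        CompletableFuture<String> future =
                CompletableFuture.supplyAsync(BlockingVsNonBlocking::slowService);
        future.thenAccept(response -> System.out.println("non-blocking got: " + response));

        System.out.println("main thread is free to do other work meanwhile");
        future.join(); // keep the JVM alive until the async work completes
    }
}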
Last but not least, reactive streams must support backpressure. This means that the receiver (Subscriber) of a reactive stream can control the number of events it is able to process. This is useful in cases where the sender (Publisher) produces more events than the receiver can handle; backpressure is a mechanism that allows the sender to slow down event generation so that the receiver can process events properly.
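As a small illustration of what this can look like in RxJava 3, the sketch below pairs a fast producer with a deliberately slow subscriber and uses one of the built-in strategies, onBackpressureDrop, to keep the consumer from being overwhelmed; the timings are arbitrary and chosen only to make drops visible:

import io.reactivex.rxjava3.core.Flowable;
import io.reactivex.rxjava3.schedulers.Schedulers;
import java.util.concurrent.TimeUnit;

public class BackpressureDemo {
    public static void main(String[] args) throws InterruptedException {
        // A fast producer: one event every millisecond.
        Flowable.interval(1, TimeUnit.MILLISECONDS)
                // If the subscriber falls behind, drop events instead of buffering forever.
                .onBackpressureDrop(dropped -> System.out.println("dropped " + dropped))
                // Hand events to another thread with a small downstream buffer.
                .observeOn(Schedulers.computation(), false, 16)
                .subscribe(value -> {
                    Thread.sleep(10); // a deliberately slow consumer
                    System.out.println("processed " + value);
                });

        Thread.sleep(2000); // keep the JVM alive long enough to observe the output
    }
}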
Reactive Streams can be considered an evolution of the well-known observer pattern, plus the addition of the functional paradigm, bringing to the mix a very powerful API. This API allows for the creation of a chain of methods, providing a very declarative style of programming as well as abstracting away low-level threading, synchronization, thread-safety, concurrent data structures, etc.
Reactive Reference Implementation
As mentioned, it is not only the ReactiveX project, and more specifically RxJava, that implements the Reactive Streams standard, which means you will find similar structures and elements in different projects, although under distinct names depending on the project.
At a very high level, every Reactive Streams implementation has a Publisher, the entity that produces the data to be consumed by a Subscriber. Another important architectural element is the Subscription. The Subscription represents the message control link between Publisher and Subscriber, by which the Subscriber can inform the Publisher how much data it can handle; in other words, it is the element that makes backpressure possible. In addition to that, between Publisher and Subscriber we normally also have a chain of functions, known as the function chain. It is through this chain of functions that all sorts of operations, such as Map, Filter, FlatMap, and many more, are applied over the streams.

Fig. 2 – Reactive Streams Base Classes
Keep in mind that Reactive Streams bases its style on the Functional Paradigm, and therefore knowledge of concepts such as immutability, pure functions, higher-order functions, etc., is essential to fully understand RxJava and properly use its API.
Some RxJava Code at Last
I know there is a lot of information to absorb before the first lines of code but, trust me, what I presented above will save you a lot of trouble when developing with a Reactive Functional Programming API such as RxJava.
Hello World
package rxjava.examples;

import io.reactivex.rxjava3.core.*;

public class HelloWorld {
    public static void main(String[] args) {
        Flowable.just("Hello world").subscribe(System.out::println);
    }
}
Looking at this simple hello-world code might seem odd to someone used to working only with traditional object-oriented programming (OOP), but now that we have set the scene for Reactive Functional Programming in the previous sections, it will be much easier to understand what is going on here.
The first thing to note is the use of the class Flowable. It is important to remember that here everything is a constant, potentially infinite flow of data, or stream, and the Flowable class represents exactly that. Even to print a single String, you need to somehow provide it through a stream. For such cases, Flowable offers the just method, which creates a stream with just one item. You can think of Flowable as the Publisher mentioned before. This means you need to subscribe a Subscriber to read what is coming from the stream. Here the subscriber simply prints out whatever comes from the stream.
This API has much more power and flexibility than shown in this simple hello-world example, but it is when dealing with millions of data items that the Reactive Streams approach really shines.
Of course, there is a lot more to talk about regarding RxJava; I have barely scratched the surface here. Apart from the Flowable and Observable base classes, there are also the Single, Completable, and Maybe base classes, which deal with more specific situations that I haven't even touched on in this article.
Talking about everything RxJava is able to do would take many more pages, not a simple article like this one. The goal here is just to give a high-level overview of RxJava and the main concepts behind any Reactive Streams application, as well as the Reactive Functional Programming paradigm.
Final Thoughts
I hope to explore the RxJava API further in the future, but this article explains the basics behind any Reactive Stream, which should enable the reader to quickly understand any implementation of the Reactive Streams standard.
Also, this article does not present examples of how powerful the Reactive Streams standard is compared to the traditional blocking approach. To give readers an insight into its power, I conducted a small experiment in which I implemented a very simple REST service using Reactive Streams versus a traditional blocking one, and the result was pretty impressive.
To close, I will leave the reader to draw their own conclusions from the result graphs of this experiment:

Fig. 3 – Traditional Blocking API Results

Fig. 4 – Reactive Streams API Results
Code Reference:
- https://github.com/neueda/java-blocking-microservice-chassis
- https://github.com/neueda/java-reactive-microservices-chassis
Written by Berchris Requiao.
Zinkworks Apprenticeship Programme – Ingrid’s Experience
The Zinkworks apprenticeship programme is run in association with the Fastrack into Information Technology (FIT) programme.
Ingrid joined Zinkworks through the FIT apprenticeship programme with Athlone Training Centre in 2021 and is about to begin her final semester. Here she describes her experience at Zinkworks so far.
“I am making a career transition and balancing studies, work, and family time. I am a participant in the FIT apprenticeship program at Zinkworks as well as a master’s degree student in Mobile Development at PUC-Minas Brazil. Zinkworks has allowed me to put into practice everything I studied in college and the FIT course.
“When I started at Zinkworks I had a basic understanding of C and C++, but I have been learning Java on my own with the help of my colleagues. My knowledge was put to use within a large and complex system, and I was mentored by members of my team who gave me effective solutions and taught me how to resolve errors, as well as about dependencies, management, and automation tools. Team members helped me understand the project and how to find solutions to bugs during the project... As a beginner, I found the start difficult, but with everyone around me helping, I felt confident that I could make mistakes and try again.
FIT and Zinkworks encourage their apprentices to acquire industry IT certificates during their studies. I chose to obtain the Introduction to Programming Using Java certification from Microsoft at the beginner level because I had Java experience within the company. To prepare for the certification, I used video courses on O'Reilly, a subscription service that Zinkworks provides.
I'm starting to study for a JavaScript Specialist certification at intermediate level. I don't work with JavaScript currently, but my studies are more intense. JavaScript can be a little challenging for some developers; it is a language that can be used with a variety of frameworks and technologies. It's challenging, but I love it! I've done some projects for college and, as my dream career is mobile development, I must learn frontend too. Access to O'Reilly has allowed me to read JavaScript books, take some courses, practice old questions from IT exams, and study with official CIW material.
I am very lucky to be part of Zinkworks, and FIT supports me in my professional growth. I work with many people who are willing to help and show me where I can improve every day. Thank you to everyone who is supporting me and helping me to grow every day.”
Ingrid is one of three apprentices in their final year of study. This year, in 2022, six more apprentices have joined Zinkworks and begun their work experience as part of the FIT apprenticeship programme.
To find out more about the apprenticeship programme contact us at info@zinkworks.com.
Click here to learn about careers at Zinkworks.