How Generative AI is Shaping the Future of Business
In today's rapidly evolving technological landscape, artificial intelligence (AI) continues to drive innovation across various industries. One of the most exciting and transformative branches of AI is Generative AI. This cutting-edge technology has the potential to reshape the way businesses operate, create, and innovate. In this blog, we'll delve into what Generative AI is, its incredible capabilities, and how it can benefit businesses. We'll also introduce you to Zinkworks and their suite of Generative AI-powered products designed to revolutionize your business.
Understanding Generative AI
Generative artificial intelligence, or Generative AI for short, is a subdivision of AI that focuses on creating content autonomously. This content can take various forms, such as text, images, music, and more. What sets Generative AI apart is its ability to learn from existing data and then generate new content that mimics the patterns and structures found in the training data.
Generative AI leverages generative models, which are neural networks designed to generate data. These models can be trained on vast datasets, allowing them to understand and replicate the intricate details of the input data. As a result, they can produce remarkably realistic and coherent output.
Most people's first experience of generative AI was through OpenAI's ChatGPT, which reached 100 million monthly active users within two months of its launch in late 2022, making it the fastest-growing consumer application in history at the time. We are still in the very early stages of this massive technological revolution.
The Power of Generative AI for Businesses
Generative AI is not just a technological marvel; it's a game-changer for businesses across industries. Here are some key ways in which Generative AI can benefit companies:
- Accelerated Innovation
Generative AI can significantly speed up the innovation process. With the help of tools like Zinkworks' rApp Studio, business users can create applications without writing a single line of code. This means that domain experts can focus on their core competencies and rapidly bring their ideas to life, driving innovation at a breathtaking pace.
- Enhanced Productivity and Security
Developers can take advantage of solutions like Zinkworks' F.A.S.T. to streamline the integration of open-source software packages. By automating package compatibility checks and suggesting validated alternatives, F.A.S.T. improves developer productivity, reduces security risks, and accelerates time-to-market.
- In-Depth Market Insights
Market Analysis Reporter (MAR), another Zinkworks product powered by Generative AI, empowers businesses to make data-driven decisions with ease. MAR can instantly generate comprehensive customer intelligence reports, providing real-time insights into competitors, markets, and customers. This centralized intelligence platform enhances efficiency, accuracy, and collaboration among teams.
The Future of Generative AI
The future of Generative AI is incredibly promising. As AI algorithms and models continue to advance, we can expect even more impressive feats. Here's a glimpse of what the future may hold:
- Personalization at Scale
Generative AI will enable businesses to deliver personalized experiences at scale. From tailored marketing content to individualized product recommendations, AI will make customer interactions more meaningful and effective.
- Creative Content Generation
We can anticipate AI-generated content becoming even more sophisticated. Whether it's generating artwork, music, or written content, AI will play a pivotal role in creative industries, potentially blurring the lines between human and AI-generated art.
- Increased Automation
Generative AI will automate complex tasks across various domains, from healthcare and finance to manufacturing and logistics. This automation will lead to greater efficiency, reduced costs, and improved accuracy.
Unlocking the Potential with Zinkworks
If you're eager to harness the power of Generative AI for your business, Zinkworks is here to help. Our suite of products, powered by Generative AI, can revolutionize the way you operate and innovate. Learn more about our Generative AI applications: https://zinkworks.com/gen-ai/
In conclusion, Generative AI is ushering in a new era of possibilities for businesses. It's a technology that empowers innovation, enhances productivity, and offers insights that were once unimaginable. With Zinkworks' Generative AI-powered products, your business can stay ahead of the curve and thrive in this exciting AI-driven future. The time to embrace Generative AI is now. If you would like to speak with one of our team to learn more, email us at marketing@zinkworks.com
Zinkworks Inaugural Hackermonth: A Month of Innovation
Over the month of October, Zinkworks hosted its very first Hackathon. Termed Hackermonth, it launched on the 1st of October 2023. Over the course of the month, participants developed ideas, generated prototypes, and pitched their presentations to the company on Friday, the 27th of October.
The Hackathon: An Innovation Marathon
A Hackathon is a company-wide event that anyone can participate in. The organisers propose a theme, and participants develop ideas and prototypes based on that theme. Hackathons can span a single evening, a weekend, or a couple of weeks, and are opportunities for people to get together, network and, most importantly, have some fun.
Zinkworks laid out three themes for Hackermonth. As a services company with many diverse areas of expertise, it was fitting to shape the themes around those areas. Spot & Solve asked participants to identify a problem in a domain and create a solution for it. Cross-Pollinate took a familiar solution and applied it to a new domain. And Pick'n'Pitch took an idea from a pre-defined list of problem statements and devised a solution to it.
Why Zinkworks Embarked on a Hackathon Adventure
Firstly, Hackathons are a big part of the software industry. Many Zinkworks employees attended Hackathons at previous companies and saw them as a great opportunity to build new relationships and get together with colleagues outside of work.
Secondly, Zinkworks recently set up a cross-functional innovation team called The Foundry. This team incorporates Product, Engineering, R&D, and Marketing to generate new business ideas and expand the company's footprint. The best innovation teams are only as good as the people around them, so the team recognised that involving all Zinkworks employees early on was crucial to success. The Hackathon was the first big step in bringing people through the idea process and giving them a taste for innovation work.
Zinkworks’ Hackermonth Highlights
Sparking Ideas
The Hackermonth kicked off with an ideation workshop. 22 people joined the online workshop to brainstorm and develop ideas to pitch at the end of the Hackermonth. There were some great (and some wacky) ideas. Here are a few of them:
- Christopher came up with an AI Market Trend Analyzer that curated articles based on user-defined categories and delivered them straight to your inbox.
- Jakub wanted to create a tool that supported database tuning for optimal performance. His initial research indicated that current solutions such as pgtune could be much more effective.
- Aleksei's idea was to build a dashboard and analytics tool to predict fuel supply based on information such as weather conditions.
Crafting Prototypes
The second workshop brought participants through developing a prototype for their specific idea.
They crafted vision statements by writing down important details such as key users, type of industry, problem-solution fit, and uniqueness. They then used ChatGPT to develop a quick prototype in their preferred programming language, such as Bash or Python.
The Elevator Pitch
The final workshop focused on developing everyone's pitches. It explored what not to do when presenting and looked at some best practices for pitching an idea, such as keeping it brief, not looking at the slides, and practicing as much as possible.
Pitch Day Revelations
On Pitch Day, all participants gathered at Zinkworks’ Athlone office on Friday, 27th October to kick off the pitches. We had the privilege of seeing the great ideas that competitors had put hours of work into during October. And, as usual, pizza capped off the evening.
Clevermiles - James McNamara
James presented a project that monitors a person’s driving habits and records their behaviour in relation to speeding, swerving, braking, and accelerating. Good behaviour earns rewards, which in turn encourages more good behaviour. The idea is largely applicable to the insurance industry, and it's something that James has extensively explored in the past.
Zinkworks Tool Directory - Paul O'Gara
Paul developed an idea to create a tools directory for the company. This portal displays the core set of tools used in the company and contains functions for collaborating and sharing personal tools and tips with other people. The Zinkworks Tool Directory (ZTD) solves the problem of having lots of tools and ideas but nowhere to share them with colleagues.
Kata Generator - Tristan Gutierrez
Tristan’s Kata Generator helps software developers with end-to-end test generation. Using generative AI, Kata Generator takes the user’s code and generates scenarios to help the developer create effective Karate tests. This lowers the bar for anyone starting with the Karate framework and greatly increases the speed of test generation to build better tests faster.
JournAI - Team Bounty Hunters
Team Bounty Hunters are Zinkworks' 2023 apprentice team. They developed an impressive application that uses generative AI to suggest alternative routes based on the user's interests. For example, someone interested in Castles could use JournAI to create a journey that passes by Irish castles and monuments.
Auto AI - Krishan Ravisanka
Auto AI is a personalised car assistant, using the huge technological potential of the car's software and turning it into a personal device. Auto AI provides end-to-end insights for the user on their journey, from the moment they enter the car to the moment they park.
Edge Runner - Rohit Raveendran
Rohit's Edge Runner is a concept that uses the collective power of small devices (such as Raspberry Pis) in each location to run Natural Language models. Using devices that are connected to edge networks reduces the need for extensive cloud-based infrastructure. This concept is applicable to smart home applications and IoT device clusters that want to use the full potential of NLP.
Winners
The Hackermonth had three categories of winners. Best Concept was awarded to James’ Clevermiles for its originality and problem-solution fit. Best Prototype was awarded to JournAI by Team Bounty Hunters for its degree of functionality and alignment with their concept. Finally, the Overall Winner prize went to Tristan's Kata Generator for the fantastic quality of his pitch and live demo and his innovative use of the latest AI technologies.
Zinkworks Nominated for Two Innovation Technology AtlanTec Gateway (ITAG) Awards
Zinkworks is proud to announce that it has been nominated for two awards at the 2023 ITAG Awards.
The company was nominated for the Digital Project and Innovation Award and the Best Application of AI Award.
- Digital Project and Innovation Award Nomination – The Foundry
Zinkworks established an innovation team, called the Foundry, combining talents across the company, including the Product Director, Research & Development, and the Marketing team. The Foundry is a cross-functional team that acts as a support mechanism for our Sales team, gathering information and developing prototypes to leverage new business opportunities. This cross-functional knowledge has enhanced communication and allows for instant feedback on initiatives from each team member. It also streamlines execution, which is required when dealing with multiple clients at the same time.
- Best Application of AI Award Nomination – The AI rApp Builder
The AI rApp Builder is a revolutionary product that utilizes a Generative AI framework developed by the Foundry. This framework allows business requirements to be translated into usable applications instantly, something that was previously impossible. Now, thanks to the power of Generative AI, business-facing professionals can create applications with ease. The development of the AI rApp Builder was a team effort, and the nomination is a testament to the hard work and dedication of everyone involved, including Aaron Fortune, Jatin Marwaha, and Bertille Leveque. Their efforts have resulted in a product that is innovative, efficient, and user-friendly.
“We are delighted to be nominated for these two ITAG Awards,” said Paul Madden, CEO of Zinkworks. “These nominations are a testament to the hard work and dedication of our team and our commitment to providing the latest innovations to our clients.”
Zinkworks looks forward to attending the award ceremony on Friday, 10th of November 2023 and thanks ITAG for recognizing their work and providing them with a wonderful opportunity. For more information about Zinkworks, visit www.zinkworks.com or for information about Innovation Technology AtlanTec Gateway (ITAG) visit https://itag.ie/.
About Zinkworks
Zinkworks provides turnkey development services, headquartered in Athlone, Ireland. They utilise the latest innovative technologies to bring industry-leading expertise to their clients, primarily in the Telecommunications and Financial services sectors. Zinkworks are adept at developing custom innovations that streamline their clients’ workflows and improve operational efficiency. With a commitment to quality and customer satisfaction, they have earned a reputation as a trusted partner for businesses seeking transformative software services.
Zinkworks is now a qualified Google Cloud Partner
We are thrilled to announce that Zinkworks is now a proud member of the Google Cloud Partner Advantage program. This exciting collaboration marks a significant milestone in the Zinkworks journey.
Zinkworks, headquartered in Athlone, Ireland, is a custom software development provider for complex and scalable projects, primarily in the Telecommunications and Financial service sectors. This partnership with Google Cloud is a testament to Zinkworks' commitment and expertise in cloud computing.
James McNamara, Director of Product and Innovation, said, "We are excited to join forces with Google Cloud. Becoming a Google Cloud Partner helps us to provide our clients with the most advanced solutions available. This is further confirmation of Zinkworks' expertise in cloud technology, and we look forward to helping our clients create new innovations through Google Cloud."
As a Google Cloud Partner, Zinkworks will have access to a wealth of resources and training provided by Google. This partnership will enable Zinkworks to deliver even more robust and innovative solutions to their clients, leveraging the power of Google Cloud’s cutting-edge technologies.
To learn more about Zinkworks becoming a Google Cloud Partner and how Zinkworks can help your business adopt and benefit from Google Cloud visit: https://cloud.google.com/find-a-partner/partner/zinkworks
About Zinkworks
Zinkworks provides turnkey development services, headquartered in Athlone, Ireland. They utilise the latest cutting-edge technologies to bring industry-leading expertise to their clients, primarily in the Telecommunications and Financial services sectors. Zinkworks are adept at developing custom innovations that streamline their clients' workflows and improve operational efficiency. With a commitment to quality and customer satisfaction, they have earned a reputation as a trusted partner for businesses seeking transformative software services.
About Google Cloud
Google Cloud accelerates every organization's ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.
Building Visual Dashboards in the Cloud
Big data is revolutionising how businesses operate, transforming decision-making processes across industries. Cloud technology has emerged as the catalyst to empower organisations, regardless of size, to effortlessly manage vast volumes of data. This technological advancement not only slashes the maintenance overheads linked to massive datasets but also removes the necessity of recruiting and training specialised IT professionals. Even when dealing with large amounts of data, businesses can leverage cloud platforms' scalable computational and analytical capabilities to create engaging dashboards and uncover valuable new insights. Let's explore the challenges and steps to consider when dealing with large datasets and machine learning models.
Aggregating Big Data for Dashboards
As datasets are becoming increasingly complex and challenging to understand, it has become vital for decision-makers to rely on data visualisation as a means of simplifying and interpreting large amounts of valuable information. Utilising this data, businesses can effectively understand difficult concepts, identify emerging patterns, and gain data-driven insights to make informed decisions.
For instance, BigQuery is a serverless, cost-effective enterprise data warehouse that works across clouds, scales with your data, and offers built-in ML/AI and BI for insights at scale. Once the data resides in BigQuery, SQL queries come to the rescue: you can perform data analysis and get insights from all your business data. This allows companies to make decisions in real time, streamline business reporting, and incorporate machine learning into data analysis to predict future business opportunities.
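To make this concrete, here is a minimal sketch of running such a query from TypeScript with Google's @google-cloud/bigquery client; the project, dataset, and table names are hypothetical placeholders:

import { BigQuery } from '@google-cloud/bigquery';

const bigquery = new BigQuery();

async function dailyRevenue() {
  // Hypothetical table; replace with your own project.dataset.table.
  const query = `
    SELECT DATE(order_ts) AS day, SUM(amount) AS revenue
    FROM \`my-project.sales.orders\`
    GROUP BY day
    ORDER BY day DESC
    LIMIT 30`;
  // Runs the query as a BigQuery job and waits for the result rows.
  const [rows] = await bigquery.query({ query });
  for (const row of rows) {
    console.log(`${row.day.value}: ${row.revenue}`);
  }
}

dailyRevenue().catch(console.error);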
Applying Insights through Visual Best Practices
Now having aggregated data sets, it’s time to focus on effective and intuitive visualisations. Visual best practices are key to developing informative visualisations that drive the audience to act. Here are some of the best practices for data visualisation.
- Know your audience and purpose: The best dashboards are built with their intended audience in mind. Ask the question, “Who am I designing this for?” and continue to check that the dashboard supports their business goals and encourages exploration.
- Consider display size: Where will the audience be viewing the dashboard the most? Desktop or mobile? Do some research upfront to inform your design and adjust accordingly for the best experience.
- Plan for fast load times: Optimise dashboards for faster load times, which can contribute to better engagement.
- Leverage the sweet spot: Always consider how your audience will “read” the dashboard. The dashboard should have a sensible “flow” and a logical layout of different pieces of information.
- Limit the number of views and colours: Consider using colour only if it enhances the analysis. Sometimes, too many colours can slow or even prevent analysis.
- Select useful information only: Do not overload the user with too much information. It is important to pick only data that will be useful and raise interest to the reader. Simplicity brings clarity.
- Add interactivity to encourage exploration: The power of dashboards lies in the author’s ability to cue up specific views for side-by-side analysis. Filters supercharge that analysis and engage your audience.
- Test the dashboard for usability: An important element of dashboard design is user testing. After building a prototype, ask your audience how they use the dashboard and if it helps them answer their pressing questions.
An example of a visual dashboard created by Zinkworks:
Scalable Performance on the Cloud
Scalability is a recurring challenge in data analytics. Hosting your dashboards on a cloud platform ensures scalability and responsive user interfaces, even under heavy loads. Serverless platforms like Google Cloud Run can instantly scale dashboards to accommodate thousands of users with configured autoscaling.
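As a hedged example, assuming a containerised dashboard image has already been pushed to a registry, a Cloud Run deployment with autoscaling bounds might look like this (the service name, image, and region are placeholders):

gcloud run deploy dashboard \
  --image=gcr.io/my-project/dashboard:latest \
  --region=europe-west1 \
  --min-instances=1 \
  --max-instances=50 \
  --allow-unauthenticated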
Content Delivery Networks (CDNs) play a vital role in optimising dashboard delivery. By caching dashboard images and data close to users, CDNs enhance the user experience. Additionally, integrations with cloud monitoring tools enable you to track dashboard usage patterns and performance, helping you identify and address potential bottlenecks promptly.
Conclusion
Harnessing the scale, flexibility, and integrations offered by cloud platforms, organisations can effectively navigate the complexities of large datasets and deliver valuable insights rapidly through visual dashboards. With the right combination of data warehousing, analytics, and hosting, the cloud serves as the indispensable foundation for businesses seeking to thrive in the era of data-driven decision-making.
If you would like to learn more about building dynamic cloud-based visual dashboards, please contact our Zinkworks team by emailing: marketing@zinkworks.com
Written By: Mahdi Khosravi & Sharmistha Sodhi
A Guide to Working with Polygons in React Using Mapbox
Since my university days, I have been passionate about exploring and working with maps. It all started with my final year project, which involved developing a location-based social network. Recently, I landed a position at Zinkworks where I was presented with an intriguing opportunity to create and analyse polygons in real-time for network statistics using Mapbox in React. Although it was a thrilling and rewarding experience, it also posed some challenges. Specifically, I encountered a dearth of resources and information on the advanced features of working with Polygons in Mapbox with React. So, I decided to share my practical experience by creating an informative blog.
What are Polygons?
Polygons are a vital instrument in the field of mapping and geospatial analysis, as they aid in communicating information regarding the spatial relationships among diverse features within a specific area. A polygon is a two-dimensional shape consisting of multiple straight-line segments that form a closed loop. Polygons can represent various geographic features, such as land parcels, bodies of water, and administrative boundaries.
In addition to representing geographical features, polygons on maps can also be used for data visualisation and analysis. They are often used in geographic information systems (GIS) to display and analyse spatially referenced data. For example, polygons can serve as a visual tool to depict the perimeters of various networks along with their respective coverage areas. Additionally, they can facilitate the analysis of network utilisation in real time, providing insights into the performance of each network.
Polygon Utilisation in Zinkworks NDO
Zinkworks is a cutting-edge technology firm at the forefront of using innovative tools and techniques to streamline network monitoring. Zinkworks' product, Network Device Orchestrator (NDO), uses polygons to analyse network usage in real time, enabling the orchestration of connected equipment and predicting and avoiding demand spikes before they impact the network. Moreover, NDO predicts the future position of connected equipment and its future capacity needs, and with polygons it presents that data in a user-friendly UI. By leveraging advanced polygon features in NDO, Zinkworks has been able to enhance its network utilisation analysis capabilities, empowering enterprise users to align their internal business rules to the available network capacity.
How to Use Polygons with Mapbox in React
Let's install the dependencies first:
`yarn add mapbox-gl react-map-gl @types/react-map-gl`
And add the following code:
import { GeoJSONSourceOptions } from 'mapbox-gl';
import { Layer, Map, Source } from 'react-map-gl';

const polygons: GeoJSONSourceOptions['data'] = {
  type: 'FeatureCollection',
  features: [
    {
      id: 1,
      type: 'Feature',
      properties: {
        name: 'Area 51',
      },
      geometry: {
        type: 'MultiPolygon',
        coordinates: [
          [
            [
              [-7.950966710379959, 53.43146017152239],
              [-7.950966710379959, 53.42962140120139],
              [-7.947024882898944, 53.42962140120139],
              [-7.947024882898944, 53.43146017152239],
              [-7.950966710379959, 53.43146017152239],
            ],
          ],
        ],
      },
    },
  ],
};
And add the Map component:
<Map
  mapboxAccessToken='MAPBOX_ACCESS_TOKEN_HERE'
  mapStyle='mapbox://styles/mapbox/satellite-streets-v9'
  style={{
    width: '100%',
    height: '100%',
  }}
  initialViewState={{
    longitude: -7.949333965946522,
    latitude: 53.4313602035036,
    zoom: 15,
  }}
>
  <Source id='source' type='geojson' data={polygons}>
    <Layer
      {...{
        id: 'data',
        type: 'fill',
        paint: {
          'fill-color': 'rgb(5, 191, 5)',
          'fill-opacity': 0.4,
        },
      }}
    />
  </Source>
</Map>
Let’s run the code and check that a polygon is displayed on the map.
Let's go through a few essential properties:
- id: a unique id to identify each polygon; this can be used to target polygons, for example when hovering or clicking (see the sketch after this list).
- properties: can be used to assign custom properties to polygons and access them later, such as the polygon's name, value, and other relevant attributes.
- geometry: used to define the type of polygon and specify its coordinates.
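To make use of those ids and properties at runtime, the layer can be made interactive. Below is a minimal sketch, assuming react-map-gl v7; the handler name is our own:

import type { MapLayerMouseEvent } from 'react-map-gl';

// interactiveLayerIds tells react-map-gl which layers' features should be
// included in mouse events; event.features then carries the clicked polygon.
const onPolygonClick = (event: MapLayerMouseEvent) => {
  const feature = event.features?.[0];
  if (feature) {
    console.log(`Clicked ${feature.properties?.name} (id: ${feature.id})`);
  }
};

// Pass these props to the Map component shown above:
// <Map ... interactiveLayerIds={['data']} onClick={onPolygonClick}>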
The Layer component is fundamental for rendering diverse items, such as polygons and lines, with a range of properties, including fill-color and fill-opacity. For example, you can use Layer to draw an outline for a polygon with the following code:
<Layer
  {...{
    id: 'outline',
    type: 'line',
    paint: {
      'line-color': 'red',
      'line-width': 2,
    },
  }}
/>
I hope this article was helpful. You can find the source code for the different features in the following repo.
Repo URL: https://github.com/Ahmdrza/mapbox-polygons-react
To learn more about Zinkworks NDO solution, visit:
https://zinkworks.com/solutions/
Happy Coding!
Written By: Ahmad Raza
Exploring the Intersection of Industry 4.0 and 5G
As you read the title, I’m sure you have heard these words in a blog or talk before. I bet they feel like the latest series of buzzwords. After reading this article, you will be more familiar with these terms and see that they are not just empty buzzwords. Industry 4.0 and 5G are two innovative technological advancements that have the potential to change the way we live and work. Industry 4.0 refers to the fourth industrial revolution and is characterised by the integration of advanced technologies such as artificial intelligence, the Internet of Things (IoT), and robotics into traditional manufacturing and industrial processes. 5G, on the other hand, is the fifth generation of mobile networks, which promises faster and more reliable communication than previous generations.
Industry 4.0 has the potential to revolutionize manufacturing and production processes by integrating IoT, advanced sensors, and AI. This leads to improved production speed, reduced costs, and increased competitiveness. In addition, Industry 4.0 enables companies to collect and analyse vast amounts of data, which can be used to improve the quality of their products and services. With the use of robotics and automation in the production process, highly automated factories can operate with minimal human intervention, 24/7, leading to increased productivity, improved quality, and reduced costs.
The integration of IoT in Industry 4.0 allows for the creation of smart factories, where machines and devices are equipped with real-time sensors that collect and transmit data. This data can then be analysed using AI algorithms, which can optimize production processes, improve quality, and reduce downtime. Furthermore, the use of advanced digital technologies such as 3D printing, augmented and virtual reality, and cloud computing can help companies design, test, and produce products more efficiently and cost-effectively. Despite the potential loss of jobs due to automation and the need for workers to adapt to new technologies and skills, Industry 4.0 is expected to significantly impact the global economy and how we live and work.
5G provides faster and more reliable communication, which is crucial for the successful implementation of Industry 4.0. With 5G, manufacturers can communicate with their machines and devices in real-time, allowing for improved monitoring, control, and automation of the production process. This can lead to increased efficiency and enhanced productivity. The key benefits of 5G include increased speed, reduced latency, and improved capacity, allowing for the connection of many devices simultaneously. This enhanced capacity will be crucial for successfully implementing the Internet of Things (IoT), as it will allow for the connection of millions of devices, from smart homes to industrial equipment. It has the potential to revolutionise industries such as healthcare, education, and entertainment, allowing for the creation of new products and services that were previously not possible.
5G is a significant technological advancement that has the potential to impact the way we live and work. However, it also raises some concerns, such as the need for substantial investments in infrastructure and the potential for security and privacy issues. Despite these challenges, 5G is expected to significantly impact the global economy and provide new opportunities for innovation and growth.
In conclusion, Industry 4.0 and 5G are two technological advancements that can change the way we live and work. Industry 4.0 has the potential to revolutionize manufacturing and production processes, leading to increased efficiency, improved quality, and reduced costs. With the help of 5G, manufacturers can communicate with their machines and devices in real-time, leading to improved monitoring, control, and automation of the production process. Despite the potential challenges associated with these advancements, such as the loss of jobs due to automation, Industry 4.0 and 5G are expected to significantly impact the global economy and how we live and work.
At Zinkworks we have created a product called Networked Device Orchestrator (NDO) which is purpose-built for Industry 4.0. It is designed to enable the orchestration of connected equipment, predicting and avoiding demand spikes before they impact the network, and empowering enterprise users to align their internal business rules to the available network capacity. Learn more: www.zinkworks.com/solutions/
Written by Aaron Fortune.
Zinkworks Welcomes the Ambassador of India, Akhilesh Mishra
Zinkworks was delighted to welcome the Ambassador of India, Akhilesh Mishra, and his wife Mrs Reeti Mishra, to our headquarters in Athlone, Ireland on the 20th of April 2023. During the visit, a tour was given around the Zinkworks headquarters where they learned about the company values and met the team behind its success. The visit was a testament to the company's growing reputation and global reach.
Ambassador Mishra stated: “I am extremely delighted to be visiting Zinkworks today. Zinkworks is a great, inspiring success story of India and Ireland’s partnership. It has been great to meet with so many highly skilled Indian professionals here at Zinkworks. You can really feel a warm, inviting, family feeling from the office atmosphere, allowing everyone to integrate. I look forward to coming back again and working together in the future. I wish my Irish friends great success in their business connection with India.”
Zinkworks is proud to have been recognized by the India Ambassador and is excited to continue expanding its reach and impact on a global scale. Zinkworks continues to build strong relationships around the world, and they look forward to continuing to work closely with India to drive innovation and create a more prosperous future for all.
Aileen Cramer, COO (Chief Operations Officer), said: “It was a privilege to welcome the Indian ambassador to our Zinkworks office today and introduce him to our team. The visit was a significant milestone for Zinkworks, as it is a demonstration of the company's growing importance in the global market.”
About Zinkworks
Zinkworks provides turnkey development services, headquartered in Athlone, Ireland. They utilise the latest innovative technologies to bring industry-leading expertise to their clients, primarily in the Telecommunications and Financial services sectors. Zinkworks are adept at developing custom innovations that streamline their clients’ workflows and improve operational efficiency. With a commitment to quality and customer satisfaction, they have earned a reputation as a trusted partner for businesses seeking transformative software services.
Kubernetes Operators: When not to create one
Over the past few years in the realm of DevOps, and Kubernetes in particular, the Kubernetes Operator pattern has been a trending topic. In my personal experience working on different projects for different applications, it was evident that some software development teams or organizations were too quick to jump on the Kubernetes Operator bandwagon without analyzing the real-world problem they were trying to solve through the implementation of an Operator.
More often than not, a Kubernetes Operator was implemented only because it was seen as the trending implementation pattern in the Kubernetes domain at that point. I have seen software development teams go to the extent of implementing wrapper CRDs or controllers around an open-source Kubernetes Operator, so that certain organizational practices and standards were encapsulated in these wrapper controllers while giving the organization the ability to deploy and use the open-source Operator in its application stack. Simply put, all that cost, time, and effort from the software developers was invested only to gain the ability to use a specific, well-known Kubernetes Operator in the application stack, while completely losing sight of why anyone should have a Kubernetes Operator in the first place, or what problems an Operator is fundamentally supposed to solve in an application.
The purpose of this article is not to criticize the Kubernetes Operator pattern or portray it as a bad practice; in fact, quite the opposite. The Operator pattern is a highly useful concept in a Kubernetes stack, which helps DevOps engineers and software developers tackle certain complexities of an application's lifecycle when deployed on a Kubernetes platform. What we intend to illustrate in this article are the real-world use cases the Operator pattern is supposed to solve, by first discussing the use cases for which it might not be appropriate.
Let us first look at what is meant by the operator pattern in Kubernetes. There are three key criteria an operator implementation should satisfy.
- It should be a piece of software that automates a repeatable task of a stateful application and replaces the need for a human operator.
- It should be a software extension to the Kubernetes API that makes use of a Custom Resource.
- It should follow the Kubernetes principles, mainly the control loop.
While all three points above are equally important, we think the most important of the three is the first one.
Now let us jump into our main topic: when should we not create an operator? Or, to phrase it more diplomatically, when should we rethink our decision to write one?
Is my application a stateful application?
Even though the official Kubernetes documentation on the operator pattern does not strictly require the application you are building the operator for to be stateful, what we should understand is that an application which does not have state will not really require a custom controller to handle its deployment lifecycle. Notice how we have used the term “controller” here instead of “operator”; we will discuss this in a moment.
The reason is simple: if a particular application does not have state, you can pretty much use the native features of Kubernetes, such as the Deployment controller, to handle the full lifecycle of that application. But when an application is stateful, there is a possibility that you cannot freely replace a given replica of that application instance with a new one (as is supposed to happen in the Kubernetes world all the time).
There is a chance that some additional work, such as leader election, handling checkpoints, managing quorums, or restoring a backup, must be done when bringing up a new replica. Some of these tasks may not be part of the application code itself; they may be manual steps that a human operator must execute. This is where an operator on Kubernetes can really pay off: you can write a piece of software that encapsulates all that domain- and application-specific logic and combine it with the Kubernetes API as an extension.
Where do the CRDs and the control loop come into play in this context? It is quite simple: CRDs pave the way for the end user to declare the desired state of the application that the operator has to maintain. The end user creates or updates a CR declaring the spec of the desired state, and through the control loop the operator tries to bring the application to that desired state. It also reports the status of the application back to the same CR. A minimal sketch of such a loop follows below.
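For illustration only, here is a skeleton of that loop in TypeScript using the @kubernetes/client-node library; the databases.example.com custom resource and its fields are hypothetical:

import * as k8s from '@kubernetes/client-node';

const kc = new k8s.KubeConfig();
kc.loadFromDefault();

// Watch the custom objects; on every change, reconcile the actual state
// of the application towards the desired state declared in the CR's spec.
const watch = new k8s.Watch(kc);
watch.watch(
  '/apis/example.com/v1alpha1/namespaces/default/databases',
  {},
  (eventType, customResource) => {
    const desired = customResource.spec; // desired state, declared by the user
    console.log(`Reconciling after ${eventType} event`, desired);
    // ...compare with the actual state and create/update/delete resources
    // until they match, then write the observed state back to the CR's
    // status subresource via the Kubernetes API...
  },
  (err) => console.error('watch ended', err),
);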
Does this mean we cannot write a similar operator for a stateless application? You absolutely can. However, in that case it should rather be considered a controller, not an operator. Also, if you are thinking of writing such a controller to manage the lifecycle of a stateless application, it would rather be overkill, because there should be other means of achieving the same thing using the native Kubernetes API, without having to extend it with a whole new set of CRDs. It could even mean that your actual problem lies somewhere else, and you are misusing the operator pattern, either because you want to use a Kubernetes object to handle a non-Kubernetes resource in your application, or because you are trying to fix a configuration management problem with it.
Any code is a liability
There are many frameworks available now to make the implementation of an operator an easy task. However, it still requires a certain effort from the developers to understand the complexity of the application and the actual business requirement you need to address through the operator’s control loop.
It is also common nowadays for certain teams and organizations to develop operators for third-party open-source applications that they do not own. The logic written into an operator is an imperative workflow that couples your application's business logic into the Kubernetes control loop. This may not look concerning at first, but the team that develops the operator will eventually be responsible for maintaining the operator codebase to support changes in the underlying application that can impact its deployment lifecycle. Even though writing an operator is not a difficult task now, given the availability of different frameworks, it will still be more complex than writing a piece of configuration that supports the deployment lifecycle using native Kubernetes resources.
Also, unless you have a clear understanding of the changes coming to the particular application you are writing the operator for, you are putting yourself in a position where you have to invest dedicated time and effort just to keep the operator codebase in line with those changes as they come.
This is why it is recommended to leave the decision of writing an operator to the application owners, or at least to gain enough understanding of the future changes the application may face before starting to write an operator for it yourself.
In either case, it is better to make yourself liable for a piece of configuration that you can easily modify than for an entire operator codebase. So it is wise to seek alternative ways of handling complexities in the lifecycle of an application, rather than writing a piece of code that works as an operator and making yourself liable for it, especially if you do not own that application.
Resource Concerns: Operators are not exactly part of your actual workload
In certain cases, you may have to run your application in a resource-constrained cluster. When you have an operator to manage the lifecycle of this application, the operator itself will require a certain level of resources (CPU, memory, network) allocated to it from the same cluster where you run the workload.
What we expect from an operator is to maintain the lifecycle of a given number of application instances, by reconciling their state to a desired state, as specified by the user through the custom resource. Therefore, operators are a part of your control plane rather than the workload itself. Now imagine a situation where you must provision a large number of application instances.
This means a few things,
- You will have to create an equal number of custom resource objects.
- The operator, or operator instances, will be running reconciliation loops to handle the state of all the application instances represented by each individual custom object. This could be a resource-intensive operation.
- You are using the Kubernetes etcd store to keep track of all the custom objects, and your operator will be communicating with the Kubernetes API quite frequently.
- On all the other occasions, when your custom objects are in the desired state, the operator will sit idle, yet it still consumes resources from the cluster.
As you can see, having an operator manage such a large number of custom objects could impact your cluster's resources in multiple ways.
Therefore, when you want to run your stateful application in a Kubernetes stack, writing an operator may sound appealing, but you must remember the impact it may have on the cluster resources that are primarily meant for running your workload.
Security Concerns: Is it worth running an operator with elevated privileges for the duration of your application?
This is one of the reasons why writing an operator should be your last resort. An operator is a highly privileged entity compared to your actual application.
If we take a step back and consider what your operator does, all it does is maintain the state of your application instances to the desired state specified by the CR. To do so, it requires a certain level of privileges against your Kubernetes API. Depending on the type of resource objects the operator is supposed to manage, you can grant permissions to specific Kubernetes resources in either “namespace” scope or “cluster” scope. However, what we have seen in certain existing open-source operators is that they sometimes get deployed with more RBAC permissions on cluster resources than they require.
Nevertheless, what is important here is that the operator will be running with all those permissions on your Kubernetes cluster for the lifetime of your application, even though actual reconciliation of the custom resources happens only occasionally. So the amount of time the Operator genuinely needs these permissions to carry out its functionality is only a fraction of the time it will actually be running.
Considering these aspects, if you are thinking of writing an operator that will mostly perform one-off tasks or tasks that happen infrequently (e.g., backup, restore), it is worth analyzing the possibility of using the Jobs or CronJobs available in the standard Kubernetes API rather than investing all your effort in building something as complex as an operator. A minimal example follows below.
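For instance, a nightly backup can be expressed as a plain CronJob. The manifest below is a sketch; the image and arguments are hypothetical placeholders:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: db-backup                           # hypothetical name
spec:
  schedule: "0 2 * * *"                     # run nightly at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: backup
              image: example.com/db-backup:latest     # hypothetical image
              args: ["--target", "s3://backups/db"]   # hypothetical target
          restartPolicy: OnFailure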
Configuration Management: Operators should not be a solution to your configuration management problem
Something we have seen in common across most operator implementations is that they help end users manage application configuration through a well-structured object such as a custom resource. A custom resource has a predefined schema, managed through a custom resource definition (CRD). This CRD is dedicated to the application, so validating the inputs a user can provide to configure the application state is more controlled and streamlined. The standard ways a user can pass application configuration, by contrast, are a ConfigMap or a Kubernetes Secret, which are more generic approaches.
A frequent implementation pattern we have seen is operators built just to make use of this structured configuration that a CRD provides. These operators mainly aim to expose the application configuration via a CRD, so there is control over the user inputs. They should rather be called controllers than operators, because they do not do anything specific to handle application state during the application lifecycle. While anyone is free to write a piece of software that handles the configuration of an application through a CRD, it may well be overkill, because much more generic tooling is available in the Kubernetes ecosystem to achieve the same thing. For example, for someone using Helm to manage the deployment and lifecycle of an application, features such as “--verify” or JSON Schema validation are available to validate the user inputs, which can eventually be mapped to a generic resource such as a ConfigMap. A small schema example follows below.
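To illustrate the Helm route, a chart can ship a values.schema.json file that Helm validates user-supplied values against on install, upgrade, and lint; the fields below are hypothetical:

{
  "$schema": "https://json-schema.org/draft-07/schema#",
  "type": "object",
  "required": ["replicaCount"],
  "properties": {
    "replicaCount": { "type": "integer", "minimum": 1 },
    "logLevel": { "type": "string", "enum": ["debug", "info", "warn", "error"] }
  }
}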
The question we should really ask here is: “Is it really worth writing an application-specific piece of software to manage configuration, when much more generic tooling is available to specifically address the configuration concerns of applications deployed on Kubernetes?”
These are the key areas we believe a DevOps engineer or a software developer should consider before starting to write a Kubernetes operator. We would like to end this article on the following note: “The fact that it is possible to write a Kubernetes operator as a solution to a given problem does not always mean you should write one.”
References:
https://kubernetes.io/docs/concepts/extend-kubernetes/operator/
https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/
https://thenewstack.io/kubernetes-when-to-use-and-when-to-avoid-the-operator-pattern/
https://sdk.operatorframework.io/docs/best-practices/best-practices/
Written by Nayanajith Chandradasa.
Introduction to Private 5G Networks
A Private 5G Network is an enterprise-dedicated network tailored to deliver the latest advancements of 4G LTE and 5G technology and drive the digital transformation of businesses and organisations across industries.
Private 5G provides secured communications with high bandwidth, low latency, and reliable coverage to connect people, machines, and devices. Private 5G solutions are ideal for IoT-intensive applications like intelligent manufacturing and sensitive environments like ports and banks.
Private 5G is more secure and efficient than public 5G and LTE networks: it connects only authorised users within the enterprise and processes the generated data locally, in isolation from the public network. It is also easy to deploy, operate, and scale to meet all operational needs.
Spectrum, Coverage and Speed
The range of a private 5G network can vary from a few thousand square feet to thousands of square kilometres, depending on the power of the radio transmitter, the band used, and the user's requirements. A typical 5G radio operates on low, mid, and high bands: broadly, low band sits below 1 GHz and offers the widest coverage at the lowest speeds, mid band spans roughly 1–6 GHz and balances coverage with capacity, and high band (mmWave) sits above 24 GHz and delivers the highest speeds over the shortest range.
Deployment Scenarios
The 5G system is disaggregated into independent components known as "network functions" (NFs) that communicate through standard APIs. These NFs are responsible for the operation of the mobile network, including authentication, IP address allocation, policy control, and user data management. This Service-Based Architecture makes the 5G system very flexible and able to accommodate new services and applications to meet the needs of any industry.
The Control and User Plane Separation adopted in the 5G architecture allows operators to separate the 5G control functions from the data-forwarding function. For example, the control plane can be deployed centrally, whereas the User Plane Function (UPF) can be deployed flexibly at any location within the network to accommodate various data processing requirements.
The private 5G network architecture can be deployed in different scenarios to meet each customer's needs. We can categorise the deployment based on the level of isolation and integration with the public network into three scenarios as follows:
In the first scenario, the enterprise hosts and operates the 5G network (the complete set: gNB, UPF, 5GC CP, UDM, MEC), physically isolated from the public network. Despite the high cost of deploying this scenario, it guarantees complete data security, reduces the likelihood of a data breach, and provides ultra-low-latency connections.
In a shared private 5G network scenario, an operator's public network is used to reduce installation costs. Based on business needs, the customer can choose the proportion of components they host and manage locally and the elements they share with the mobile carrier.
MEC and UPF may be deployed on-site at premises like smart factories, stadiums, and cinemas, enabling a private 5G architecture with minimal latency and room for future changes. In addition, the business owner can control the radio access network (RAN) locally to allow quick and reliable connections.
Depending on the application’s requirements, a Radio Access Network (RAN) may be installed on-site and connected to the public network via a dedicated network slice that provides the private 5G service.
Private 5G Use Cases
Zinkworks provides a Networked OT Orchestration platform purpose-built for Industry 4.0 and private 5G. Customers can use various automation solutions and ML-based prediction models developed by Zinkworks to monitor the network's performance and manage network resources more efficiently and securely. In addition, customers can create policy and service profiles with customised bandwidth, latency, and quality of service (QoS) to meet every application's needs.
Written by Mohamed Ibrahim.