How can we bridge the chasm of Private 5G?

Private 5G offers many benefits for industries that require low latency, high reliability, and large bandwidth, such as manufacturing, healthcare, and transportation. However, despite the potential, its industry adoption has been slower than expected, mainly limited to early adopters in the industrial and academic innovation sectors. In this blog post, we will discuss some of the reasons for this slow adoption and how they can be overcome.

The Challenge  

One of the main reasons for the slow adoption of private or dedicated 5G networks is a lack of trust in 5G as a new technology. Many enterprises want more evidence of its reliability before committing, especially when sensitive data or critical operations are involved, and specifically in industrial settings. The value of installing or upgrading networks to 5G is broadly appreciated, but enterprises are still falling back on proven network types for critical use cases.

Another reason for the slow adoption of private 5G networks is the lack of industrial devices that support 5G connectivity. Although 5G-enabled smartphones and tablets are becoming more common, many industrial devices, such as sensors, cameras, robots, and AMRs, are still using legacy technologies. This limits the use cases and applications that can benefit from private 5G networks.  

A third reason for the slow adoption of private 5G networks is a limited understanding of the benefits of 5G versus the capabilities of existing technologies, such as Wi-Fi 6 or LTE. Many enterprises may not see the need to switch to private 5G networks if they are satisfied with their current wireless solutions or do not have demanding requirements. At the same time, many enterprises do not have the tools or experience to work out which network type would best suit their current or future needs. Are Wi-Fi handover issues a challenge for the model of AGV planned? Is 5G needed to support new ML vision systems with onboard computing?

Private 5G networks have great potential to transform industries and enable new levels of productivity, efficiency, and innovation. However, their adoption in industry has been slow due to various challenges: lack of trust, lack of devices, and lack of understanding. To overcome these challenges, enterprises need to be educated on the opportunities and Industry 4.0 use cases that can be unlocked not only by 5G technology but by hybrid deployments of 5G, Wi-Fi, and other network technologies.

The Solution  

Often the critical element driving network planning is an accurate projection of the enterprise's future use cases and their impact on the network. This requires a full understanding of both the capabilities of the various network options and demand-side data throughput patterns.

The Zinkworks Networked Device Simulator (NDS) is built to address this challenge. It enables CSPs and connectivity resellers to model single- or multi-network deployments, such as private 5G and Wi-Fi 6, rapidly, simply, and cheaply, and to showcase their performance and suitability for a client's current or proposed industrial use cases.

By combining data on both network capabilities and use-case demands in a single 3D simulation, sales teams can easily drag and drop network infrastructure and networked equipment to visually demonstrate the impact of robots, vision systems, time-sensitive manufacturing systems, safety-critical applications, and more, in a virtual replication of a client's facility.

By using a bespoke, relatable, and accurate visual model of an enterprise's network needs, trust can be established that the proposed network solution is the right fit for those needs.

If you would like to learn more about the Zinkworks Networked Device Simulator, send us a request on our contact page and our team will get back to you: www.zinkworks.com/contact/

Written by James McNamara.



Exploring the Intersection of Industry 4.0 and 5G

As you read the title, I'm sure you have heard these words in a blog or talk before. I bet they feel like the latest series of buzzwords. After reading this article, you will be more familiar with these terms and see that they are not just empty buzzwords. Industry 4.0 and 5G are two innovative technological advancements that have the potential to change the way we live and work. Industry 4.0 refers to the fourth industrial revolution and is characterised by the integration of advanced technologies such as artificial intelligence, the Internet of Things (IoT), and robotics into traditional manufacturing and industrial processes. 5G, on the other hand, is the fifth generation of mobile networks, which promises to provide faster and more reliable communication than previous generations.

Industry 4.0 has the potential to revolutionize manufacturing and production processes by integrating IoT, advanced sensors, and AI. This leads to improved production speed, reduced costs, and increased competitiveness. In addition, Industry 4.0 enables companies to collect and analyse vast amounts of data, which can be used to improve the quality of their products and services. With the use of robotics and automation in the production process, highly automated factories can operate with minimal human intervention, 24/7, leading to increased productivity, improved quality, and reduced costs. 

The integration of IoT in Industry 4.0 allows for the creation of smart factories, where machines and devices are equipped with real-time sensors that collect and transmit data. This data can then be analysed using AI algorithms, which can optimize production processes, improve quality, and reduce downtime. Furthermore, the use of advanced digital technologies such as 3D printing, augmented and virtual reality, and cloud computing can help companies design, test, and produce products more efficiently and cost-effectively. Despite the potential loss of jobs due to automation and the need for workers to adapt to new technologies and skills, Industry 4.0 is expected to significantly impact the global economy and how we live and work. 

5G provides faster and more reliable communication, which is crucial for the successful implementation of Industry 4.0. With 5G, manufacturers can communicate with their machines and devices in real time, allowing for improved monitoring, control, and automation of the production process. This can lead to increased efficiency and enhanced productivity. The key benefits of 5G include increased speed, reduced latency, and improved capacity, allowing many devices to be connected simultaneously. This enhanced capacity will be crucial for successfully implementing the Internet of Things (IoT), as it will allow for the connection of millions of devices, from smart homes to industrial equipment. 5G also has the potential to revolutionise industries such as healthcare, education, and entertainment, allowing for the creation of new products and services that were previously not possible.

5G is a significant technological advancement that has the potential to impact the way we live and work. However, it also raises some concerns, such as the need for substantial investments in infrastructure and the potential for security and privacy issues. Despite these challenges, 5G is expected to significantly impact the global economy and provide new opportunities for innovation and growth.   

In conclusion, Industry 4.0 and 5G are two technological advancements that can change the way we live and work. Industry 4.0 has the potential to revolutionize manufacturing and production processes, leading to increased efficiency, improved quality, and reduced costs. With the help of 5G, manufacturers can communicate with their machines and devices in real-time, leading to improved monitoring, control, and automation of the production process. Despite the potential challenges associated with these advancements, such as the loss of jobs due to automation, Industry 4.0 and 5G are expected to significantly impact the global economy and how we live and work. 

At Zinkworks we have created a product called the Networked Device Orchestrator (NDO), which is purpose-built for Industry 4.0. It is designed to orchestrate connected equipment, predicting and avoiding demand spikes before they impact the network and empowering enterprise users to align their internal business rules with the available network capacity. Learn more: www.zinkworks.com/solutions/

Written by Aaron Fortune.


Kubernetes Operators: When not to create one

Over the past few years in the realm of DevOps, and Kubernetes in particular, the Kubernetes Operator pattern has been a trending topic. In my personal experience working on different projects for different applications, it was evident that some software development teams or organizations were too quick to jump on the Kubernetes Operator bandwagon, without analyzing the real-world problem they were trying to solve through the implementation of a Kubernetes Operator.

More often than not, a Kubernetes Operator was implemented only because it was seen as the trending implementation pattern in the Kubernetes domain at that point. I have seen software development teams go to the extent of implementing wrapper CRDs or controllers around an open-source Kubernetes Operator, so that certain organizational practices and standards were encapsulated in these wrapper controllers while giving the organization the ability to deploy and use the open-source Operator in its application stack. Simply put, all that cost, time, and effort from the developers was invested only to enable the use of a specific, well-known Kubernetes Operator in the application stack, while the fundamental question had been forgotten: why should anyone have a Kubernetes Operator, and what problems is it actually supposed to solve in your application?

The purpose of this article is not to criticize the use of the Kubernetes Operator pattern or portray it as a bad practice; in fact, quite the opposite. The Operator pattern is a highly useful concept in a Kubernetes stack that helps DevOps engineers and software developers tackle certain complexities of an application's lifecycle on a Kubernetes platform. What we intend to illustrate in this article are the real-world use cases the Operator pattern is supposed to solve, by first discussing the use cases for which it might not be applicable.

Let us first look at what is meant by the operator pattern in Kubernetes. There are three key points an operator implementation should satisfy:

  1. It should be a piece of software that automates a repeatable task of a stateful application and replaces the necessity of a human operator.
  2. It should be a software extension to the Kubernetes API that makes use of a Custom Resource.
  3. It should follow the Kubernetes principles, mainly the Control Loop.

While all three points above are equally important, we think the most important of the three is the first one.

Now let us jump into our main topic: when should we not create an operator, or, to phrase it more diplomatically, "When should we rethink our decision to write an operator?"

Is my application a stateful application?

Even though the official Kubernetes documentation on the operator pattern does not strictly state that the application you are building the operator for must be a stateful application, what we should understand is that an application which does not have state will not really require a controller to handle its deployment lifecycle. Notice how we have used the term "controller" here instead of "operator"; we will discuss this in a moment.

The reason is simple: if a particular application does not have state, you can pretty much use the native features of Kubernetes, such as a Deployment controller, to handle the full lifecycle of that application. But when an application is stateful, there is a possibility that you cannot freely replace a given replica of that particular application instance with a new replica (as is supposed to happen in the Kubernetes world all the time).

There is a chance that some additional work, such as leader election, handling checkpoints, managing quorums, or restoring a backup, must be done when bringing up a new replica. Some of these tasks may not be part of the application code itself; they may be manual steps that a human operator must execute. This is where a Kubernetes operator can really pay off: you can write a piece of software that encapsulates all that domain- and application-specific logic and then attach it to the Kubernetes API as an extension.

Where do the CRDs and the control loop come into play in this context? It is quite simple: CRDs pave the way for the end user to declare the desired state of the application that the operator has to maintain. The end user creates or updates a CR declaring the spec of the desired state, and then, through the control loop, the operator tries to bring the application to that desired state. It also reports the status of the application back to the same CR.
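
To make this concrete, here is a minimal sketch of what such a custom resource could look like. The DatabaseCluster kind, its API group, and its spec fields are all hypothetical; a real operator would define its own schema through a CRD:

apiVersion: example.com/v1alpha1
kind: DatabaseCluster
metadata:
  name: orders-db
spec:
  # Desired state, declared by the end user.
  replicas: 3
  version: "14.2"
status:
  # Reported back by the operator through its control loop.
  readyReplicas: 3
  phase: Running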

Does this mean we cannot write a similar operator for a stateless application? You absolutely can. However, in that case it should rather be considered a controller, not an operator. Also, if you are thinking of writing such a controller to manage the lifecycle of a stateless application, it would rather be overkill, because there should be other means of achieving the same thing using just the native Kubernetes API, without extending it with a whole new set of CRDs. It could even mean that your actual problem lies somewhere else, and you are misusing the operator pattern because you either want to use a Kubernetes object to handle a non-Kubernetes resource in your application, or you are trying to fix a configuration management problem with it.

Any code is a liability

There are many frameworks available now that make implementing an operator an easy task. However, it still requires a certain effort from developers to understand the complexity of the application and the actual business requirement that needs to be addressed through the operator's control loop.

It is also common nowadays for certain teams or organizations to develop operators for third-party open-source applications that they do not own. The logic written into an operator is an imperative workflow that couples your application's business logic into the Kubernetes control loop. This may not look concerning initially; however, the team that develops the operator will eventually be responsible for maintaining the operator codebase to support changes in the actual application that can impact its deployment lifecycle. Even though writing an operator is not a difficult task now, given the availability of different frameworks, it will still be more complex than writing a piece of configuration that supports the deployment lifecycle using native Kubernetes resources.

Also, unless you have a clear understanding of the changes coming to the particular application you are writing the operator for, you are putting yourself in a position where you have to invest dedicated time and effort just to keep the operator codebase in line with those changes as they arrive.

This is why it is recommended to leave the decision of writing an operator to the application owners, or at least to gain enough understanding of the future changes the application may face before starting to write an operator for it yourself.

In either case, it is better to be liable for a piece of configuration that you can easily modify than for an entire operator codebase. So, it is a wise decision to seek alternative ways of handling complexities in the lifecycle of an application, rather than writing a piece of code that works as an operator and making yourself liable for it, especially if you do not own that application.

Resource Concerns: Operators are not exactly part of your actual workload

In certain cases, you may have to run your application in a resource-constrained cluster. When you have an operator to manage the lifecycle of this application, the operator itself will require a certain level of resources (CPU, memory, network) allocated to it from the same cluster where you run the workload.

What we expect from an operator is to maintain the lifecycle of a given number of application instances by reconciling their state to the desired state specified by the user through the custom resource. Operators are therefore part of your control plane rather than the workload itself. Now imagine a situation where you must provision a large number of application instances.

This means a few things:

  1. You will have to create an equal number of custom resource objects.
  2. The operator instance or instances will be running reconciliation loops to handle the state of all the application instances represented by each individual custom object. This could be a resource-intensive operation.
  3. You are using the Kubernetes etcd store to keep track of all the custom objects, and your operator will be communicating with the Kubernetes API quite frequently.
  4. On all the other occasions, when your custom objects are already in the desired state, the operator will sit idle, yet it still consumes resources from the cluster.

As you can see, having an operator manage such a large number of custom objects could impact your cluster, resource-wise, in multiple ways.

Therefore, when you want to run your stateful application on a Kubernetes stack, writing an operator may sound appealing, but you must remember the impact it may have on the cluster resources that are primarily meant for running your workload.

Security Concerns: Is it worth running an operator with elevated privileges for the duration of your application?

This is one of the reasons why writing an operator should be your last resort: an operator is a highly privileged entity compared to your actual application.

If we take a step back and consider what your operator does, all it does is maintain the state of your application instances to the desired state specified by the CR. To do so, it requires a certain level of privilege against your Kubernetes API. Depending on the type of resource objects the operator is supposed to manage, you can grant permissions to specific Kubernetes resources in either "namespace" scope or "cluster" scope. In practice, what we have seen in certain existing open-source operators is that they sometimes get deployed with broader RBAC permissions to your cluster resources than they require.

Nevertheless, what is important here is that the operator will be running with all those permissions to your Kubernetes cluster for the duration of your application, even though actual reconciliation of the custom resources happens only occasionally. So, the amount of time the operator genuinely needs these permissions to carry out its functionality is only a fraction of the time it will actually be running.

Considering these aspects of an operator, if you are thinking of writing one that will mostly perform one-off tasks or tasks that happen infrequently (e.g. backup, restore), it could be worthwhile to analyse the possibility of using the Jobs or CronJobs available in the standard Kubernetes API rather than investing all your effort in building something as complex as a Kubernetes operator.
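
For example, a nightly backup that might otherwise tempt you to write an operator can often be expressed as a plain CronJob. The sketch below assumes a hypothetical backup-tool image and target; only the CronJob mechanics are the point:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: app-backup
spec:
  schedule: "0 2 * * *" # run every night at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: backup
            image: backup-tool:latest # placeholder image
            args: ["--target", "s3://backups/app"] # placeholder arguments
          restartPolicy: OnFailure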

Configuration Management: Operators should not be a solution to your configuration management problem

Something we have seen in common across most operator implementations is that they help end users manage application configuration through a well-structured object like a custom resource. A custom resource has a predefined schema, managed through a custom resource definition (CRD). This CRD is dedicated to the application, so validating the inputs a user can provide to configure the application state is more controlled and streamlined. The standard ways for a user to pass application configuration, by contrast, are the more generic ConfigMap or Kubernetes Secret.

A frequent implementation pattern we have seen is that certain operators are implemented just to take advantage of this structured configuration that a CRD provides. These operators mainly aim to expose the application configuration via a CRD so that there is control over user inputs. They should rather be called controllers than operators, because they do not do anything specific to handle application state during the application lifecycle. While anyone is free to write a piece of software that handles an application's configuration through a CRD, it may also be overkill, because much more generic tooling is available in the Kubernetes ecosystem to achieve the same thing. For example, for someone using Helm to manage the deployment and lifecycle of an application, features such as "--verify" or JSON Schema validation are available to validate user inputs, which can then be mapped to a generic resource such as a ConfigMap.

The question we should really ask here is: "Is it really worth writing an application-specific piece of software to manage configuration when much more generic tooling is available to specifically address the configuration concerns of applications deployed on Kubernetes?"

These are the key areas we think a DevOps engineer or software developer should consider before starting to write a Kubernetes operator. We would like to end this article with the following note: "The fact that it is possible to write a Kubernetes operator as a solution to a given problem does not always mean you should write one."

References:

https://kubernetes.io/docs/concepts/extend-kubernetes/operator/

https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/

https://thenewstack.io/kubernetes-when-to-use-and-when-to-avoid-the-operator-pattern/

https://sdk.operatorframework.io/docs/best-practices/best-practices/

Written by Nayanajith Chandradasa.


Introduction to Private 5G Networks

A Private 5G Network is an enterprise-dedicated network tailored to deliver the latest advancements of 4G LTE and 5G technology and drive the digital transformation of businesses and organisations across industries.

Private 5G provides secure communications with high bandwidth, low latency, and reliable coverage to connect people, machines, and devices. Private 5G solutions are ideal for IoT-intensive applications like intelligent manufacturing and for sensitive environments like ports and banks.

Compared with public 5G and LTE networks, private 5G offers a more secure and efficient approach: it connects authorised users only within the enterprise and processes the generated data locally, in isolation from the public network. It is also easy to deploy, operate, and scale to meet all operational needs.

Spectrum, Coverage and Speed

The range of a private 5G network can vary from a few thousand square feet to thousands of square kilometres, depending on the power of the radio transmitter, the band used, and the user's requirements. A typical 5G radio operating on the low, mid, and high bands covers roughly the following frequency ranges:
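
  • Low band (below 1 GHz): the widest coverage, but the lowest throughput.
  • Mid band (roughly 1-6 GHz): a balance of coverage and capacity.
  • High band / mmWave (roughly 24-40 GHz): the highest throughput, over short distances.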

Deployment Scenarios

The 5G system is disaggregated into independent components known as "network functions" (NFs) that communicate through standard APIs. These NFs are responsible for the operation of the mobile network, including authentication, IP address allocation, policy control, and user data management. This Service-Based Architecture makes the 5G system very flexible and able to add new services and applications to meet the needs of any industry.

5G Architecture

Figure 1 - 5G Architecture

Control and User Plane Separation, adopted in the 5G architecture, allows operators to separate the 5G control functions from the data forwarding function. For example, the control plane can be deployed centrally, whereas the user plane function (UPF) can be deployed flexibly at any location within the network to accommodate various data processing requirements.

The private 5G network architecture can be deployed in different scenarios to meet each customer's needs. Based on the level of isolation and integration with the public network, we can categorise deployments into three scenarios as follows:


Figure 2 - Isolated Private 5G
Figure 3 - Shared Private 5G Network

1 - Isolated Private 5G Network

The enterprise hosts and operates the 5G network (the complete set: gNB, UPF, 5GC CP, UDM, MEC), and the network is physically isolated from the public network. Despite the high cost of deploying this scenario, it guarantees complete data security, reduces the likelihood of a data breach, and provides ultra-low-latency connections.

2 - Shared Private 5G Network

A shared private 5G network uses an operator's public network to reduce installation costs. Based on business needs, the customer can choose the proportion of components they host and manage locally and the elements they share with the mobile carrier.

MEC and UPF may be deployed on-site on premises such as smart factories, stadiums, and cinemas, enabling a private 5G architecture with minimal latency and headroom for future changes. In addition, the business owner can control the radio access network (RAN) locally to allow quick and reliable connections.

3 - Private 5G Over Slicing

Depending on the application’s requirements, a Radio Access Network (RAN) may be installed on-site and connected to the public network via a dedicated data slice that provides private 5G service.

Private 5G Use Cases

Smart ports and manufacturing facilities:
Connect autonomous cranes, robots, drones, legacy machines, or edge gateways within facilities to achieve industrial network service level metrics.
Enable security applications:
Connect video cameras, fingerprint scanners, face detection, and automatic licence plate readers to private networks with guaranteed and dedicated bandwidth.
Enable smart operations:
Intelligent and secure operations are enabled using various privately tailored applications such as geofencing, digital twins, mobility prediction, and task scheduling.
Zinkworks and Private 5G

Zinkworks provides a Networked OT Orchestration platform purpose-built for Industry 4.0 and private 5G. Customers can use various automation solutions and ML-based prediction models developed by Zinkworks to monitor the network's performance and manage network resources more efficiently and securely. In addition, customers can create policy and service profiles with customised bandwidth, latency, and quality of service (QoS) to meet every application's needs.

Written by Mohamed Ibrahim.


RxJava Reactive Streams

Introduction to Reactive Streams

Before we talk about what RxJava is and can fully understand it, we must first comprehend some of the concepts and principles behind the creation of the API. In reality, RxJava is just part of a broader project called ReactiveX, which applies the concepts explained here not only to Java but also to other platforms such as Python, Go, Groovy, C#, and many others. It is worth mentioning that ReactiveX is not the only project to implement these ideas: the Spring Framework also has its own implementation, called Spring WebFlux (a result of the Spring Project Reactor).

Reactive Streams is an initiative to provide a standard for asynchronous stream processing with non-blocking back pressure.

— reactive-streams.org

RxJava, as well as WebFlux, is an implementation of Reactive Streams. But what exactly does the statement above mean? A traditional method, when called, normally blocks until it finishes whatever it needs to do. If the method is only doing mathematical calculations or checking some logic on its arguments, the non-blocking nature of asynchronous stream processing will not make much difference. But when we start talking about accessing the file system, saving a file to a device, reading information from a service, or communicating with a remote microservice, that is when things start to get interesting.

A particularly interesting scenario for Reactive Streams is a microservices environment, such as the cloud. In such architectures we have many services talking to each other, and every time this communication takes place, the service that initiated it must wait for some time before it can take action. On top of that, the agent providing the service usually responds not to a single client but to multiple ones. It is in this sequence of events that Reactive Streams excels!

Reactive Streams solves the problem effectively by using something called an event loop. Every time a new request comes in, the thread handling it does not get blocked: after dispatching the request, it goes and does something else instead of waiting. Only when the request is done does the event loop add a new event to the queue, and the next available thread processes it. This means no wasted resources: the usage time of every thread is used to the fullest.

Fig. 1 – Reactive Event Loop

A non-reactive method needs to instantiate a new thread every time a new request is made, which means that if you have too many simultaneous requests you can end up with multiple threads sitting there just waiting, doing nothing and consuming resources.

Last but not least, Reactive Streams implementations must support backpressure. This means that the receiver (Subscriber) of a reactive stream can control the number of events it is able to process. This is useful in cases where the sender (Publisher) produces more events than the receiver can handle; backpressure is a mechanism that allows the sender to slow down event generation so that the receiver can process events properly.
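
As a minimal sketch using the RxJava 3 API (the item counts and sleep times are arbitrary), the pipeline below moves a fast producer onto another thread; the observeOn operator requests items from upstream in bounded batches, so the slow consumer is never flooded:

import io.reactivex.rxjava3.core.Flowable;
import io.reactivex.rxjava3.schedulers.Schedulers;

public class BackpressureDemo {
    public static void main(String[] args) throws InterruptedException {
        Flowable.range(1, 1000)                      // a fast producer of 1000 items
                .observeOn(Schedulers.computation()) // async boundary; requests upstream items in bounded batches
                .subscribe(i -> {
                    Thread.sleep(2);                 // simulate a slow consumer
                    if (i % 200 == 0) {
                        System.out.println("Processed " + i);
                    }
                });
        Thread.sleep(5000); // keep the JVM alive while the asynchronous pipeline drains
    }
}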

Reactive Streams can be considered an evolution of the well-known observer pattern plus the addition of the functional paradigm, bringing a very powerful API to the mix. This API allows for the creation of a chain of methods, enabling a very declarative style of programming as well as abstracting away low-level threading, synchronisation, thread safety, concurrent data structures, and so on.

Reactive Reference Implementation

As mentioned, it is not only the ReactiveX project, and more specifically RxJava, that implements the Reactive Streams standard, which means you will find similar structures and elements in different projects, although under distinct names depending on the project.

At a very high level, every Reactive Streams implementation has a Publisher, the entity that produces the data to be consumed by a Subscriber. Another important architectural element is the Subscription. The Subscription represents the message control link between Publisher and Subscriber, through which the Subscriber can inform the Publisher how much data it can handle; in other words, it is the element that makes backpressure possible. In addition, between Publisher and Subscriber we normally also have a chain of functions, known as the function chain. It is through this chain of functions that all sorts of operations are applied over the stream, such as map, filter, flatMap, and many more.
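
As a small illustration of such a chain in RxJava 3 (the string values are arbitrary):

import io.reactivex.rxjava3.core.Flowable;

public class FunctionChainExample {
    public static void main(String[] args) {
        Flowable.just("alpha", "beta", "gamma", "delta")
                .filter(s -> s.length() > 4)     // keep items longer than four characters
                .map(String::toUpperCase)        // transform each remaining item
                .subscribe(System.out::println); // prints ALPHA, GAMMA, DELTA
    }
}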

Fig. 2 – Reactive Streams Base Classes

Keep in mind that Reactive Streams has its roots in the functional paradigm, and therefore knowledge of concepts such as immutability, pure functions, higher-order functions, etc., is essential to fully understand RxJava and properly use its API.

Some RxJava Code at Last

I know there is a lot of information to absorb before the first lines of code, but trust me, what I presented above will save you a lot of trouble when developing with a reactive functional programming API such as RxJava.

Hello World

package rxjava.examples;

import io.reactivex.rxjava3.core.*;

public class HelloWorld {
    public static void main(String[] args) {
        Flowable.just("Hello world").subscribe(System.out::println);
    }
}

Looking at this simple hello-world code might seem odd to someone used to working only with traditional object-oriented programming (OOP), but now that we have set the scene for reactive functional programming in the previous sections, it is much easier to understand what is going on here.

The first thing to note is the use of the class Flowable. It is important to remember that here everything is a constant flow of data, a stream, and the Flowable class represents exactly that. Even to print a single String, you need to provide it through a stream. For such cases, Flowable offers the just method, which creates a stream containing just one item. You can think of Flowable as the Publisher mentioned before, which means you need to subscribe a Subscriber to it to read what comes from the stream. Here the subscriber simply prints out whatever the stream emits.

This API has much more power and flexibility than this simple hello-world example shows, but it is when dealing with millions of items that the Reactive Streams approach really shines.

Of course, there is a lot more to say about RxJava; I have barely scratched the surface here. Apart from the Flowable and Observable base classes, there are also the Single, Completable, and Maybe base classes, which deal with more specific situations.
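
As a brief, illustrative taste of those specialised types (RxJava 3; the values are arbitrary):

import io.reactivex.rxjava3.core.Completable;
import io.reactivex.rxjava3.core.Maybe;
import io.reactivex.rxjava3.core.Single;

public class SpecialisedTypes {
    public static void main(String[] args) {
        // Single: exactly one item or an error.
        Single.just(42).subscribe(v -> System.out.println("Single: " + v));

        // Maybe: zero or one item; defaultIfEmpty turns it into a Single.
        Maybe.<Integer>empty()
             .defaultIfEmpty(0)
             .subscribe(v -> System.out.println("Maybe: " + v));

        // Completable: no items, just completion or an error.
        Completable.complete().subscribe(() -> System.out.println("Completable: done"));
    }
}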

Talking about everything RxJava is able to do would take many more pages, not a simple article like this one. The goal here is to give a high-level overview of RxJava and the main concepts behind any Reactive Streams application, as well as the reactive functional programming paradigm.

Final Thoughts

I hope to explore the RxJava API further in the future, but this article explains the basics behind any Reactive Streams implementation, which should enable the reader to quickly understand any of them.

Also, this article does not present examples of how powerful the Reactive Streams standard is compared with the traditional blocking approach. To give readers an insight into its power, I conducted a small experiment in which I implemented a very simple REST service using Reactive Streams versus a traditional blocking one, and the result was pretty impressive.

In closing, I will leave the reader to draw their own conclusions from the result graphs of this experiment:

Fig. 3 – Traditional Blocking API Results

Fig. 4 – Reactive Streams API Results

Written by Berchris Requiao.


Zinkworks Apprenticeship Programme – Ingrid’s Experience

The Zinkworks apprenticeship programme is run in association with the Fastrack into Information Technology (FIT) programme.

Ingrid joined Zinkworks through the FIT apprenticeship programme with Athlone Training Centre in 2021 and is about to begin her final semester. Here she describes her experience at Zinkworks so far.

“I am making a career transition and balancing studies, work, and family time. I am a participant in the FIT apprenticeship program at Zinkworks as well as a master’s degree student in Mobile Development at PUC-Minas Brazil. Zinkworks has allowed me to put into practice everything I studied in college and the FIT course.

When I started at Zinkworks, I had a basic understanding of C and C++, but I have been learning Java on my own with the help of my colleagues. My knowledge was put to use within a large and complex system, and I was mentored by members of my team who gave me effective solutions and taught me how to resolve errors, and about dependency management and automation tools. Team members helped me understand the project and how to find solutions to bugs along the way. As a beginner, I found the start difficult, but with everyone around me helping, I felt confident that I could make mistakes and try again.

FIT and Zinkworks encourage their apprentices to acquire industry IT certificates during their studies. I chose to obtain the Introduction to Programming Using Java certification from Microsoft at the beginner level because I had Java experience within the company. To prepare for the certification, I used video courses on O'Reilly, a subscription service that Zinkworks provides.

I'm now studying for a JavaScript Specialist certification at intermediate level. I don't work with JavaScript currently, but my studies make up for it. JavaScript can be a little challenging for some developers; it is a language that can be used with a variety of frameworks and technologies. It's challenging, but I love it! I've done some projects for college, and as my dream career is mobile development, I must learn frontend too. Access to O'Reilly has allowed me to read JavaScript books, take courses, practise old exam questions, and study with official CIW material.

I am very lucky to be part of Zinkworks, and FIT supports me in my professional growth. I work with many people who are willing to help and show me where I can improve every day. Thank you to everyone who is supporting me and helping me to grow every day.”

Ingrid is one of three apprentices in their final year of study. This year, 2022, six more apprentices have joined Zinkworks and begun their work experience as part of the FIT apprenticeship programme.

To find out more about the apprenticeship programme contact us at info@zinkworks.com.

Click here to learn about careers at Zinkworks.


The experience of being an outed LGBTQ+ person in tech - Meet Adheli

Software Developer Adheli Tavares speaks about her experiences being a member of the LGBTQ+ community.

1. How would you describe your journey to becoming a proud member of the LGBTQ+ community?

Complicated and complex. It's a process that is always ongoing. There is confusion and doubt until you find yourself comfortable in your own skin. Therapy was an ally in my process, helping me understand that all those feelings of inadequacy were uncalled for.

2. Have you found any differences between living in Ireland and the attitude towards the LGBTQ+ community (or you personally) versus living in other parts of the world?

Definitely. I think the main reason is that people here don’t really care.  Although I have been targeted a few times with some not very nice words, I do feel a lot more comfortable walking hand in hand with my partner here than back home.

3. How have you found the attitudes of others towards members of the community in the tech world (globally and in Ireland)?

I have met mostly queer women in the tech industry. Because it is still a male-dominated world, the behaviour I encountered the most was "so you like women, then you are one of the guys", which is wrong. This shows another level of sexism and the old stereotypes around how lesbians must look and act like a man. I accepted this behaviour previously, mostly to feel included and to avoid the mean comments, most of them hidden behind "jokes".

4. Have you found Zinkworks to be an open and accepting place? How so?

Yes, Zinkworks has been super cool to work with. I think it kind of goes back to the fact people don’t really care as long as you’re a good person and good employee!  Acceptance and respect go along with mental health, which is another thing that Zinkworks has been the best place for regarding support.

5. What further support is needed for the LGBTQ+ community in the workplace and socially?

The "removal" of the necessity of coming out is my big dream. It is good that English is a friendly non-binary language, which helps a lot; it has taken a lot of the stress out of referring to my partner and out of the questioning when someone decides to label themselves. I think how one describes themselves is very personal, so for another person, even someone they do not know, to question every little detail is demeaning.

6. What can allies of the community do to support members in their workplace?

First of all, respect their identity (gender/sexuality) and do not let it create a pre-judgement of their work abilities. Let them be heard. If you are unsure how to address someone, ask their name and their pronouns.

When people ask questions, we can differentiate between someone who has a genuine question because they are curious and want to understand, and someone who is just lazy and expects us to be their personal LGBTQ+ dictionary.

At the end of the day, I have been through so much: working with international teams (before moving to Ireland), learning the quality side of development, managing teams, and laughing with teammates when everything went right. Crying when things went to space. Staying late to fix something, or leaving early because it was too tiring. And you, a non-LGBTQ+ person, might think "that sounds like some of the stuff I have done", and that is because we are just like you. People.

Thank you to Adheli for sharing her experiences.

If you need support as part of the LGBTQ+ community you can visit here.

To read more blogs from Zinkworks employees visit here.


5 ways to improve your workspace based on science

Regardless of where we work, at home or in an office, we can all do a few simple things to our work environment to optimise our productivity. Below is a shortlist of the most effective ones, none of which require purchasing any products or equipment. Anyone can use these tools to:

  • Maintain alertness and focus longer.
  • Improve posture and reduce pain (neck, back, pelvic floor, etc).
  • Tap into specific states of mind (creativity, logic, etc.) for the sake of work.

1. Sit or stand (or both)

It is best to arrange your desk and workspace so that you can work sitting for a while (10-30 minutes or so for most people), then shift to working standing for 10-30 minutes, and then go back to sitting. Research also shows that it's a good idea to take a 5-15 minute stroll after every 45 minutes of work. There is evidence that such a sit-stand approach can reduce neck, shoulder, and back pain.

2. The effect the TIME of day has on you

We are not the same person across the different hours of the day, at least not neurochemically. Let's call the first part of your day (~0-8 hours after waking up) "Phase 1." During this phase, the chemicals norepinephrine, cortisol, and dopamine are elevated in your brain and body. Alertness can be further heightened by sunlight viewing, caffeine, and fasting.

Phase 1 is ideal for analytic “hard” thinking and any work that you find particularly challenging. It isn’t just about getting the most important stuff out of the way; it is about leveraging your natural biology toward the best type of work for the biological state you are in.

"Phase 2" is ~9-16 hours after waking. At this time, serotonin levels are relatively elevated, which lends itself to a somewhat more relaxed state of being, optimal for brainstorming and creative work.

"Phase 3" is ~17-24 hours after waking, when you should be asleep or trying to sleep. During this phase, do no hard thinking or work (unless, of course, you must), keep your environment dark or very dim, and keep the room temperature low (your body temperature needs to drop by 1-3 degrees to fall asleep and stay asleep).

3. Where your screen is and where you look ARE important

There's a relationship between where we look and our level of alertness. When we look down toward the ground, neurons related to calm and sleepiness are activated; looking up does the opposite. This might seem wild, but it makes sense given the neural circuits that control looking up and down.

Standing or sitting up straight while looking at a screen or book that is elevated to slightly above eye level will generate maximal levels of alertness. Getting your screen at or above eye level, rather than looking down at it, may take a bit of workspace configuration, but it is worth it for the benefits to your mind and work.

4. Set your background sound

Some people like to work in silence, whereas others prefer background noise, and some kinds of background noise are particularly good for our work output. Working with white, pink, or brown noise in the background can be good for work bouts of up to 45 minutes, but not for bouts that last hours, so use it from time to time. These tracks are easy to find (and free) on YouTube or in various apps (search for "white, pink, or brown noise").

Binaural beats are a neat, science-supported tool for placing the brain into a better state for learning. As the name suggests, binaural beats consist of one sound frequency played in one ear and a different frequency in the other, so they only work with headphones. Binaural beats (around 40 Hz) have been shown to increase certain aspects of cognition, including creativity, and may reduce anxiety.

5. Room type can make a difference

There is an interesting workspace-optimisation effect called the "Cathedral Effect," in which thinking becomes "smaller" and more focused on analytic processing when we are in small visual fields. The opposite is also true: working in high-ceilinged spaces elicits abstract thought and creativity, whereas working in low-ceilinged spaces promotes detailed work. Even relatively small differences (a two-foot discrepancy in ceiling height) have been shown to elicit such differences. The takeaway: consider using different locations (rooms, buildings, indoors or outdoors) to help access specific brain states and the types of work they favour.

Written by Release Manager Colm Nibbs.

Click here to read more blogs from Zinkworks.


GitHub & Microsoft Teams Integration

Seeking out opportunities to enhance productivity and collaboration is crucial when working in technology. The GitHub integration for Microsoft Teams allows developers to improve their communication by automatically posting messages about issues, pull requests, deployment status, and more. Once the GitHub and Microsoft Teams platforms are linked, various options become available, such as commenting on, closing, and reopening issues, or even making pull requests, without leaving your chat.

Developers can spend a considerable amount of time communicating about code changes, monitoring issues, and other GitHub-related activities. This integration streamlines that communication and optimises the developer's time, while also encouraging faster discussions on code reviews. All of this happens right in your Microsoft Teams channel, which tends to be the natural place for ideas and collaboration.

Step 1 – Installation

First, we are going to install the GitHub App in our Microsoft Teams.

  • Go to the Microsoft Teams app store and install the GitHub app, or install it directly from here.

  • Upon installation, a welcome message is displayed.
  • Use the @GitHub handle to start interacting with the app.

Step 2 – Get Started

At this stage, our Microsoft Teams and GitHub user accounts are not yet linked. The following steps will link the two accounts.

  • Authenticate to GitHub using the @github sign-in command, or try to subscribe to your repository.

  • A "Connect GitHub account" message is displayed, as shown in the following image. Just click on the button to connect the GitHub account.

Once the channel is created.

  • Go to the channel and look for the GitHub icon.
    • If the icon is not visible at the bottom of the channel, click on the "…" and search for the GitHub integration with Microsoft Teams, as shown in the following image.

  • Once GitHub is set up in Microsoft Teams, we can subscribe to the repository, as shown in the following image.

  • Once the repository is subscribed, we will receive notifications as described above.
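
For reference, subscriptions are managed with plain chat commands. A few common ones are sketched below (owner/repository is a placeholder, and the exact command set may vary by app version; see the documentation linked at the end):

@github signin
@github subscribe owner/repository
@github unsubscribe owner/repository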


This whole process only needs to be done once. After that, we can subscribe to as many repositories as we need by repeating just the subscription steps above.


GitHub provides a lot of features to customise your subscription and keep the team up-to-date without switching to different platforms.

For more information go through the GitHub Documentation:

https://github.com/integrations/microsoft-teams/blob/master/Readme.md


Click here to read more of Zinkworks' blogs.


A method to override external domain name resolution in CoreDNS

There are occasions when you must have more control over the domain name resolution that is taking place inside a Kubernetes cluster.
For example, imagine the following scenarios.
  1. You are trying to replicate, in your dev K8s cluster, an issue that you observed in a different environment. To successfully replicate the issue, you need to ensure that you use the same hostnames and application FQDNs that were used in the original environment.
  2. You are testing an application running on Kubernetes that requires access to third-party external endpoints over the internet. You need to ensure that these third-party external connections are directed towards some known test services, not the actual ones.
  3. You need an alternative, more centralised way to control the responses that application pods receive to DNS queries (e.g. you do not want to use the "hostAliases" option on K8s pods).
On all these occasions, you can use the following method to override the responses sent to your application pods by the internal Kubernetes DNS.

Step 1

First, we are going to run a "dnsmasq" server as a pod in the same Kubernetes cluster. It will be associated with a ConfigMap through which we can manipulate DNS records. The dnsmasq pod will also be exposed via a ClusterIP-type Kubernetes service.

To run a "dnsmasq" server as a container, we first create a docker image using the following Dockerfile.

FROM alpine:latest
RUN apk --no-cache add dnsmasq
VOLUME /var/dnsmasq
EXPOSE 53 53/udp
ENTRYPOINT ["dnsmasq", "-d", "-C", "/var/dnsmasq/conf/dnsmasq.conf", "-H", "/var/dnsmasq/hosts"]

As you may notice in the ENTRYPOINT above, we run "dnsmasq" in the foreground (-d) and pass it a configuration file (-C) and a hosts file (-H). These files do not need to exist in the docker image at build time; at runtime we will mount the relevant configuration files to /var/dnsmasq/conf/dnsmasq.conf and /var/dnsmasq/hosts accordingly.

Now you can build the docker image with the "docker build -t dnsmasq:latest ." command. Afterwards, you may push it to a docker image repository that is accessible from your Kubernetes cluster.
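
As a quick sketch (registry.example.com is a placeholder for your own registry):

$ docker build -t dnsmasq:latest .
$ docker tag dnsmasq:latest registry.example.com/dnsmasq:latest
$ docker push registry.example.com/dnsmasq:latest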

Step 2

Now let us create a couple of ConfigMaps to pass the dnsmasq.conf and hosts files to the running dnsmasq pod.

In the example below, we create a fake DNS record that resolves www.zinkworks.com to the address 10.100.1.1. We also create a hosts-file record that maps example.zinkworks.com to 10.100.1.2.

# dnsmasq.conf file
kind: ConfigMap
apiVersion: v1
metadata:
  name: dnsmasq-conf
  labels:
    app: dnsmasq
data:
  dnsmasq.conf: |
    address=/www.zinkworks.com/10.100.1.1
---
# dnsmasq hosts file
kind: ConfigMap
apiVersion: v1
metadata:
  name: dnsmasq-hosts
  labels:
    app: dnsmasq
data:
  hosts: |
    10.100.1.2 example.zinkworks.com

You can create the above ConfigMaps in the namespace where you will run the dnsmasq pod, using the "kubectl create -f" or "kubectl apply -f" command.

Next, let us create the dnsmasq pod using the docker image built in step 1. We will also mount the ConfigMaps created previously to /var/dnsmasq/conf/dnsmasq.conf and /var/dnsmasq/hosts as files.

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: dnsmasq
  name: dnsmasq
spec:
  containers:
  - image: dnsmasq:latest
    name: dnsmasq
    ports:
    - containerPort: 53
      protocol: UDP
      name: udp-53
    volumeMounts:
    - name: dnsmasq-conf
      mountPath: /var/dnsmasq/conf
    - name: dnsmasq-hosts
      mountPath: /var/dnsmasq
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
  dnsPolicy: None
  dnsConfig:
    nameservers:
    - "8.8.8.8"
  restartPolicy: Always
  volumes:
  - name: dnsmasq-conf
    configMap:
      defaultMode: 0666
      name: dnsmasq-conf
  - name: dnsmasq-hosts
    configMap:
      defaultMode: 0666
      name: dnsmasq-hosts

Above is an example pod definition that runs the dnsmasq container with the desired configuration. Notice how the dnsmasq-conf and dnsmasq-hosts ConfigMaps are mounted as files into the pod. Also notice the "NET_ADMIN" capability given to the container; this allows the container to serve UDP port 53.

You can create the pod by running the "kubectl create -f" or "kubectl apply -f" command on the above pod definition.

Once the dnsmasq pod is created, you can confirm if it is running fine by looking at the pod logs.

Here is an example of the output. In this case, it is assumed that the dnsmasq pod was created in the dnsmasq namespace.

$ kubectl logs --namespace dnsmasq dnsmasq -f

dnsmasq: started, version 2.86 cachesize 150
dnsmasq: compile time options: IPv6 GNU-getopt no-DBus no-UBus no-i18n no-IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-cryptohash no-DNSSEC loop-detect inotify dumpfile
dnsmasq: reading /etc/resolv.conf
dnsmasq: using nameserver 8.8.8.8#53
dnsmasq: read /etc/hosts - 7 addresses
dnsmasq: read /var/dnsmasq/hosts - 1 addresses

Finally, we will expose the dnsmasq pod via a kubernetes service. 

apiVersion: v1
kind: Service
metadata:
  name: dnsmasq
  labels:
    app: dnsmasq
spec:
  type: ClusterIP
  ports:
  - name: udp-53
    targetPort: udp-53
    port: 53
    protocol: UDP
  selector:
    app: dnsmasq

In the next steps, we will need the service IP assigned to the dnsmasq service in order to send our DNS queries to the dnsmasq container. To find it, run the "kubectl get svc" command in the namespace where the dnsmasq pod runs.

In our example the dnsmasq service IP is 10.108.96.48. 

$ kubectl get svc --namespace dnsmasq
NAME      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
dnsmasq   ClusterIP   10.108.96.48   <none>        53/UDP    15s

You can further check if the domain name resolution is working as expected in the dnsmasq container by running a few nslookup commands from a different pod. 

In the example below, we run a pod called ubuntu-debugger in the same cluster, from which we can check the responses sent by dnsmasq. It is assumed that the ubuntu-debugger pod has the nslookup utility.

$ kubectl exec -it ubuntu-debugger -- /bin/bash

root@ubuntu-debugger:/# nslookup www.zinkworks.com
Server:  10.96.0.10
Address: 10.96.0.10#53

Non-authoritative answer:
Name: www.zinkworks.com
Address: 46.101.47.85

root@ubuntu-debugger:/# nslookup www.zinkworks.com 10.108.96.48
Server:  10.108.96.48
Address: 10.108.96.48#53

Name: www.zinkworks.com
Address: 10.100.1.1

root@ubuntu-debugger:/# nslookup www.google.com 10.108.96.48
Server:  10.108.96.48
Address: 10.108.96.48#53

Non-authoritative answer:
Name: www.google.com
Address: 142.251.43.4
Name: www.google.com
Address: 2a00:1450:400f:804::2004

Notice how the first nslookup command for www.zinkworks.com returned the actual public IP instead of the fake one we provided to the dnsmasq container in the dnsmasq.conf file. This is because these external DNS queries are handled by CoreDNS, and by default they are forwarded to the nameservers specified in the /etc/resolv.conf file.

In the subsequent nslookup commands, we specify the service IP of the dnsmasq service, 10.108.96.48. This ensures that the DNS query is processed by our dnsmasq container directly. As you can see in the second nslookup command for www.zinkworks.com, we received the fake IP we configured in the dnsmasq container through the dnsmasq.conf file.

Any query that the dnsmasq service cannot answer will be sent to the next nameserver in the chain; in our case, 8.8.8.8.

Step 3

As you noticed in step 2 above, external DNS queries are by default not forwarded to our dnsmasq container unless we specify the nameserver in the query. This is not convenient. Therefore, next we look at how to configure CoreDNS to forward all external DNS queries to the dnsmasq container by default.

First, we open the coredns configmap in edit mode by running the following command. 

$ kubectl edit configmap --namespace kube-system coredns

This will open a configuration similar to the one shown below in your editor.

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2020-07-17T05:30:30Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "301258170"
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
  uid: 96742cb2-14d5-40fe-9890-7b58bb1fc408

The "forward" section of the above configuration should now be modified to forward external DNS queries to our dnsmasq container.

After the modification the configmap should appear as shown below. 

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . 10.108.96.48
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2020-07-17T05:30:30Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "301258170"
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
  uid: 96742cb2-14d5-40fe-9890-7b58bb1fc408

Next, save and exit the editor so that the change above is reflected in the CoreDNS ConfigMap.

Finally, you can restart the CoreDNS pods in the kube-system namespace by deleting them. This ensures that the modified configuration is loaded into the new CoreDNS pod instances that come up after the old replicas are deleted.

Here is the command to delete the current CoreDNS pods.

$ kubectl delete pods --namespace kube-system --force --grace-period=0 $(kubectl get pods --namespace kube-system | grep coredns | awk -F ' ' '{print $1}')

Now, let us run the same nslookup commands from our ubuntu-debugger container to see whether we get the expected responses.

$ kubectl exec -it ubuntu-debugger -- /bin/bash

root@ubuntu-debugger:/# nslookup example.zinkworks.com
Server:  10.96.0.10
Address: 10.96.0.10#53

Name: example.zinkworks.com
Address: 10.100.1.2

root@ubuntu-debugger:/# nslookup www.zinkworks.com
Server:  10.96.0.10
Address: 10.96.0.10#53

Name: www.zinkworks.com
Address: 10.100.1.1

As you can see in the output above, all our external DNS queries are now forwarded first to the dnsmasq container by default. We no longer need to specify the dnsmasq service IP as the nameserver in our nslookup commands, as we did in step 2.

As expected, the queries for www.zinkworks.com and example.zinkworks.com return the fake IPs we configured in the dnsmasq.conf and hosts files in the dnsmasq pod.

To find out more about careers at Zinkworks click here.