Introduction to Private 5G Networks

A Private 5G Network is an enterprise-dedicated network tailored to deliver the latest advancements of 4G LTE and 5G technology and drive the digital transformation of businesses and organisations across industries.

Private 5G provides secured communications with high bandwidth, low latency, and reliable coverage to connect people, machines, and devices. Private 5G solutions are ideal for IoT-intensive applications like intelligent manufacturing and sensitive environments like ports and banks.

Private 5G is more secure and efficient than public 5G and LTE networks: it connects only authorised users within the enterprise and processes the generated data locally, in isolation from the public network. It is also easy to deploy, operate, and scale to meet all operational needs.

Spectrum, Coverage and Speed

The range of a private 5G network can vary from a few thousand square feet to thousands of square kilometres, depending on the power of the radio transmitter, the band used, and the user's requirements. A typical 5G radio can operate on low, mid, and high bands, each offering a different trade-off between coverage and speed.

Deployment Scenarios

The 5G system is disaggregated into independent components known as "network functions" (NFs) that communicate through standard APIs. These NFs are responsible for the operation of the mobile network, including authentication, IP address allocation, policy control, and user data management. This Service-Based Architecture makes the 5G system very flexible, allowing new services and applications to be added to meet the needs of any industry.

5G Architecture

Figure 1 - 5G Architecture

Control and User Plane Separation, adopted in the 5G architecture, allows operators to separate the 5G control function from the data forwarding function. For example, the control plane can be deployed centrally, whereas the user plane function (UPF) can be deployed flexibly at any location within the network to accommodate the various data processing requirements.

The private 5G network architecture can be deployed in different scenarios to meet each customer's needs. Based on the level of isolation from, and integration with, the public network, we can categorise the deployments into three scenarios as follows:


Figure 2 - Isolated Private 5G                                               Figure 3 - Shared Private 5G Network
1 - Isolated Private 5G Network

The enterprise hosts and operates the complete 5G network (gNB, UPF, 5GC CP, UDM, MEC), and the network is physically isolated from the public network. Despite the high cost of deploying this scenario, it guarantees complete data security, reduces the likelihood of a data breach, and provides ultra-low-latency connections.

2 - Shared Private 5G Network

A shared private 5G network scenario uses an operator's public network to reduce installation costs. Based on the business needs, the customer can choose which components they host and manage locally and which elements they share with the mobile carrier.

MEC and the UPF may be deployed on-site at premises like smart factories, stadiums, and cinemas, enabling a private 5G architecture with minimal latency and room for future changes. In addition, the business owner can control the radio access network (RAN) locally to allow quick and reliable connections.

3 - Private 5G Over Slicing

Depending on the application’s requirements, a Radio Access Network (RAN) may be installed on-site and connected to the public network via a dedicated data slice that provides private 5G service.

Private 5G Use Cases

Smart ports and manufacturing facilities:
Connect autonomous cranes, robots, drones, legacy machines, or edge gateways within facilities to achieve industrial network service level metrics.
Enable security applications:
Connect video cameras, fingerprint scanners, face detection, and automatic licence plate readers to private networks with guaranteed and dedicated bandwidth.
Enable smart operations:
Intelligent and secured operations are enabled using various privately tailored applications such as geofencing, digital twins, mobility prediction, and task scheduling.
Zinkworks and Private 5G

Zinkworks provides a Networked OT Orchestration platform purpose-built for Industry 4.0 and private 5G. Customers can use various automation solutions and ML-based prediction models developed by Zinkworks to monitor the network's performance and manage network resources more efficiently and securely. In addition, customers can create policy and service profiles with customised bandwidth, latency, and quality of service (QoS) to meet every application's needs.

Written by Mohamed Ibrahim.


RxJava Reactive Streams

Introduction to Reactive Streams

Before we talk about what RxJava is and fully understand it, we must first comprehend some of the concepts and principles behind the creation of the API. In reality, RxJava is just one part of a broader project called ReactiveX, which applies the concepts explained here not only to Java but also to other platforms such as Python, Go, Groovy, C#, and many others. It is worth mentioning that ReactiveX is not the only project to implement these ideas: the Spring Framework also has its own implementation, Spring WebFlux (built on the Spring Project Reactor).

Reactive Streams is an initiative to provide a standard for asynchronous stream processing with non-blocking backpressure.

— reactive-streams.org

RxJava, as well as WebFlux, is an implementation of Reactive Streams. But what exactly does the statement above mean? A traditional method, when called, normally blocks until it finishes whatever it needs to do. If the method only performs mathematical calculations or checks some logic on its arguments, the non-blocking nature of asynchronous stream processing will not make much difference. But once we start talking about accessing the file system, saving a file to a device, reading information from a service, or communicating with a remote microservice, things start to get interesting.
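As a rough plain-JDK illustration of the difference (this is not RxJava code, and fetchGreeting with its 100 ms delay is an invented stand-in for a slow remote call): the blocking style ties up the calling thread for the whole wait, while the asynchronous style returns immediately and lets the caller do other work.

```java
import java.util.concurrent.CompletableFuture;

public class NonBlockingSketch {

    // Hypothetical slow call, e.g. a remote microservice taking ~100 ms.
    static String fetchGreeting() {
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "hello";
    }

    // Traditional style: the calling thread is blocked for the full wait.
    public static String blockingCall() {
        return fetchGreeting();
    }

    // Asynchronous style: returns a future immediately; a pool thread does the waiting.
    public static CompletableFuture<String> nonBlockingCall() {
        return CompletableFuture.supplyAsync(NonBlockingSketch::fetchGreeting);
    }

    public static void main(String[] args) {
        CompletableFuture<String> future = nonBlockingCall();
        System.out.println("caller is free to do other work here");
        System.out.println(future.join()); // prints "hello" once the result arrives
    }
}
```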

A particularly interesting scenario for Reactive Streams is a microservices environment, such as the cloud. In such architectures we have many services talking to each other, and every time this communication takes place, the service that initiated it needs to wait some time before it can take action. On top of that, the agent providing the service usually responds not to a single client but to multiple ones. It is in this sequence of events that Reactive Streams excels!

Reactive Streams solves the problem effectively by using something called an Event Loop. When a new request comes in, the thread that handles it does not block: it dispatches the request and moves on to other work instead of waiting. Only when the request completes does the Event Loop add a new event to the queue, to be picked up by the next available thread. This means no wasted resources: every thread's time is used to the fullest.

Fig. 1 – Reactive Event Loop

A non-reactive method needs to instantiate a new thread every time a new request is made, which means that if you have too many simultaneous requests you can end up with multiple threads sitting there just waiting, doing nothing and consuming resources.

Last but not least, reactive streams must support backpressure. This means that the receiver (Subscriber) of a reactive stream can control the number of events it is able to process. This is useful when the sender (Publisher) produces more events than the receiver can handle: backpressure is the mechanism that allows the sender to slow down event generation so the receiver can process the events properly.

Reactive Streams can be considered an evolution of the well-known observer pattern plus the functional paradigm, bringing to the mix a very powerful API. This API allows for the creation of a chain of methods, bringing a very declarative style of programming as well as abstracting away low-level threading, synchronization, thread safety, concurrent data structures, and so on.

Reactive Reference Implementation

As mentioned, the ReactiveX project, more specifically RxJava, is not the only implementation of the Reactive Streams standard, which means you will find similar structures and elements in different projects, although under distinct names depending on each project.

At a very high level, every Reactive Streams implementation has a Publisher, the entity that produces the data to be consumed by a Subscriber. Another important architectural element is the Subscription. The Subscription represents the message-control link between Publisher and Subscriber, through which the Subscriber can inform the Publisher how much data it can handle; in other words, it is the entity that makes backpressure possible. In addition, between Publisher and Subscriber we normally also have a chain of functions, known as the function chain. It is through this chain that all sorts of operations are applied to the stream, such as Map, Filter, FlatMap, and many more.

Fig. 2 – Reactive Streams Base Classes
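To make these base classes concrete: since Java 9 the JDK ships the Reactive Streams interfaces as java.util.concurrent.Flow. The sketch below (plain JDK, not RxJava; the class names are my own) shows a Subscriber using its Subscription to request one item at a time, which is backpressure in action.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class BackpressureDemo {

    // A Subscriber that uses its Subscription to request one item at a time.
    static class OneAtATimeSubscriber implements Flow.Subscriber<Integer> {
        final List<Integer> received = new ArrayList<>();
        final CountDownLatch done = new CountDownLatch(1);
        private Flow.Subscription subscription;

        @Override public void onSubscribe(Flow.Subscription s) {
            subscription = s;
            s.request(1);                // backpressure: ask for exactly one item
        }
        @Override public void onNext(Integer item) {
            received.add(item);
            subscription.request(1);     // only now ask the Publisher for the next one
        }
        @Override public void onError(Throwable t) { done.countDown(); }
        @Override public void onComplete() { done.countDown(); }
    }

    // Publishes 1..5 and returns what the subscriber received, in order.
    public static List<Integer> run() {
        OneAtATimeSubscriber sub = new OneAtATimeSubscriber();
        try (SubmissionPublisher<Integer> pub = new SubmissionPublisher<>()) {
            pub.subscribe(sub);          // the Subscription is created here
            for (int i = 1; i <= 5; i++) {
                pub.submit(i);           // blocks if the subscriber's buffer is full
            }
        }                                // close() signals onComplete once all items are delivered
        try {
            sub.done.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return sub.received;
    }

    public static void main(String[] args) {
        System.out.println(run());       // [1, 2, 3, 4, 5]
    }
}
```

RxJava's Flowable plays the Publisher role in the same way; the interfaces above are the common ground every Reactive Streams implementation maps onto.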

Keep in mind that Reactive Streams bases its style on the functional paradigm; therefore, knowledge of concepts such as immutability, pure functions, higher-order functions, etc., is essential to fully understand RxJava and properly use its API.

Some RxJava Code at Last

I know there is a lot of information to absorb before the first lines of code, but trust me, what I presented above will save you a lot of trouble when developing with a reactive functional programming API such as RxJava.

Hello World

package rxjava.examples;

import io.reactivex.rxjava3.core.*;

public class HelloWorld {
    public static void main(String[] args) {
        Flowable.just("Hello world").subscribe(System.out::println);
    }
}

Looking at this simple hello-world code might seem odd to someone used to working only with traditional object-oriented programming (OOP), but now that we have set the scene for reactive functional programming in the previous sections, it will be much easier to understand what is going on here.

The first thing to note is the use of the Flowable class. It is important to remember that here everything is a constant flow of data, or stream, and the Flowable class represents exactly that. Even to print a single String you need to somehow provide it through a stream. For such cases, Flowable provides the just method, which creates a stream that emits just one item. You can think of Flowable as the Publisher mentioned before. This means you need to subscribe with a Subscriber to read what is coming from the stream. Here the subscriber simply prints out whatever comes from the stream.

This API has much more power and flexibility than shown in this simple hello-world example, but it is when dealing with millions of items that the Reactive Streams approach really shines.

Of course, there is a lot more to talk about regarding RxJava; I have barely scratched the surface here. Apart from the Flowable and Observable base classes, there are also the Single, Completable, and Maybe base classes to deal with more specific situations that I haven't even touched on in this article.

Talking about everything RxJava is able to do would take many more pages, not a simple article like this one. The goal here is just to give a high-level overview of RxJava, the main concepts behind any Reactive Streams application, and the reactive functional programming paradigm.

Final Thoughts

I hope to further explore the RxJava API in a future article, but this one covers the basics of any Reactive Stream, which should enable the reader to quickly understand any implementation of the Reactive Streams standard.

This article also does not show how powerful the Reactive Streams standard is compared to the traditional blocking approach. To give readers an insight into its power, I conducted a small experiment in which I implemented a very simple REST service using Reactive Streams versus a traditional blocking one, and the result was pretty impressive.

In closing, I will leave the reader to draw their own conclusions from the result graphs of this experiment:

Fig. 3 – Traditional Blocking API Results

Fig. 4 – Reactive Streams API Results

Written by Berchris Requiao.


Zinkworks Apprenticeship Programme – Ingrid’s Experience

The Zinkworks apprenticeship programme is run in association with the Fastrack into Information Technology (FIT) programme.

Ingrid joined Zinkworks through the FIT apprenticeship programme with Athlone Training Centre in 2021 and is about to begin her final semester. Here she describes her experience at Zinkworks thus far.

“I am making a career transition and balancing studies, work, and family time. I am a participant in the FIT apprenticeship program at Zinkworks as well as a master’s degree student in Mobile Development at PUC-Minas Brazil. Zinkworks has allowed me to put into practice everything I studied in college and the FIT course.

“When I started at Zinkworks I had only a basic understanding of C and C++, but I have been learning Java on my own with the help of my colleagues. My knowledge was put to use within a large and complex system, and I was mentored by members of my team, who showed me effective solutions, taught me how to resolve errors, and introduced me to dependency management and automation tools. Team members helped me understand the project and how to find solutions to bugs along the way. As a beginner, I found the start difficult, but with everyone around me helping, I felt confident that I could make mistakes and try again.

FIT and Zinkworks encourage their apprentices to acquire industry IT certificates during their studies. I chose to obtain the Introduction to Programming Using Java certification from Microsoft at the beginner level because I had Java experience within the company. To prepare for the certification, I used video courses on O'Reilly, a subscription service that Zinkworks provides.

I’m now starting to study for a JavaScript Specialist certification at intermediate level. I don’t work with JavaScript currently, so my studies are more intense. JavaScript can be a little challenging for some developers; it is a language that can be used with a variety of frameworks and technologies. It’s challenging, but I love it! I’ve done some projects for college, and as my dream career is mobile development, I must learn frontend too. Access to O’Reilly has allowed me to read JavaScript books, take some courses, practise old questions from IT exams, and study with official CIW material.

I am very lucky to be part of Zinkworks, and FIT supports me in my professional growth. I work with many people who are willing to help and show me where I can improve every day. Thank you to everyone who is supporting me and helping me to grow every day.”

Ingrid is one of three apprentices in their final year of study. In 2022, six more apprentices joined Zinkworks and began their work experience as part of the FIT apprenticeship programme.

To find out more about the apprenticeship programme contact us at info@zinkworks.com.

Click here to learn about careers at Zinkworks.


The experience of being an outed LGBTQ+ person in tech - Meet Adheli

Software Developer Adheli Tavares speaks about her experiences being a member of the LGBTQ+ community.

1. How would you describe your journey to becoming a proud member of the LGBTQ+ community?

Complicated and complex. It’s a process that is always ongoing. There’s confusion and doubts until you find yourself comfortable in your own skin. Therapy was an ally in my process, to understand that all those feelings of inadequacy were uncalled for.

2. Have you found any differences between living in Ireland and the attitude towards the LGBTQ+ community (or you personally) versus living in other parts of the world?

Definitely. I think the main reason is that people here don’t really care.  Although I have been targeted a few times with some not very nice words, I do feel a lot more comfortable walking hand in hand with my partner here than back home.

3. How have you found the attitudes of others towards members of the community in the tech world (globally and in Ireland)?

I have met mostly queer women in the tech industry. Because it is still a male-dominated world, the behaviour I encountered most was “so you like women, then you are one of the guys”, which is wrong. This shows another level of sexism and the old stereotypes about how lesbians must look and act like a man. I have accepted this behaviour previously, mostly to feel included and avoid the mean comments, most of them hidden behind “jokes”.

4. Have you found Zinkworks to be an open and accepting place? How so?

Yes, Zinkworks has been super cool to work with. I think it kind of goes back to the fact people don’t really care as long as you’re a good person and good employee!  Acceptance and respect go along with mental health, which is another thing that Zinkworks has been the best place for regarding support.

5. What further support is needed for the LGBTQ+ community in the workplace and socially?

The “removal” of the necessity of coming out is my big dream. It is good that English is a friendly non-binary language, which helps a lot. I can say that it has taken out a lot of stress when referring to my partner and the questioning when someone decides to label themselves. I think it’s very personal how one describes themselves, then for another person, even someone they do not know to question every little detail is demeaning.

6. What can allies of the community do to support members in their workplace?

First of all, respect their identity (gender/sexuality) and not let it be something that will create a pre-judgement of their work abilities. Let them be heard. If you have doubts about how to talk to someone, ask their name and their pronouns.

When people ask questions, we can differentiate between someone who has a genuine question because they are curious and want to understand, and someone who is just lazy and expects us to be their personal LGBTQ+ dictionary.

At the end of the day, I have been through so much, like working with international teams (before moving to Ireland), learning the quality side of development, managing teams, and laughing with teammates when everything went right. Crying when things went to space. Stayed until late to fix something, or left earlier because it was too tiring. And you, a non-LGBTQ+ person might think “that sounds like some of the stuff I have done” and that’s because we are just like you. People.

Thank you to Adheli for sharing her experiences.

If you need support as part of the LGBTQ+ community you can visit here.

To read more blogs from Zinkworks employees visit here.


5 ways to improve your workspace based on science

Regardless of where we work, at home or in an office, we can all do a few simple things to our work environment to optimise our productivity. Below is a shortlist of the most effective ones, none of which require purchasing any products or equipment. Anyone can use these tools to:

  • Maintain alertness and focus longer.
  • Improve posture and reduce pain (neck, back, pelvic floor, etc).
  • Tap into specific states of mind (creativity, logic, etc.) for the sake of work.

1. Sit or stand (or both)

It is best to arrange your desk and workspace so that you can work sitting for some time (10-30 minutes or so for most people), then shift to working standing for 10-30 minutes, and then go back to sitting. Research also shows that it is a good idea to take a 5-15 minute stroll after every 45 minutes of work. There is evidence that such a sit-stand approach can reduce neck, shoulder, and back pain.

2. The effect the TIME of day has on you

We are not the same person across the different hours of the day, at least not neurochemically. Let’s call the first part of your day (~0-8 hours after waking up) “Phase 1”. During this phase, the chemicals norepinephrine, cortisol, and dopamine are elevated in your brain and body. Alertness can be further heightened by sunlight viewing, caffeine, and fasting.

Phase 1 is ideal for analytic “hard” thinking and any work that you find particularly challenging. It isn’t just about getting the most important stuff out of the way; it is about leveraging your natural biology toward the best type of work for the biological state you are in.

“Phase 2”: is ~9-16 hours after waking. At this time, serotonin levels are relatively elevated, which lends itself to a somewhat more relaxed state of being—optimal for brainstorming and creative work.

“Phase 3”: ~17-24 hours after waking up is when you should be asleep or trying to sleep. During this phase, do no hard thinking or work unless, of course, you must. Keep your environment dark or very dim and the room temperature low (your body temperature needs to drop by 1-3 degrees to fall asleep and stay asleep).

3. Where your screen is and where you look ARE important

There’s a relationship between where we look and our level of alertness. When looking down toward the ground, neurons related to calm and sleepiness are activated. Looking up does the opposite. This might seem wild, but it makes sense based on the neural circuits that control looking up or down.

Standing and sitting up straight while looking at a screen or book that is elevated to slightly above eye level will generate maximal levels of alertness. To get your screen at or above eye level and not work while looking down at your screen may take a bit of configuring your workspace, but it’s worth it for the benefits to your mind and work.

4. Set your background sound

Some people like to work in silence, whereas others prefer background noise. Some kinds of background noise are particularly good for our work output. Working with white, pink, or brown noise in the background can be good for work bouts of up to 45 minutes, but not for work bouts that last hours. So, use it from time to time. These are easy to find (and free) on YouTube or in various apps (search for “white, pink, or brown noise”).

Binaural beats are a neat science-supported tool to place the brain into a better state for learning. As the name suggests, binaural beats consist of one sound (frequency) being played in one ear and a different sound frequency in the other ear. It only works with headphones. Binaural beats (around 40 Hz) have been shown to increase certain aspects of cognition, including creativity and may reduce anxiety.

5. Room type can make a difference

There is an interesting effect in workspace optimization called the “Cathedral Effect”, in which thinking becomes “smaller”, i.e. more focused on analytic processing, when we are in small visual fields. The opposite is also true. In short, working in high-ceilinged spaces elicits abstract thought and creativity, whereas working in low-ceilinged spaces promotes detailed work. Even relatively small differences (a two-foot discrepancy in ceiling height) have been shown to elicit such differences. The takeaway: consider using different locations (rooms, buildings, indoors or outdoors) to help access specific brain states and the types of work they favour.

Very insightful blog post written by Release Manager, Colm Nibbs.

Click here to read more blogs from Zinkworks.


GitHub & Microsoft Teams Integration

Seeking opportunities to enhance productivity and collaboration is crucial when working with technology. The GitHub integration for Microsoft Teams allows developers to improve their communication by automatically posting messages about issues, pull requests, deployment status, and more. Once the GitHub and Microsoft Teams platforms are linked, various options become available, such as commenting on, closing, and reopening issues, or even making pull requests, without leaving your chat window.

A considerable amount of time might be spent by developers while communicating about code changes, monitoring issues and other GitHub-related activities. This integration improves this communication and optimises the developer’s time, while also encouraging faster discussions on code reviews. All of this is happening right in your Microsoft Teams channel, which tends to be the natural place of ideas and collaboration.

Step 1 – Installation

First, we are going to install the GitHub App in our Microsoft Teams.

  • Go to the Microsoft Teams app store and install the GitHub app or you can directly install it from here.

  • Upon installation, a welcome message is displayed.
  • Use the @GitHub handle to start interacting with the app.

Step 2 – Get Started

At this stage, our Microsoft Teams and GitHub user accounts are not linked yet. The following will link the two accounts.

  • Just authenticate to GitHub using the @github signin command, or try to subscribe to your repository.

  • A message “Connect GitHub account” is displayed as shown in the following image. Just click on the button to connect the GitHub account.

Once the channel is created:

  • Go to the channel and look for the GitHub icon.
    • If the icon is not visible at the bottom of the channel, click “…” and search for the GitHub integration with Microsoft Teams, as shown in the following image.

  • Once GitHub is set up in Microsoft Teams, we can subscribe to the repository, as shown in the following image.

  • Once the repository is subscribed, we will receive notifications as described above.

 

This whole process only needs to be done once. After that, we can subscribe to as many repositories as we need, following just steps 8 to 10.

 

GitHub provides a lot of features to customise your subscription and keep the team up-to-date without switching to different platforms.
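For reference, the chat commands follow this general shape (owner/repo is a placeholder; check the documentation linked below for the current list of subscription features):

```
@github signin
@github subscribe owner/repo
@github subscribe owner/repo pulls commits releases
@github unsubscribe owner/repo
```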

For more information go through the GitHub Documentation:

https://github.com/integrations/microsoft-teams/blob/master/Readme.md

 

Click here to read more of Zinkworks’ blogs.


A method to override external domain name resolution in coredns

There are occasions when you must have more control over the domain name resolution that is taking place inside a Kubernetes cluster.
For example, imagine the following scenarios.
  1. You are trying to replicate, in your dev K8s cluster, an issue that you observed in a different environment. To successfully replicate the issue, you need to ensure that you use the same hostnames and application FQDNs that were used in the original environment.
  2. You are testing an application running on Kubernetes that requires access to third-party external endpoints over the internet. You need to ensure that these third-party external connections are made towards known test services, not the actual ones.
  3. You need an alternative, more centralized way to control the responses to DNS queries received by the application pods (e.g. you do not want to use the “hostAliases” option with K8s pods).
In all the above situations, you can use the following method to override the responses sent by the internal Kubernetes DNS to your application pods.

Step 1

First, we are going to run a “dnsmasq” server as a pod in the same Kubernetes cluster. It will be associated with a configmap through which we can manipulate the DNS records. In addition, the dnsmasq pod will be exposed via a ClusterIP-type Kubernetes service.

To run a “dnsmasq” server as a container, we can first create a docker image using the following Dockerfile.

FROM alpine:latest
RUN apk --no-cache add dnsmasq
VOLUME /var/dnsmasq
EXPOSE 53 53/udp
ENTRYPOINT ["dnsmasq","-d","-C","/var/dnsmasq/conf/dnsmasq.conf","-H","/var/dnsmasq/hosts"]

As you may notice in the ENTRYPOINT above, we run “dnsmasq” in the foreground (-d) and pass it a configuration file (-C) and a hosts file (-H). These files do not need to exist in the docker image at build time; at runtime we will mount the relevant configuration files to the /var/dnsmasq/conf/dnsmasq.conf and /var/dnsmasq/hosts locations accordingly.

Now you can build the docker image with the “docker build -t dnsmasq:latest .” command. Afterwards, you may push it to a docker image repository that is accessible by your Kubernetes cluster.

Step 2

Now let us create a couple of configmaps to pass the dnsmasq.conf and hosts files to the running dnsmasq pod.

In the example below, we create a fake DNS record that resolves www.zinkworks.com to the 10.100.1.1 address. We also create a hosts-file record that maps example.zinkworks.com to 10.100.1.2.

#dnsmasq.conf file
kind: ConfigMap
apiVersion: v1
metadata:
  name: dnsmasq-conf
  labels:
    app: dnsmasq
data:
  dnsmasq.conf: |
    address=/www.zinkworks.com/10.100.1.1
---
#dnsmasq hosts file
kind: ConfigMap
apiVersion: v1
metadata:
  name: dnsmasq-hosts
  labels:
    app: dnsmasq
data:
  hosts: |
    10.100.1.2 example.zinkworks.com

You can create the above configmaps in the namespace where you will run the dnsmasq pod, using the “kubectl create -f” or “kubectl apply -f” commands.

Next let us create a dnsmasq pod using the docker image built in step 1. We will also mount the configmaps created previously to /var/dnsmasq/conf/dnsmasq.conf and /var/dnsmasq/hosts as files. 

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: dnsmasq
  name: dnsmasq
spec:
  containers:
  - image: dnsmasq:latest
    name: dnsmasq
    ports:
    - containerPort: 53
      protocol: UDP
      name: udp-53
    volumeMounts:
    - name: dnsmasq-conf
      mountPath: "/var/dnsmasq/conf"
    - name: dnsmasq-hosts
      mountPath: "/var/dnsmasq"
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
  dnsPolicy: None
  dnsConfig:
    nameservers:
    - "8.8.8.8"
  restartPolicy: Always
  volumes:
  - name: dnsmasq-conf
    configMap:
      defaultMode: 0666
      name: dnsmasq-conf
  - name: dnsmasq-hosts
    configMap:
      defaultMode: 0666
      name: dnsmasq-hosts

Above is an example pod definition to run the dnsmasq container with the desired configuration. Notice how the dnsmasq-conf and dnsmasq-hosts configmaps are mounted as files into the pod. Also notice the “NET_ADMIN” capability given to the container; this allows the container to serve DNS on UDP port 53.

You can create the pod by running the “kubectl create -f” or “kubectl apply -f” command on the above pod definition.

Once the dnsmasq pod is created, you can confirm if it is running fine by looking at the pod logs.

Here is an example output. In this case, it is assumed that the dnsmasq pod was created in the dnsmasq namespace.

$ kubectl logs --namespace dnsmasq dnsmasq -f
 
dnsmasq: started, version 2.86 cachesize 150 
dnsmasq: compile time options: IPv6 GNU-getopt no-DBus no-UBus no-i18n no-IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-cryptohash no-DNSSEC loop-detect inotify dumpfile 
dnsmasq: reading /etc/resolv.conf 
dnsmasq: using nameserver 8.8.8.8#53 
dnsmasq: read /etc/hosts – 7 addresses 
dnsmasq: read /var/dnsmasq/hosts – 1 addresses 

Finally, we will expose the dnsmasq pod via a Kubernetes service.

[gem_textbox content_background_style="contain" content_text_color="#30414d" border_color="#ff0041" border_width="3" padding_top="3px" padding_bottom="0px"]

 
apiVersion: v1
kind: Service
metadata:
  name: dnsmasq
  labels:
    app: dnsmasq
spec:
  type: ClusterIP
  ports:
  - name: udp-53
    targetPort: udp-53
    port: 53
    protocol: UDP
  selector:
    app: dnsmasq

[/gem_textbox]

In the next steps, we will need the service IP assigned to the dnsmasq service to send our DNS queries to the dnsmasq container. To find the service IP of the dnsmasq service, run the "kubectl get svc" command in the namespace where the dnsmasq pod is running.

In our example, the dnsmasq service IP is 10.108.96.48.

$ kubectl get svc --namespace dnsmasq
NAME      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE 
dnsmasq   ClusterIP   10.108.96.48   <none>        53/UDP    15s 

 

You can further check that domain name resolution is working as expected in the dnsmasq container by running a few nslookup commands from a different pod.

In the below example, we are running a pod called ubuntu-debugger in the same cluster, from which we can check the responses sent by dnsmasq. It is assumed that the ubuntu-debugger pod has the nslookup utility installed.

$ kubectl exec -it ubuntu-debugger -- /bin/bash
root@ubuntu-debugger:/# nslookup www.zinkworks.com
Server: 10.96.0.10
Address: 10.96.0.10#53

Non-authoritative answer:
Name: www.zinkworks.com
Address: 46.101.47.85

root@ubuntu-debugger:/# nslookup www.zinkworks.com 10.108.96.48
Server: 10.108.96.48
Address: 10.108.96.48#53

Name: www.zinkworks.com
Address: 10.100.1.1

root@ubuntu-debugger:/# nslookup www.google.com 10.108.96.48
Server: 10.108.96.48
Address: 10.108.96.48#53

Non-authoritative answer:
Name: www.google.com
Address: 142.251.43.4
Name: www.google.com
Address: 2a00:1450:400f:804::2004

Notice how the first nslookup command for www.zinkworks.com returned the actual public IP instead of the fake one we provided to the dnsmasq container in the dnsmasq.conf file. This is because these external DNS queries are handled by coredns, and by default they are forwarded to the nameservers specified in the /etc/resolv.conf file.

In the subsequent nslookup commands, we specify the service IP of the dnsmasq service, which is 10.108.96.48. This ensures that the DNS query is processed by our dnsmasq container directly. As you can see in the second nslookup command for www.zinkworks.com, we received the fake IP we configured in the dnsmasq container through the dnsmasq.conf file.

Any query that the dnsmasq service cannot answer itself will be sent to the next nameserver in the chain – in our case, 8.8.8.8.
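For reference, the behaviour above corresponds to dnsmasq configuration along the following lines. This is a hypothetical sketch reconstructed from the outputs shown in this tutorial – the actual dnsmasq.conf and hosts files were created in an earlier step:

```
# dnsmasq.conf (sketch): answer queries for this name with a fixed IP
address=/www.zinkworks.com/10.100.1.1

# read additional host mappings from the mounted hosts file,
# e.g. "10.100.1.2 example.zinkworks.com"
addn-hosts=/var/dnsmasq/hosts
```

The upstream nameserver 8.8.8.8 does not need to be listed here: dnsmasq reads it from the pod's /etc/resolv.conf, which is populated by the dnsConfig section of the pod definition above.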

Step 3

As you noticed in step 2 above, external DNS queries are not forwarded to our dnsmasq container by default – we had to specify the nameserver explicitly in each query. This is not convenient. Therefore, we will next look at how to configure coredns to forward all external DNS queries to the dnsmasq container by default.

First, we open the coredns configmap in edit mode by running the following command. 

$ kubectl edit configmap --namespace kube-system coredns

This will open a configuration similar to the one shown below in your editor.

[gem_textbox content_background_style="contain" content_text_color="#30414d" border_color="#ff0041" border_width="3" padding_top="3px" padding_bottom="0px"]

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
# 
apiVersion: v1 
data: 
  Corefile: | 
    .:53 { 
        errors 
        health { 
           lameduck 5s 
        } 
        ready 
        kubernetes cluster.local in-addr.arpa ip6.arpa { 
           pods insecure 
           fallthrough in-addr.arpa ip6.arpa 
           ttl 30 
        } 
        prometheus :9153 
        forward . /etc/resolv.conf { 
           max_concurrent 1000 
        } 
        cache 30 
        loop 
        reload 
        loadbalance 
    } 
kind: ConfigMap 
metadata: 
  creationTimestamp: "2020-07-17T05:30:30Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "301258170"
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns 
  uid: 96742cb2-14d5-40fe-9890-7b58bb1fc408 

 

[/gem_textbox]

The "forward" block in the above configuration should now be modified so that external DNS queries are forwarded to our dnsmasq container.

After the modification the configmap should appear as shown below. 

[gem_textbox content_background_style="contain" content_text_color="#30414d" border_color="#ff0041" border_width="3" padding_top="3px" padding_bottom="0px"]

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
# 
apiVersion: v1 
data: 
  Corefile: | 
    .:53 { 
        errors 
        health { 
           lameduck 5s 
        } 
        ready 
        kubernetes cluster.local in-addr.arpa ip6.arpa { 
           pods insecure 
           fallthrough in-addr.arpa ip6.arpa 
           ttl 30 
        } 
        prometheus :9153 
        forward . 10.108.96.48 
        cache 30 
        loop 
        reload 
        loadbalance 
    } 
kind: ConfigMap 
metadata: 
  creationTimestamp: "2020-07-17T05:30:30Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "301258170"
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns 
  uid: 96742cb2-14d5-40fe-9890-7b58bb1fc408 

 

 

[/gem_textbox]

Next, save and exit the editor so that the above change is applied to the coredns configmap.

Finally, you can restart the coredns pods in the kube-system namespace by deleting them. This ensures that our modified configuration is loaded into the new instances of the coredns pods that come up after the old replicas are deleted.

Here is the command to delete the current coredns pods.

$ kubectl delete pods --namespace kube-system --force --grace-period=0 $(kubectl get pods --namespace kube-system | grep coredns | awk -F ' ' '{print $1}')
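The command substitution in the delete command above simply extracts the coredns pod names from the kubectl output. As a sanity check, here is the same grep/awk pipeline applied to captured sample output (the pod names below are made up for illustration):

```shell
# Sample 'kubectl get pods --namespace kube-system' output (illustrative names)
sample='NAME                       READY   STATUS    RESTARTS   AGE
coredns-558bd4d5db-abcde   1/1     Running   0          10d
coredns-558bd4d5db-fghij   1/1     Running   0          10d
etcd-master                1/1     Running   0          10d'

# Keep only the lines mentioning coredns and print their first column,
# exactly as the $(...) part of the delete command does
pods=$(printf '%s\n' "$sample" | grep coredns | awk -F ' ' '{print $1}')
printf '%s\n' "$pods"
```

Each name printed this way becomes an argument to kubectl delete pods.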

Now, let us run the same nslookup commands from our ubuntu-debugger container to see if we are getting the expected responses.

$ kubectl exec -it ubuntu-debugger -- /bin/bash
root@ubuntu-debugger:/# nslookup example.zinkworks.com              
Server: 10.96.0.10 
Address: 10.96.0.10#53 
  
Name: example.zinkworks.com 
Address: 10.100.1.2 
  
root@ubuntu-debugger:/# nslookup www.zinkworks.com  
Server: 10.96.0.10 
Address: 10.96.0.10#53 
  
Name: www.zinkworks.com 
Address: 10.100.1.1 

As you can see in the above output, all our external DNS queries are now forwarded to the dnsmasq container by default. We no longer need to specify the dnsmasq service IP as the nameserver in our nslookup commands, as we did in step 2 earlier.

As expected, the queries for www.zinkworks.com and example.zinkworks.com return the fake IPs we configured in the dnsmasq.conf and hosts files in the dnsmasq pod.

To find out more about careers at Zinkworks click here.


Retrospectives: The Fuel to Continuous Improvement

What is Continuous Improvement?

Continuous Improvement is often an abstract, vague term used to describe better ways of working. Within the context of business and technology, Continuous Improvement seeks to enhance every aspect of value in a team’s processes and products. By reducing waste, reducing burden, and increasing consistency, teams boost the value they deliver to their stakeholders. It is impossible to remove all waste, burden and inconsistency from a team’s work completely, but teams should work to minimize each one incrementally. This incremental change is what we refer to as Continuous Improvement. Let’s look at Continuous Improvement and how we can utilize its main driver – the “Retrospective”.

[gem_divider margin_top="20"]

Continuous Improvement Lifecycle

[gem_divider margin_top="50"]

Continuous Improvement in Sport

Let’s step back for a minute and look at Continuous Improvement in the world of sport. In 2002, Sir Dave Brailsford took charge of the British Cycling team. Before Sir Dave’s leadership, British Cycling had won just one gold medal in almost 80 years. At the 2008 Beijing Olympics, Sir Dave led the British track cycling team to 7 out of 10 gold medals! After achieving this staggering result, he turned his focus towards “Team Sky” – Britain’s first-ever pro cycling team. He led Team Sky to victory in 6 of the last 9 Tour de France events, an impressive feat to say the least.

How did Sir Dave achieve this? Sir Dave held an MBA and used methods of Continuous Improvement when coaching his teams. He looked for small things to improve in how the team worked together, referring to these improvements as “marginal gains”.

“Forget about perfection; focus on progression and compound the improvements.” – Sir Dave Brailsford

The team hired a wind tunnel to understand and experiment with aerodynamics. They painted their truck’s floor white to detect and reduce dirt on the bikes. They even brought their own mattresses and pillows to each hotel to get a good night’s sleep! Through regular retrospection, the team encouraged ownership and, eventually, everyone started to come up with their own ideas for improvements, which in the end led to their huge success. This is the power of Continuous Improvement. “Marginal gains” made frequently in the short term encourage growth and prosperity in the long term.

Continuous Improvement is an Engine

Think of Continuous Improvement as an engine. The purpose of an engine is to drive something forward in a given direction. Continuous Improvement should be the engine that drives your team forward to deliver more and more value each iteration.

Retrospectives are the Fuel

All engines need fuel. Without fuel, the engine slows down and stops. If Continuous Improvement is your team’s engine, then the Retrospective (henceforth referred to as the ‘Retro’) is your team’s fuel. Like an engine without fuel, Continuous Improvement without Retros slows down and stops. A car that does not move forward becomes a stagnant, empty shell that is only nice to look at (many people own such cars!). We do not want to be stagnant teams, empty shells with nothing to drive us forward. We want to be innovative and come up with creative solutions to our own problems.

Forward progress gives us satisfaction in our jobs and without that sense of progress, we become scattered and burnt out. But to achieve progress and forward motion, we need fuel. We need Retros to provide fuel to our engine.

What is a Retro?

Team Meeting

The Retro is a team meeting where space is created for everyone to learn and brainstorm ideas. It is important that the whole team contributes towards the Retro. A Retro in which only part of the team speaks up is not a good Retro. For example, it is common for the loudest person on the team to speak the most in Retros. Or maybe people do not want to speak up because the most experienced person will argue them down. But the Retro is one of the few places where all members of the team come together as equals. The Retro facilitator should work to ensure all members contribute and have their voices heard.

Regular Schedule

If the Retro is the fuel to your team’s engine, then it needs to be topped up frequently. Therefore, Retros should be scheduled regularly – monthly at a minimum, but ideally more often. In between Retros, the team can test their experiment ideas and gather feedback for the next Retro. The Retro should also close out your team’s iteration period. For example, if your team works in cycles of two weeks, the Retro should be the last event of the cycle. Taking the Retro at the end of the cycle gives the team the opportunity to plan for the next one. If your team has no cycle time, then pick one – teams always work best with defined timelines.

Opportunity

Retros provide the opportunity for the team to give feedback about themselves, their processes, and their product. Whether you have been part of the team for a day or a decade, your opinion is equally valid in your team’s Retro.

If you are new to the team, this is your opportunity to speak up about the things you find difficult and frustrating. Are you happy with the onboarding process? Are you invited to pair program on tasks? Do you understand your objectives as part of the larger team? As a new team member, it is up to both you and the team to make sure you are integrated and valuable to the people around you. If something is difficult as a new member, then have the courage to speak up and be honest in your team’s Retro.

If you have been with your team for a longer period, you likely have valuable insights into the biggest problems you and your team face. Is the team’s quality sufficient? Is there enough innovation and experiments happening in the team? Are you excited, bored, or burnt out with your work? With experience comes more expectations, but that does not mean you should expect to be stressed or under pressure all the time. Use the team’s Retro to voice your concerns with your teammates. They just might surprise you!

Experiment

“What experiment do we want to run next?”

This is the question the team should work to answer in their Retro. Simply having a meeting called “Team Retro” will not achieve improvements without a solid outcome and plan for what happens afterwards. The real work is done between Retros where the chosen experiments are tried and tested. The Retro allows the team to learn about problems, identify solutions and pick an experiment going forward using methods like the 3 questions: “What went well? What did not go well? What should we improve?”

Think back to the experiments Sir Dave ran with his teams. White paint, mattresses, hand washing. These are not particularly profound ideas. I am sure there were many experiments and ideas Sir Dave’s team tried which came to no avail. However, each time they tested something out, whether it succeeded or failed, the team learned something new. There was no single experiment that won gold for the team. Each success built on the last until, eventually, enough 1% marginal gains were made to achieve the gold.

A good Retro experiment will be like the British cycling team’s experiments – short, actionable, and valuable.

It needs to be short so the team can commit to it on top of their other work. “Pair program on critical tasks” is better than “Re-architect our software to support faster deployment”.

It needs to be actionable so the team does not have to jump through hoops to complete it. “Order our stand-up tasks by priority instead of person” is better than “Suggest a new tool for ticket tracking to the organisation”.

And it needs to be valuable so that the team can add it to their ways of working. “Scrum Master to close stand-up after 15 minutes” is more valuable than “Create unit tests for this deprecated piece of code”.

Benefits of Retros

Develop Agency

Agency is a psychological term that means ‘I can act on my own behalf’. A team with a sense of agency can act independently to control and change their surroundings. By raising their problems in the Retro, the team empowers themselves to come up with their own solutions. They develop ownership over their environment and push for change where change is needed.

Learn the Mood

“How do you feel?” “…Fine, and you?”

This is a common response when you throw the ‘f’ word at someone (feelings). While we don’t like to share our feelings with other people, it can be a valuable piece of feedback for a team to understand each other’s moods. In my team, we start off the Retro with a quick word from everyone on their mood in the past few weeks. We have yet to hear the word ‘fine’. Usually we hear ‘stressed’, ‘anxious’, ‘messy’ or ‘productive’, ‘fun’ and ‘exciting’.

Each answer gives a good insight into the team’s mood. A positive overall answer indicates the team is doing something well and should continue to do so. A negative reaction indicates something needs to change, and the Retro should then focus on alleviating those negative points going forward.

Brainstorm Ideas

Any good team understands the need to innovate. Teams can only make Continuous Improvements if they have created the environment to brainstorm and innovate. Their Retro facilitates such an environment where ideas, both good and bad, are welcome. Without creative ideas, we cannot innovate. Without innovation, we cannot discover those marginal gains Sir Dave talked about. The Retro provides the space to identify these 1% improvements we can bring to the team. If each Retro facilitates just a 1% improvement to your team, imagine the massive changes that could happen over a year.

What Next?

First things first – set up regular Retros with your team if you are not already doing so. Within each Retro, focus on a particular area the team would like to improve. This could be quality, bugs, meetings, processes, teamwork or anything else the team thinks is a priority. For each Retro, focus on answering the question: “What experiment do we want to run next?” Once the team picks an experiment to focus on, plan it into the next iteration and track whether it succeeds or fails.

Remember, the team’s retro does not have to be a complicated or lengthy meeting. However, it does need to be taken regularly to yield those marginal gains and fuel your team’s Continuous Improvement.

You can find more of David at www.dcaulfield.com.

 

To find out about our open positions and how you can join the Zinkworks team visit our careers page.


Zinkworks Education Programme - Berchris's Experience

Senior Associate Berchris describes his experience of furthering his education through the education programme here at Zinkworks:

“Before I joined Zinkworks I was always interested in doing a master’s degree in my field, but I found it hard to get the time and support to do so, knowing the costs would also be high. At the time I joined Zinkworks, the company did not have a further education programme in place; however, once I decided to pursue the course, I asked the company for support. I didn’t know what to expect, as there was nothing established within the company in terms of support for people returning to education, so I was surprised and happy to find out that Zinkworks were happy to give all the support required, including financial support. I was very excited about this as it showed how the company cares for its employees. I had only been with Zinkworks for a little more than a year at this time, and I found their attitude amazing!

A few weeks later I started the MSc in Software Engineering course at the Technological University of the Shannon. Doing this course and working at the same time has been a real challenge for me, especially as I also need time to spend with my family. I must say that it would have been harder without the support of Zinkworks. They have been very flexible whenever I needed time off for study or exams, or when I needed to attend classes. I am very close to the end of the course now; I have finished all the subjects and I am about to deliver the final project. I’m relieved and happy to be completing the course, and I am also very grateful to work in a company that is always willing to support its employees on their path to progressing their careers through learning. Thank you Zinkworks for all your help so far.”

Since Berchris started his further education endeavour, Zinkworks has introduced an extensive employee-driven education programme, as development and learning are at the core of Zinkworks’ values.

If joining the Zinkworks team and our further education programme is of interest to you, contact us to find out more.

To find out more from Berchris and read some of his work visit his Github and Medium.com accounts.


image of Declan

Changing Careers - Declan's Experience

Here is what Scrum Master Declan McDaid has to say about his experience changing his career path to join the tech world after 14 years –

“I started my career as a concrete finisher after I completed my Leaving Certificate. After 14 years working in the building industry, Ireland suffered a recession and, as a result, I was made redundant. Soon after, I started working in a bookmaker’s shop. While working in the bookmakers I somehow became the IT guy for the company (turn it off and turn it on again). I really liked the troubleshooting and, with some persuading from my wife, I decided to pursue a career in IT.

In 2012, I looked at a few different courses and settled on a Level 6 in IT Support at LYIT. The reason for choosing this course was that it ran over 18 months with a 6-month work placement in the middle. Having not been in education for 16 years, I thought this would be the best fit for me. During the last semester, LYIT was offering a Level 9 DevOps course for the first time. I applied for a place on this and was successful. Having not heard of DevOps until then, I was curious to learn more.

The course focused on DevOps and the agile methodology, and work placement was a mandatory part of the course. On completion of my work placement, I was offered a development role with the company. After 1 year with that team, I joined a new data integration team that was tasked with making data from microservices available to the current mainframe system. While I was happy doing data integration work, I still had a keen interest in a career in DevOps.

I heard about Zinkworks (formerly Neueda Technologies Athlone) through a current Zinkworks employee, who told me they were recruiting engineers, and I applied through the website. The recruitment process was a pleasant experience. I was successful in the interview and was offered a role. I was still apprehensive about leaving TCS, but thankfully Orlagh reassured me that it was a good place to work, and after 6 or so months here I’d have to say it has been the right choice.

I am currently a Scrum Master for a team working on deploying applications on AWS with the latest technologies. Aileen organised Scrum Master training for me a few weeks after joining, which I completed, and I am now a certified Scrum Master with the Scrum Alliance. It has been an educational and enjoyable experience to date (can’t wait for the next company outing). To make it even better, I am now the proud owner of a MacBook, which I won in a draw Zinkworks held for all employees who had a referral in 2021.”

 

Visit our careers page if joining the Zinkworks team is of interest to you!