Local Application Development on Kubernetes with Gefyra

In this article, you will learn how to simplify and speed up your local application development on Kubernetes with Gefyra. Gefyra provides several useful features for developers. First of all, it allows you to run local containers that can interact with internal services on a remote Kubernetes cluster. Moreover, we can overlay Kubernetes cluster-internal services with a container running on the local Docker daemon. Thanks to that, a single development cluster may be shared by multiple developers at the same time.

If you are looking for similar articles in the area of Kubernetes app development, you can read my post about Telepresence and Skaffold. Gefyra is an alternative to Telepresence. However, there are some significant differences between those two tools. Gefyra comes with Docker as a required dependency, while with Telepresence, Docker is optional. On the other hand, Telepresence uses a sidecar pattern to inject a proxy container that intercepts the traffic, while Gefyra just replaces the app image with its “carrier” image. You can find more details in the docs. Enough with the theory, let’s get to practice.

Prerequisites

In order to start the exercise, we need to have a running Kubernetes cluster. It can be a local instance or a remote cluster managed by a cloud provider. In this exercise, I’m using Kubernetes on Docker Desktop.

$ kubectx -c
docker-desktop
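
If a different context is currently active, you can switch to the Docker Desktop one first:

$ kubectl config use-context docker-desktop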

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that, you need to clone my GitHub repository. The code used in this article is available in the dev branch. Then you should just follow my instructions 🙂
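
For example (the repository URL below is my assumption based on the image names used later in the Skaffold config, so adjust it to the actual repository if needed):

$ git clone https://github.com/piomin/sample-istio-services.git
$ cd sample-istio-services
$ git checkout dev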

Install Gefyra

In the first step, we need to install the gefyra CLI. You can find installation instructions for different environments in the docs. Once you install the CLI, you can verify it with the following command:

$ gefyra version
[INFO] Gefyra client version: 1.1.2

After that, we can install Gefyra on our Kubernetes cluster. Here’s the command for installing on Docker Desktop Kubernetes:

$ gefyra up --host=kubernetes.docker.internal

It will install Gefyra on the cluster using its operator. Let’s verify the list of running pods in the gefyra namespace:

$ kubectl get po -n gefyra
NAME                               READY   STATUS    RESTARTS   AGE
gefyra-operator-7ff447866b-7gzkd   1/1     Running   0          1h
gefyra-stowaway-bb96bccfd-xg7ds    1/1     Running   0          1h

If you see the running pods, it means that the tool has been successfully installed. Now, we can use Gefyra in our app development on Kubernetes.
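
We can also ask the CLI itself about the state of the connection. The gefyra status command should report that both the client side and the cluster side are up:

$ gefyra status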

Use Case on Kubernetes for Gefyra

We will use exactly the same set of apps and the same use case as in the article about Telepresence and Skaffold. Firstly, let’s analyze that case. There are three microservices: first-service, caller-service, and callme-service. All of them expose a single REST endpoint GET /ping, which returns basic information about each microservice. In order to create the applications, I’m using the Spring Boot framework. Our architecture is visible in the picture below. The first-service calls the endpoint exposed by the caller-service. Then the caller-service calls the endpoint exposed by the callme-service. Of course, we are going to deploy all the microservices on the Kubernetes cluster.

Now, let’s assume we are implementing a new version of the caller-service. We want to easily test it with the two other apps running on the cluster. Therefore, our goal is to forward the traffic that is sent to the caller-service on the Kubernetes cluster to the local instance running on our Docker daemon. On the other hand, the local instance of the caller-service should still call the endpoint exposed by the instance of the callme-service running on the Kubernetes cluster.

[Figure kubernetes-gefyra-arch: architecture of the sample scenario]

Build and Deploy Apps with Skaffold and Jib

Before we start the development of the new version of the caller-service, we will deploy all three sample apps. To simplify the process, we will use Skaffold and the Jib Maven Plugin. Thanks to that, you can build and deploy all the apps using a single command. Here’s the Skaffold configuration placed in the repository root directory:

apiVersion: skaffold/v4beta5
kind: Config
metadata:
  name: simple-istio-services
build:
  artifacts:
    - image: piomin/first-service
      jib:
        project: first-service
        args:
          - -Pjib
          - -DskipTests
    - image: piomin/caller-service
      jib:
        project: caller-service
        args:
          - -Pjib
          - -DskipTests
    - image: piomin/callme-service
      jib:
        project: callme-service
        args:
          - -Pjib
          - -DskipTests
  tagPolicy:
    gitCommit: {}
manifests:
  rawYaml:
    - '*/k8s/deployment.yaml'
deploy:
  kubectl: {}
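
The manifests section matches a deployment.yaml file inside each app’s k8s directory. The actual manifests are in the repository; just as an illustration, a minimal Deployment for the callme-service could look like the sketch below. Note the VERSION environment variable, which the apps read via @Value("${VERSION}").

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service
  template:
    metadata:
      labels:
        app: callme-service
    spec:
      containers:
        - name: callme-service
          image: piomin/callme-service
          ports:
            - containerPort: 8080
          env:
            # read by the Spring Boot app through @Value("${VERSION}")
            - name: VERSION
              value: v1

There is also a Service named callme-service exposing port 8080, since the caller-service calls the address http://callme-service:8080.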

For more details about the deployment process, you may refer once again to my previous article. We will deploy apps in the demo-1 namespace. Here’s the skaffold command used for that:

$ skaffold run --tail -n demo-1

Once you run the command, you will deploy all the apps and see their logs in the console. These are very simple Spring Boot apps, which just expose a single REST endpoint and print a log message after receiving a request. Here’s the @RestController of callme-service:

@RestController
@RequestMapping("/callme")
public class CallmeController {

   private static final Logger LOGGER = 
       LoggerFactory.getLogger(CallmeController.class);

   @Autowired
   BuildProperties buildProperties;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}", 
         buildProperties.getName(), version);
      return "I'm callme-service " + version;
   }
}

And here’s the controller of caller-service. We will modify it during our development. It calls the endpoint exposed by the callme-service using its internal Kubernetes address http://callme-service:8080.

@RestController
@RequestMapping("/caller")
public class CallerController {

   private static final Logger LOGGER = 
      LoggerFactory.getLogger(CallerController.class);

   @Autowired
   BuildProperties buildProperties;
   @Autowired
   RestTemplate restTemplate;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}", 
         buildProperties.getName(), version);
      String response = restTemplate
         .getForObject("http://callme-service:8080/callme/ping", String.class);
      LOGGER.info("Calling: response={}", response);
      return "I'm caller-service " + version + ". Calling... " + response;
   }
}
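
The RestTemplate injected above is not auto-configured as a bean by Spring Boot (only the RestTemplateBuilder is), so the app has to declare it explicitly. Here’s a minimal sketch of such a declaration in the main class; the class name and exact location are my assumptions:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.web.client.RestTemplate;

@SpringBootApplication
public class CallerApplication {

   public static void main(String[] args) {
      SpringApplication.run(CallerApplication.class, args);
   }

   // expose RestTemplate as a bean so it can be injected into CallerController
   @Bean
   RestTemplate restTemplate() {
      return new RestTemplate();
   }
}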

Here’s a list of deployed apps in the demo-1 namespace:

$ kubectl get deploy -n demo-1              
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
caller-service   1/1     1            1           68m
callme-service   1/1     1            1           68m
first-service    1/1     1            1           68m

Development on Kubernetes with Gefyra

Connect to services running on Kubernetes

Now, I will change the code in the CallerController class. Here’s the latest development version:

[Screenshot kubernetes-gefyra-dev-code: the modified CallerController code]
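
The change itself is small. Judging by the responses shown later in this article, the modified ping() method looks more or less like this (a reconstruction, not the exact code from the screenshot):

@GetMapping("/ping")
public String ping() {
   LOGGER.info("Ping: name={}, version={}",
      buildProperties.getName(), version);
   // callme-service is still resolved to the instance running on the cluster
   String response = restTemplate
      .getForObject("http://callme-service:8080/callme/ping", String.class);
   LOGGER.info("Calling: response={}", response);
   return "I'm a local caller-service " + version + ". Calling on k8s... " + response;
}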

Let’s build the app image on the local Docker daemon. We will leverage the Jib Maven Plugin once again. We need to go to the caller-service directory and build the image using the jib:dockerBuild goal.

$ cd caller-service
$ mvn clean package -DskipTests -Pjib jib:dockerBuild

As a result, the image is available in the local Docker daemon as caller-service:1.1.0.
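
You can quickly verify that with the following command:

$ docker images caller-service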

After that, we may run the container with the app locally using the gefyra run command. We use several parameters in the command visible below. Firstly, we need to set the Docker image name with the -i parameter and the container name with the -N parameter (we will refer to that name later). We simulate running the app inside the demo-1 Kubernetes namespace with the -n option. Then, we set a new value (v2) for the VERSION environment variable used by the app and expose the container port 8080 outside as 8090.

$ gefyra run --rm -i caller-service:1.1.0 \
    -n demo-1 \
    -N caller-service \
    --env VERSION=v2 \
    --expose 8090:8080

Gefyra starts our dev container on the local Docker:

$ docker ps -l
CONTAINER ID   IMAGE                  COMMAND                  CREATED              STATUS              PORTS                    NAMES
7fec52bed474   caller-service:1.1.0   "java -cp @/app/jib-…"   About a minute ago   Up About a minute   0.0.0.0:8090->8080/tcp   caller-service

Now, let’s try to call the endpoint exposed under the local port 8090:

$ curl http://localhost:8090/caller/ping
I'm a local caller-service v2. Calling on k8s... I'm callme-service v1

Here are the logs from our local container. As you see, it successfully connected to the callme-service app running on the Kubernetes cluster:

[Screenshot kubernetes-gefyra-docker-logs: logs of the local caller-service container]

Let’s switch to the window with the skaffold run --tail command. It displays the logs of our three apps running on Kubernetes. As expected, there are no new logs for the caller-service pod, since the request was handled by the local container.

[Screenshot kubernetes-gefyra-skaffold-logs: Skaffold logs of the apps running on Kubernetes]

Intercept the traffic sent to Kubernetes

Now, let’s try another scenario. This time, we will call the first-service running on Kubernetes. In order to do that, we will enable port forwarding for its default port.

$ kubectl port-forward svc/first-service -n demo-1 8091:8080

We can now call the first-service running on Kubernetes using the local port 8091. As you see, all the calls are propagated inside the Kubernetes cluster, since the response contains version v1 of the caller-service.

$ curl http://localhost:8091/first/ping 
I'm first-service v1. Calling... I'm caller-service v1. Calling... I'm callme-service v1

Just to make sure, let’s switch to the logs printed by Skaffold.

In order to intercept the traffic sent to a container running on Kubernetes and redirect it to the development container, we need to run the gefyra bridge command. In that command, we have to set the name of the container running in Gefyra with the -N parameter (it was previously set in the gefyra run command). The command will intercept the traffic sent to the caller-service pods (the --target parameter) in the demo-1 namespace (the -n parameter).

$ gefyra bridge -N caller-service \
    -n demo-1 \
    --port 8080:8080 \
    --target deploy/caller-service/caller-service

You should see similar output if the bridge has been established successfully.
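
We can also list the active bridges from the CLI. Assuming I remember the flag correctly for this Gefyra version, the command looks like this:

$ gefyra list --bridges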

Let’s call the first-service via the forwarded port once again. Pay attention to the caller-service version number.

$ curl http://localhost:8091/first/ping
I'm first-service v1. Calling... I'm a local caller-service v2. Calling on k8s... I'm callme-service v1

Let’s double-check the logs. Here are the logs from Kubernetes. As you see, there are no caller-service logs, just the ones from the first-service and callme-service.

Of course, the request is forwarded to the caller-service running on the local Docker, and then the caller-service invokes the endpoint exposed by the callme-service running on the cluster.

Once we finish the development, we may remove all the bridges:

$ gefyra unbridge -A
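
Finally, if we also want to remove Gefyra’s components from the cluster and clean up the local environment, we can run:

$ gefyra down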

2 COMMENTS

Adam Ostrožlík

These technologies like Skaffold, Telepresence and Gefyra are very nice, but I still need to find a reason to develop locally against k8s. As a developer, my local environment looks like this: I have containers for external systems (databases, queues, caches, …) and I only care about the development of an application and writing a Dockerfile. That Dockerfile is the only thing my operations team wants from me, and I have to be sure that the container built from my image simply works. The need to develop against local k8s does not make sense to me, maybe only in situations where you are also an operations guy managing the cluster too. Am I seeing it wrong?

    piotr.minkowski

    The answer probably is: it depends. It depends on your app, on the approach in your org, etc. Kubernetes may address many challenges in your org, or, on the other hand, it may just exist and be used to run containers with apps.
