Integrating Kong Mesh with Styra DAS

In this blog, you will learn how to integrate Kong Mesh with Styra DAS and author a policy in Styra DAS to control traffic in your mesh.

What Is Kong Mesh?

Kong Mesh is an enterprise-grade service mesh that runs on both Kubernetes and VMs on any cloud. Built on top of CNCF’s Kuma and Envoy and focused on simplicity, Kong Mesh enables the microservices transformation with:

  • Out-of-the-box service connectivity and discovery.
  • Zero-trust security.
  • Traffic reliability.
  • Global observability across all traffic, including cross-cluster deployments.

Kong Mesh extends Kuma and Envoy with enterprise features and support while providing native integration with Kong Gateway for a full-stack connectivity platform for all of your services and APIs, across every cloud and environment.

Kong Mesh provides a unique combination of strengths and features in the service mesh ecosystem, specifically designed for the enterprise architect, including:

  • Universal support for both Kubernetes and VM-based services.
  • Single and Multi-Zone deployments to support multi-cloud and multi-cluster environments with global/remote control plane modes, automatic Ingress connectivity, and service discovery.
  • Multi-Mesh to create as many service meshes as you need, using one cluster with low operational costs.
  • Turnkey installation and ease of use, abstracting away the complexity of running a service mesh with easy-to-use policies for managing services and traffic.
  • Full-Stack Connectivity by natively integrating with Kong and Kong Gateway for end-to-end connectivity that goes from the API gateway to the service mesh.
  • Powered by Kuma and Envoy to provide a modern and reliable CNCF open-source foundation for an enterprise service mesh.

Why Kong Mesh?

Leading organizations are looking to service meshes to address these challenges in a scalable and standardized way. With a service mesh, you can:

  • Ensure service connectivity, discovery, and traffic reliability: Apply out-of-the-box traffic management to intelligently route traffic across any platform and any cloud to meet expectations and SLAs.
  • Achieve Zero-Trust Security: Restrict access by default, encrypt all traffic, and only complete transactions when identity is verified.
  • Gain Global Traffic Observability: Gain a detailed understanding of your service behavior to increase application reliability and the efficiency of your teams.

Kong Mesh is the universal service mesh for enterprise organizations focused on simplicity and scalability with Kuma and Envoy. Kong’s service mesh is unique in that it allows you to:

Start, secure, and scale with ease:

  • Deploy a turnkey service mesh with a single command.
  • Group services by attributes to efficiently apply policies.
  • Manage multiple service meshes as tenants of a single control plane to provide scale and reduce operational costs.

Run anywhere:

  • Deploy the service mesh across any environment, including multi-cluster, multi-cloud, and multi-platform.
  • Manage service meshes natively in Kubernetes using CRDs, or start with a service mesh in a VM environment and migrate to Kubernetes at your own pace.

Connect services end-to-end:

  • Integrate into the Kong Gateway platform for full-stack connectivity, including Ingress and Egress traffic for your service mesh.
  • Expose mesh services for internal or external consumption and manage the full life cycle of APIs.

With Kuma as the underlying runtime, Kong Mesh lets you easily support multiple clusters, clouds, and architectures using the multi-zone capability that ships out of the box. This, combined with multi-mesh support, lets you create a service mesh powered by an Envoy proxy for the entire organization in just a few steps. You can do this for both simple and distributed deployments, including multi-cloud, multi-cluster, and hybrid Kubernetes/VMs.

Kong Mesh can support multiple zones (like a Kubernetes cluster, VPC, datacenter, etc.) together in the same distributed deployment. Then, you can create multiple isolated virtual meshes with the same control plane in order to support every team and application in the organization.
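To make this concrete, on Kubernetes an additional mesh is just another Mesh resource applied against the same control plane. The following is a minimal sketch, assuming the standard Kuma CRDs that Kong Mesh builds on; the mesh name team-b is only an example:

# Create a second, isolated virtual mesh on the same control plane.
# "team-b" is a placeholder name used only for illustration.
kubectl apply -f - <<EOF
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: team-b
EOF

Workloads can then be pointed at that mesh, for example via the kuma.io/mesh annotation on Kubernetes pods.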


Kong Mesh Architecture:

The Kong Mesh system type in Styra DAS helps you manage the ingress and egress traffic permitted within your OPA-integrated Kong Mesh. For example, you can permit egress traffic only to a predefined collection of endpoints to minimize the risk of data exfiltration, and implement microservice API authorization.
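For instance, an egress allow-list in this style might look like the minimal Rego sketch below. The package name, the input fields, and the host list are assumptions for illustration; the input shape mirrors the Envoy external-authorization attributes used by the ingress policy later in this post.

package policy.egress

default allow = false

# Hypothetical allow-list of approved egress destinations.
approved_hosts := {"payments.internal", "api.example.com"}

# Permit egress only when the destination host is on the allow-list.
allow {
  approved_hosts[input.attributes.request.http.host]
}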

Ingress Traffic:

Kong Mesh Architecture for Ingress traffic (https://docs.styra.com/img/kongmesh-opa-das-ingress.png)

Egress Traffic:

Kong Mesh Architecture for Egress traffic (https://docs.styra.com/img/kongmesh-opa-das-egress.png)

Prerequisites:

A Styra DAS account; you can sign up for a free account here.

Now, log in to Styra DAS and create a Kong Mesh system.

Steps:

1. Create a Kong Mesh system.
2. Install and configure the Kong Mesh system.
3. Deploy a sample application.
4. Audit the decisions.
5. Author your policy.
6. Publish your policy.
7. Modify and test your policy.

1. Create a Kong Mesh system

A system is Styra’s core unit for policy authoring, validation, and distribution.

Systems are displayed on the left side of the navigation panel. To add a new system, click the + next to SYSTEMS on the left side of the navigation panel.

System Type

2. Install and configure the system

Before you begin:

  • Create a new Kubernetes cluster (refer to k3d).
  • Install Kong Mesh on your cluster (refer to Kong Mesh).

Commands to install Kong Mesh on your cluster:

curl -L https://docs.konghq.com/mesh/installer.sh | sh -
./kumactl install control-plane | kubectl apply -f -
./kuma-cp run

After installing, you can access the Kong Mesh dashboard at the control plane's IP address on port 5681.

Kong Mesh Dashboard
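If the control plane is not exposed on an external IP, one way to reach the GUI is a port-forward. This is a sketch that assumes the default Kong Mesh installation namespace and service name:

# Forward the Kong Mesh API/GUI port to localhost (default install assumed).
kubectl port-forward svc/kong-mesh-control-plane -n kong-mesh-system 5681:5681
# Then open http://localhost:5681/gui in your browser.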

Kong Mesh includes OPA natively, so you only need to configure Kong Mesh to use OPA and configure OPA to connect to DAS.

Additionally, you will use the Styra Local Plane (SLP) to act as a cluster-level OPA bundle cache that helps with cold restarts.

Kong Mesh ▸ Settings tab ▸ Install

FYI, the token is rotated for security reasons.

Follow all 3 instructions to accomplish the following tasks:

1. Configure Kong Mesh to connect to DAS.

2. Enable sidecar injection on the default namespace so that the Kong Mesh data plane is injected into each workload (see the example command after this list).

3. Install the Styra Local Plane (SLP).
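For reference, the sidecar-injection step (task 2 above) typically corresponds to labeling the namespace with the standard Kuma injection label that Kong Mesh understands; the exact manifests generated by the DAS install instructions may differ.

# Enable Kong Mesh sidecar injection for workloads in the default namespace.
kubectl label namespace default kuma.io/sidecar-injection=enabled --overwrite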

Verify that the SLP pod is in a Running state before moving to the next step.

kubectl get pods

Active Pods

3. Deploy a sample application:

Download a sample app deployment and install it.

curl -H 'Authorization: Bearer MOJ_e1_XXXXXXXXXXXXXXXX-q18HWPqilkzk4lXXXXXXXXXXXXjsf8OfHkqi' https://n3mc5c.svc.styra.com/v1/system-types/template.kong-mesh:1.0/assets/quickstart/example-app.yaml | kubectl apply -f -

Sample application:

The sample app consists of two deployments: client-load and example-app.

Example-app:

example-app is a simple HTTP web server that allows employees of a hypothetical organization to obtain salary details via the /finance/salary/{user} API. It also exposes HR information via the /hr/dashboard API.

Client-load:

client-load is a simple shell script that generates pre-configured HTTP GET requests to test the behavior of the deployed policy.
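Conceptually, the load generator behaves roughly like the loop below. This is a hypothetical sketch, not the actual script shipped with the sample; the in-cluster hostname example-app and the request paths are taken from the policy examples later in this post.

# Hypothetical sketch of client-load: repeatedly issue the GET requests
# that the ingress policy in this guide reasons about.
while true; do
  curl -s -o /dev/null -w "/finance/salary/alice -> %{http_code}\n" http://example-app/finance/salary/alice
  curl -s -o /dev/null -w "/hr/dashboard -> %{http_code}\n" http://example-app/hr/dashboard
  sleep 5
done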

To check that the app loaded correctly, use the command:

kubectl get pods

Pods in a Running state

4. Audit the decisions:

Remember that the client-load app regularly issues API requests against the application, and Kong Mesh sends each request to OPA for an authorization decision. To see what decisions OPA has made, you can look at the decision log.

Start by navigating to the Decisions pane.

Kong Mesh ▸ Decisions

Here, you see the record of the two (or more) decisions made by an OPA linked to the Kong Mesh system.

You can inspect the inputs provided as well as the decision made and other metadata by toggling the arrow button next to one of the decisions.

Verify that everything is allowed. This is because the egress, ingress, and app policies are populated with a default rule that allows all requests; you can change this by authoring your own policies.
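In other words, each generated policy initially boils down to something like the sketch below; the actual files generated in your system may contain additional boilerplate.

package policy.ingress

# Default rule shipped with the system: allow everything until you author
# more specific rules.
default allow = true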

5. Author your policy:

Your system’s policies are contained as nested documents in the inventory.

Start by navigating to the Ingress policy, which allows you to control the traffic entering each microservice.

Kong Mesh ▸ policy ▸ ingress ▸ rules.rego

package policy.ingress

default allow = false

# allow /finance/salary/{user} ingress
allow {
  some username
  input.attributes.request.http.method == "GET"
  input.parsed_path = ["finance", "salary", username]
}
Policy for Ingress Rule

This policy lets you allow/deny HTTP API requests to your application.

This policy allows ingress connections to the “/finance/salary/{user}” endpoint in example-app; all other ingress requests are denied.
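If you want to sanity-check the rule before validating it against live decisions, you can also run a couple of Rego unit tests locally with opa test. This is a sketch; the input objects simply mirror the fields the rule above reads.

package policy.ingress

# Alice reading her own salary should be allowed.
test_salary_request_allowed {
  allow with input as {
    "attributes": {"request": {"http": {"method": "GET"}}},
    "parsed_path": ["finance", "salary", "alice"]
  }
}

# The HR dashboard is not covered by the rule, so it should be denied.
test_hr_dashboard_denied {
  not allow with input as {
    "attributes": {"request": {"http": {"method": "GET"}}},
    "parsed_path": ["hr", "dashboard"]
  }
}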

Before you deploy it, check whether it does what you want: click Validate. The Validate button tells you what percentage of past decisions would be changed by your new policy.

Validate

You should see that about 5% of previously allowed decisions are now denied by your new policy.

6. Publish your policy:

Now that you’re confident in your policy change, go ahead and publish it. Published policies are enforced by OPA as soon as OPA syncs with DAS.

Publish

You will see the toolbar shift from DRAFT to SYSTEM, and the DRAFT tag disappear in the inventory.

Go look at the decisions again and see what changed (it might take a minute for OPA to download the policy and another minute for the client to send new requests).

Remember that the sample code you are running periodically runs HTTP requests against the example-app, each of which causes OPA to make an authorization decision.

Example 1: Alice runs a GET on /finance/salary/alice and is allowed to read her own salary (with status code 200)

Example 2: Alice runs a GET on /hr/dashboard and is stopped from reading HR information (with status code 403)
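You can also reproduce these two calls by hand from inside the cluster. This sketch assumes the client-load image ships curl (its script suggests it does) and that the example-app service is reachable under that name:

# Expect 200: Alice reading her own salary.
kubectl exec deploy/client-load -- curl -s -o /dev/null -w "%{http_code}\n" http://example-app/finance/salary/alice
# Expect 403: Alice reading the HR dashboard.
kubectl exec deploy/client-load -- curl -s -o /dev/null -w "%{http_code}\n" http://example-app/hr/dashboard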

Output:

Allowed and Denied Decisions for the Rules

7. Modify and test your policy

Now imagine you want to allow inbound requests to the “/hr/dashboard” endpoint in the example-app, so navigate back to the Ingress policy.

Kong Mesh ▸ policy ▸ ingress ▸ rules.rego

Change the Rego to allow requests to “/hr/dashboard”.

Replace policy:

package policy.ingress

default allow = false

allow {
  some username
  input.attributes.request.http.method == "GET"
  input.parsed_path = ["finance", "salary", username]
}

allow {
  input.attributes.request.http.method == "GET"
  input.parsed_path = ["hr", "dashboard"]
}
Policy for Ingress Rule
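As in step 5, a quick local unit test confirms the intent of the new rule before you touch live traffic (a sketch; the /hr/dashboard case should now be allowed):

package policy.ingress

# With the new rule in place, the HR dashboard request is allowed.
test_hr_dashboard_now_allowed {
  allow with input as {
    "attributes": {"request": {"http": {"method": "GET"}}},
    "parsed_path": ["hr", "dashboard"]
  }
}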

Before you publish the rule, you want to know what impact it will have. Run Validate again to check that the /hr/dashboard denials are now allowed.

Validate

When you’re ready, publish your modified policy.

Publish

Go to your decision logs and check the changes in decisions.

See that your new rule now allows ingress requests to the “/hr/dashboard” endpoint as well.

Output:

Allowed Decisions

Congrats!

You have now learned how to use Kong Mesh with OPA and Styra DAS.

Summary:

In this guide, we added a Kong Mesh system in Styra DAS.

We enabled sidecar injection on the default namespace so that the Kong Mesh data plane is injected into each workload, and installed the Styra Local Plane (SLP) to integrate Kong Mesh with Styra DAS.

We also deployed a sample application.

Finally, we authored, validated, published, and enforced ingress policies on Styra DAS.

  • The OPA-Kong Mesh integration gives you fine-grained access control over microservice API authorization.
  • You can use Styra to write policies, distribute policies to OPA, and manage the decisions made by OPA.

References:

  1. Kong Mesh
  2. Styra Academy