With enterprises containerizing their applications and moving them to the cloud, there is a growing demand for container orchestration solutions. While there are many solutions available, some are mere re-distributions of well-established container orchestration tools, enriched with features and, sometimes, with certain limitations in flexibility.
Although not exhaustive, the list below provides a few different container orchestration tools and services available today:
Hosted Solutions: with hosted solutions, the software is completely managed by the provider, and the user pays hosting and management charges. Some of the vendors providing hosted solutions for Kubernetes are:
One of the common questions in the information technology world is: what is the relation between public/private keys and encryption/decryption? How do we use a public/private key pair to encrypt and decrypt data? How do we use it for digital signatures?
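As a sketch of the underlying mechanics, here is a toy RSA round trip in pure Python. The primes are deliberately tiny and the numbers purely illustrative; real systems use vetted cryptographic libraries and keys of 2048 bits or more.

```python
# Toy RSA with tiny primes -- for illustration only.
p, q = 61, 53
n = p * q                     # modulus, part of both keys
phi = (p - 1) * (q - 1)
e = 17                        # public exponent -> public key is (e, n)
d = pow(e, -1, phi)           # private exponent -> private key is (d, n)

def encrypt(m, pub):          # anyone can encrypt with the public key
    e, n = pub
    return pow(m, e, n)

def decrypt(c, priv):         # only the private-key holder can decrypt
    d, n = priv
    return pow(c, d, n)

def sign(m, priv):            # signing uses the private key...
    d, n = priv
    return pow(m, d, n)

def verify(m, s, pub):        # ...and anyone verifies with the public key
    e, n = pub
    return pow(s, e, n) == m

m = 65
assert decrypt(encrypt(m, (e, n)), (d, n)) == m
assert verify(m, sign(m, (d, n)), (e, n))
```

This is exactly the asymmetry the questions above describe: encryption uses the recipient's public key so only the matching private key can decrypt, while signing uses the sender's private key so anyone holding the public key can verify.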
Real-world applications for Digital Certificates
So far we have briefly illustrated the theory behind the Digital Certificate and its role in delivering PKI. The following pages look at the practicalities of using a Digital Certificate: where to find certificates on your PC, and what they actually look like.
Using Digital Certificates to deliver the five primary security functions
The CA attests to the identity of the Certificate applicant when it signs the Digital Certificate.
The Public Key within the Digital Certificate is used to encrypt data to ensure that only the intended recipient can decrypt and read it.
By digitally signing the message or data, the sender gives the recipient a means of detecting any tampering with the signed message or data.
A signed message proves origin, as only the sender has access to the Private Key used to sign the data.
Access Control may be achieved through use of the Digital Certificate for identification (and hence the replacement of passwords etc). Additionally, as data can be encrypted for specific individuals, we can ensure that only the intended individuals gain access to the information within the encrypted data.
Figure 1 illustrates the service mesh concept at its most basic level. There are four service clusters (A-D). Each service instance is colocated with a sidecar network proxy. All network traffic (HTTP, REST, gRPC, Redis, etc.) from an individual service instance flows via its local sidecar proxy to the appropriate destination. Thus, the service instance is not aware of the network at large and only knows about its local proxy. In effect, the distributed system network has been abstracted away from the service programmer.
The data plane
In a service mesh, the sidecar proxy performs the following tasks:
Service discovery: What are all of the upstream/backend service instances that are available?
Health checking: Are the upstream service instances returned by service discovery healthy and ready to accept network traffic? This may include both active (e.g., out-of-band pings to a /healthcheck endpoint) and passive (e.g., using 3 consecutive 5xx as an indication of an unhealthy state) health checking.
Routing: Given a REST request for /foo from the local service instance, to which upstream service cluster should the request be sent?
Load balancing: Once an upstream service cluster has been selected during routing, to which upstream service instance should the request be sent? With what timeout? With what circuit breaking settings? If the request fails should it be retried?
Authentication and authorization: For incoming requests, can the caller be cryptographically attested using mTLS or some other mechanism? If attested, is the caller allowed to invoke the requested endpoint or should an unauthenticated response be returned?
Observability: For each request, detailed statistics, logging, and distributed tracing data should be generated so that operators can understand distributed traffic flow and debug problems as they occur.
All of the previous items are the responsibility of the service mesh data plane. In effect, the sidecar proxy is the data plane. Said another way, the data plane is responsible for conditionally translating, forwarding, and observing every network packet that flows to and from a service instance.
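The passive health checking heuristic mentioned above (three consecutive 5xx responses marking a host unhealthy) can be sketched in a few lines of Python. The class and host addresses are invented for the example and do not correspond to any real proxy's API:

```python
class PassiveHealthChecker:
    """Marks an upstream host unhealthy after N consecutive 5xx responses."""
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.consecutive_5xx = {}   # host -> count of consecutive 5xx
        self.unhealthy = set()

    def record(self, host, status_code):
        if 500 <= status_code < 600:
            self.consecutive_5xx[host] = self.consecutive_5xx.get(host, 0) + 1
            if self.consecutive_5xx[host] >= self.threshold:
                self.unhealthy.add(host)
        else:
            # Any successful response resets the streak and restores the host.
            self.consecutive_5xx[host] = 0
            self.unhealthy.discard(host)

    def healthy_hosts(self, hosts):
        return [h for h in hosts if h not in self.unhealthy]

checker = PassiveHealthChecker(threshold=3)
hosts = ["10.0.0.1", "10.0.0.2"]
for _ in range(3):
    checker.record("10.0.0.1", 503)   # three consecutive 5xx
print(checker.healthy_hosts(hosts))   # prints ['10.0.0.2']
```

A load balancer in the sidecar would consult `healthy_hosts` when picking an upstream instance, so traffic drains away from failing hosts without any active probing.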
The control plane
The network abstraction that the sidecar proxy data plane provides is magical. However, how does the proxy actually know to route /foo to service B? How is the service discovery data that the proxy queries populated? How are the load balancing, timeout, circuit breaking, etc. settings specified? How are deploys accomplished using blue/green or gradual traffic shifting semantics? Who configures systemwide authentication and authorization settings?
All of the above items are the responsibility of the service mesh control plane. The control plane takes a set of isolated stateless sidecar proxies and turns them into a distributed system.
The reason that I think many technologists find the split concepts of data plane and control plane confusing is that for most people the data plane is familiar while the control plane is foreign. We’ve been around physical network routers and switches for a long time. We understand that packets/requests need to go from point A to point B and that we can use hardware and software to make that happen. The new breed of software proxies are just really fancy versions of tools we have been using for a long time.
However, we have also been using control planes for a long time, though most network operators might not associate that portion of the system with a piece of technology. The reason for this is simple: most control planes in use today are… us.
Figure 2 shows what I call the “human control plane.” In this type of deployment (which is still extremely common), a (likely grumpy) human operator crafts static configurations — potentially with the aid of some scripting tools — and deploys them using some type of bespoke process to all of the proxies. The proxies then consume the configuration and proceed with data plane processing using the updated settings.
Figure 3 shows an “advanced” service mesh control plane. It is composed of the following pieces:
The human: There is still a (hopefully less grumpy) human in the loop making high level decisions about the overall system.
Control plane UI: The human interacts with some type of UI to control the system. This might be a web portal, a CLI, or some other interface. Through the UI, the operator has access to global system configuration settings such as deploy control (blue/green and/or traffic shifting), authentication and authorization settings, route table specification (e.g., when service A requests /foo what happens), and load balancer settings (e.g., timeouts, retries, circuit breakers, etc.).
Workload scheduler: Services are run on an infrastructure via some type of scheduling system (e.g., Kubernetes or Nomad). The scheduler is responsible for bootstrapping a service along with its sidecar proxy.
Service discovery: As the scheduler starts and stops service instances it reports liveness state into a service discovery system.
Sidecar proxy configuration APIs: The sidecar proxies dynamically fetch state from various system components in an eventually consistent way without operator involvement. The entire system composed of all currently running service instances and sidecar proxies eventually converge. Envoy’s universal data plane API is one such example of how this works in practice.
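One way to picture this eventually consistent configuration fetch is a simple polling loop: each proxy repeatedly asks a management server for the latest state and swaps it in when the version changes, with no operator in the loop. The following Python sketch is illustrative only; the names do not reflect any real proxy's API:

```python
class SidecarConfigPoller:
    """A proxy-side poller that converges on whatever config the
    management server currently serves (eventual consistency)."""
    def __init__(self, fetch_fn):
        self.fetch_fn = fetch_fn       # returns (version, route_table)
        self.version = None
        self.route_table = {}

    def poll_once(self):
        version, routes = self.fetch_fn()
        if version != self.version:    # only swap config on a new version
            self.version = version
            self.route_table = routes

# Simulated management server state; two proxies converge on it.
state = {"version": "v2", "routes": {"/foo": "service-b"}}
fetch = lambda: (state["version"], dict(state["routes"]))

proxies = [SidecarConfigPoller(fetch) for _ in range(2)]
for p in proxies:
    p.poll_once()
assert all(p.route_table == {"/foo": "service-b"} for p in proxies)
```

In a real mesh the fetch would be a streaming or long-polling API call rather than a loop, but the convergence property is the same: every running sidecar eventually reflects the control plane's latest policy.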
Ultimately, the goal of a control plane is to set policy that will eventually be enacted by the data plane. More advanced control planes will abstract more of the system from the operator and require less handholding (assuming they are working correctly!).
Data plane vs. control plane summary
Service mesh data plane: Touches every packet/request in the system. Responsible for service discovery, health checking, routing, load balancing, authentication/authorization, and observability.
Service mesh control plane: Provides policy and configuration for all of the running data planes in the mesh. Does not touch any packets/requests in the system. The control plane turns all of the data planes into a distributed system.
Current project landscape
With the above explanation out of the way, let’s take a look at the current service mesh landscape.
Instead of doing an in-depth analysis of each solution above, I’m going to briefly touch on some of the points that I think are causing the majority of the ecosystem confusion right now.
Linkerd was one of the first service mesh data plane proxies on the scene in early 2016 and has done a fantastic job of increasing awareness and excitement around the service mesh design pattern. Envoy followed about 6 months later (though was in production at Lyft since late 2015). Linkerd and Envoy are the two projects that are most commonly mentioned when discussing “service meshes.”
Istio was announced in May 2017. The project goals of Istio look very much like the advanced control plane illustrated in figure 3. The default proxy of Istio is Envoy. Thus, Istio is the control plane and Envoy is the data plane. In a short time, Istio has garnered a lot of excitement, and other data planes have begun integrating as replacements for Envoy (both Linkerd and NGINX have demonstrated Istio integration). The fact that a single control plane can use different data planes means that the control plane and data plane are not necessarily tightly coupled. An API such as Envoy’s universal data plane API can form a bridge between the two pieces of the system.
Nelson and SmartStack help further illustrate the control plane vs. data plane divide. Nelson uses Envoy as its proxy and builds a robust service mesh control plane around the HashiCorp stack (i.e. Nomad, etc.). SmartStack was perhaps the first of the new wave of service meshes. SmartStack forms a control plane around HAProxy or NGINX, further demonstrating that it’s possible to decouple the service mesh control plane and the data plane.
The service mesh microservice networking space is getting a lot of attention right now (rightly so!) with more projects and vendors entering all the time. Over the next several years, we will see a lot of innovation in both data planes and control planes, and further intermixing of the various components. The ultimate result should be microservice networking that is more transparent and magical to the (hopefully less and less grumpy) operator.
A service mesh is composed of two disparate pieces: the data plane and the control plane. Both are required. Without both the system will not work.
Everyone is familiar with the control plane — albeit the control plane might be you!
All of the data planes compete with each other on features, performance, configurability, and extensibility.
All of the control planes compete with each other on features, configurability, extensibility, and usability.
A single control plane may contain the right abstractions and APIs such that multiple data planes can be used.
This tutorial will walk you through how to create a single job using a DSL script, and then add a few more.
1. Creating the Seed Job
We use a Free-style Jenkins Job as a place to run the DSL scripts. We call this a “Seed Job”. Since it’s a normal Job you’ll get all the standard benefits of Jenkins: history, logs, emails, etc. We further enhance the Seed Job to show which Jobs got created from the DSL script, in each build and on the Seed Job page.
The first step is to create this Job.
From the Jenkins main page, select either the “New Job” or “Create new Jobs” link. A new job creation page will be displayed.
Fill in the name field, e.g. “tutorial-job-dsl-1”
Select the “Build a free-style software project” radio button.
Click the OK button
2. Adding a DSL Script
Now that we have created our empty Seed Job we need to configure it. We’re going to add a build step to execute the Job DSL script. Then we can paste in an example script as follows:
On the configure screen, scroll down to the “Build: Add build step” pull-down menu.
From the pull-down menu, select “Process Job DSLs”. You should be presented with two radio buttons. The default will be “Use the provided DSL script”, and a text input box will be displayed below it.
Copy the following DSL Script block into the input box. (Note: The job resulting from this will be called DSL-Tutorial-1-Test. It’ll check a GitHub repo every 15 minutes, then run ‘clean test’ if any changes are found.)
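A minimal Job DSL script matching that description might look like this. The repository URL is a placeholder; substitute your own (the script also assumes the Git plugin and a Maven installation are configured in Jenkins):

```groovy
job('DSL-Tutorial-1-Test') {
    scm {
        git('git://github.com/your-org/your-repo.git')  // placeholder repo URL
    }
    triggers {
        scm('H/15 * * * *')   // poll the repo every 15 minutes
    }
    steps {
        maven('clean test')   // run 'clean test' when changes are found
    }
}
```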
Click the “Save” button. You’ll be shown the overview page for the new Seed job you just created.
3. Run the Seed Job and Generate the new Jobs from the Script
The Seed Job is now all set up and can be run, generating the Job we just scripted.
(Note: As it stands right now, we didn’t set up any build triggers to run the job automatically, but we could have, using the standard Jenkins UI in Step 2.)
Let’s just run it ourselves manually.
Click the “Build Now” link/button on the tutorial-job-dsl-1 overview page. It should only take a second to run.
Look at the build result to see a link to the new Job which has been created by the running of your DSL script in the Seed Job. You should see this in the section called “Generated Jobs”. (If you don’t see it, you probably have Auto-Refresh disabled. Enable it, or just refresh the page and then you’ll see the new job.)
Follow this link to your new Job. You can run this new script-generated Job manually or wait the 15 minutes for the SCM trigger to kick in.
(Note: if you have a new Jenkins server, you might be missing the Git plugin or a Maven installation which Jenkins knows about. That could cause this job to fail when run. If you need to add these, be sure to re-run the Seed Job to make sure the Scripted Job is configured correctly – it won’t be if you ran without all the necessary plugins installed in Jenkins.)
(Additional Note: if the build still fails with these plugins / config set up, it may be because the new job is using a “default” maven rather than the one you just added.)
4. Adding additional Jobs to the DSL Script
To show some more of the power of the DSL Plugin, let’s create a bunch more Jobs.
Go back to the ‘tutorial-job-dsl-1’ Seed Job
Click the “Configure” link/button and navigate back down to the “Process Job DSLs” build step.
Add the following into the text box, below the script which we added at the beginning.
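For example, a script along these lines generates several jobs, each running a different shard of the test suite. The repository URL and the `test.suite` system property are invented for illustration; adapt both to your project:

```groovy
// Shard the test suite across several generated jobs.
def giturl = 'git://github.com/your-org/your-repo.git'  // placeholder repo URL
for (i in 0..10) {
    job("DSL-Tutorial-1-Test-${i}") {
        scm {
            git(giturl)
        }
        steps {
            // Hypothetical property selecting which shard of tests to run.
            maven("test -Dtest.suite=${i}")
        }
    }
}
```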
(Note: The practicality of this block is questionable, but it could be used to shard your tests into different jobs.)
There are numerous criteria a team can use to define their Definition of Done. This ensures that teams deliver features that are completed in terms of functionality and quality. Definition of Done is an auditable checklist. The following is a set of possible criteria and activities for a DoD:
Unit Tests Passed
Acceptance Criteria Met
Functional Tests Passed
Non-Functional Requirements Met
Product Owner Accepts User Story
User Story Example
The following is an example of a user story.