Posted in Software Engineering

Stuck in terminating state

So AWS launched their hosted Kubernetes offering, EKS (Elastic Kubernetes Service), last year, and we merrily jumped on board and deployed most of our services in the months since. Everything was looking great and working amazingly until I started doing some cleanup on the platform. I am sure it was at least partly my fault: by adding deployments, removing deployments, moving nodes around, terminating nodes, and so on over a period of time, I pretty much scrambled its marbles.

I removed everything from a namespace I had created called ‘logging’, as I was moving all the resources to a different namespace called ‘monitoring’. I then proceeded to delete the namespace using kubectl delete namespace logging. Everything looked great, but when I listed the namespaces, that namespace was showing a state of Terminating. I proceeded with my day-to-day DevOps routine and came back later to find the namespace still stuck in the Terminating state. I ignored it and went and had a great weekend. Coming back on Monday, that same namespace was still there, stuck on Terminating. Frustrated, I turned to Google and started looking for anyone who had the same issue. There were quite a few results; I tried most of them to no avail until I stumbled on this issue: https://github.com/kubernetes/kubernetes/issues/60807. I tried most of the recommendations there, also to no avail, until I started mixing and matching some of the commands, and then finally I got rid of that namespace stuck in the Terminating state. I know, I could have left it there and not bothered with it, but my OCD kicked in and I wanted it gone 😉

So it turns out I had to remove the kubernetes finalizer from the namespace. But the catch was not to just apply the change using kubectl apply -f; the change had to go through the cluster API for it to work.


Here are the instructions I used to delete that namespace:

Step 1: Dump the descriptor as JSON to a file

kubectl get namespace logging -o json > logging.json

Open the file for editing:

{
    "apiVersion": "v1",
    "kind": "Namespace",
    "metadata": {
        "creationTimestamp": "2019-05-14T13:55:20Z",
        "labels": {
            "name": "logging"
        },
        "name": "logging",
        "resourceVersion": "29571918",
        "selfLink": "/api/v1/namespaces/logging",
        "uid": "e9516a8b-764f-11e9-9621-0a9c41ba9af6"
    },
    "spec": {
        "finalizers": [
            "kubernetes"
        ]
    },
    "status": {
        "phase": "Terminating"
    }
}

Remove kubernetes from the finalizers array:

{
    "apiVersion": "v1",
    "kind": "Namespace",
    "metadata": {
        "creationTimestamp": "2019-05-14T13:55:20Z",
        "labels": {
            "name": "logging"
        },
        "name": "logging",
        "resourceVersion": "29571918",
        "selfLink": "/api/v1/namespaces/logging",
        "uid": "e9516a8b-764f-11e9-9621-0a9c41ba9af6"
    },
    "spec": {
        "finalizers": [
        ]
    },
    "status": {
        "phase": "Terminating"
    }
}

Step 2: Execute the cleanup command

Now that we have that set up, we can instruct our cluster to get rid of that annoying namespace:

kubectl replace --raw "/api/v1/namespaces/logging/finalize" -f ./logging.json

Where the path is /api/v1/namespaces/<your_namespace_here>/finalize.

After running that command, the namespace should now be absent from your namespaces list.
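
If your kubectl version does not support replace --raw, a roughly equivalent approach (a sketch based on the workaround discussed in the GitHub issue above, assuming kubectl proxy is running on its default port 8001) is to PUT the edited JSON to the same finalize endpoint:

# Start a local proxy to the cluster API in one terminal
kubectl proxy

# In another terminal, send the edited descriptor to the finalize subresource
curl -k -H "Content-Type: application/json" \
  -X PUT \
  --data-binary @logging.json \
  http://127.0.0.1:8001/api/v1/namespaces/logging/finalize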

The key thing to note here is the resource you are modifying. In our case it is a namespace, but it could be a pod, deployment, service, etc.; the same general approach of clearing the finalizers can be applied to other resources stuck in the Terminating state.

Posted in Software Engineering

12 factor application and kubernetes

The Twelve Factor App is a manifesto on architectures for Software as a Service created by Heroku. The idea is that in order to be really suited to SaaS and avoid problems with software erosion — where over time an application that’s not updated gets to be out of sync with the latest operating systems, security patches, and so on — an app should follow these 12 principles:

  1. Codebase
    One codebase tracked in revision control, many deploys
  2. Dependencies
    Explicitly declare and isolate dependencies
  3. Config
    Store config in the environment
  4. Backing services
    Treat backing services as attached resources
  5. Build, release, run
    Strictly separate build and run stages
  6. Processes
    Execute the app as one or more stateless processes
  7. Port binding
    Export services via port binding
  8. Concurrency
    Scale-out via the process model
  9. Disposability
    Maximize robustness with fast startup and graceful shutdown
  10. Dev/prod parity
    Keep development, staging, and production as similar as possible
  11. Logs
    Treat logs as event streams
  12. Admin processes
    Run admin/management tasks as one-off processes

Let’s look at what all of this means in terms of Kubernetes applications.

Principle 1 Codebase

Principle 1 of a 12 Factor App is “One codebase tracked in revision control, many deploys”.

For Kubernetes applications, this principle is actually embedded in the nature of container orchestration itself. Typically, you create your code using a source control repository such as a git repo, then store specific versions of your images in a registry such as Docker Hub. When you define the containers to be orchestrated as part of a Kubernetes Pod, Deployment, or DaemonSet, you also specify a particular version of the image, as in:

...
spec:
  containers:
  - name: acct-app
    image: acctapp:v3
...

In this way, you might have multiple versions of your application running in different deployments.

Applications can also behave differently depending on the configuration information with which they run.

Principle 2 Dependencies

Principle 2 of a 12 Factor App is “Explicitly declare and isolate dependencies”.

Making sure that an application’s dependencies are satisfied is something that is practically assumed. For a 12 factor app, that includes not just making sure that the application-specific libraries are available, but also not counting on, say, shelling out to the operating system and assuming system libraries such as curl will be there.  A 12-factor app must be self-contained.

That includes making sure that the application is isolated enough that it’s not affected by conflicting libraries that might be installed on the host machine.

Fortunately, if an application does have any specific or unusual system requirements, both of these requirements are handily satisfied by containers; the container includes all of the dependencies on which the application relies, and also provides a reasonably isolated environment in which the container runs.  (Contrary to popular belief, container environments are not completely isolated, but for most situations, they are Good Enough.)

For applications that are modularized and depend on other components, such as an HTTP service and a log fetcher, Kubernetes provides a way to combine all of these pieces into a single Pod, for an environment that encapsulates those pieces appropriately.
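
For example, here is a minimal sketch of such a Pod, combining a hypothetical HTTP service container with a log-fetcher sidecar; the image names and the shared volume are illustrative assumptions, not something prescribed by the manifesto:

apiVersion: v1
kind: Pod
metadata:
  name: web-with-log-fetcher
spec:
  containers:
  - name: http-service                    # the main application container
    image: example/http-service:1.0       # hypothetical image
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-fetcher                     # sidecar that ships the logs elsewhere
    image: example/log-fetcher:1.0        # hypothetical image
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
      readOnly: true
  volumes:
  - name: logs
    emptyDir: {}                          # scratch volume shared by the two containers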

Principle 3 Config

Principle 3 of a 12 Factor App is “Store config in the environment”.

The idea behind this principle is that an application should be completely independent from its configuration. In other words, you should be able to move it to another environment without having to touch the source code.

Some developers achieve this goal by creating configuration files of some sort, specifying details such as directories, hostnames, and database credentials. This is an improvement, but it does carry the risk that someone will check a config file into the source control repository.

Instead, 12 factor apps store their configurations as environment variables; these are, as the manifesto says, “unlikely to be checked into the repository by accident”, and they’re operating system independent.

Kubernetes enables you to specify environment variables in manifests via the Downward API, but as these manifests themselves do get checked into source control, that’s not a complete solution.

Instead, you can specify that environment variables should be populated by the contents of Kubernetes ConfigMaps or Secrets, which can be kept separate from the application.  For example, you might define a Pod as:

apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
    - name: mycontainer
      image: redis
      env:
        - name: SECRET_USERNAME
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: username
        - name: SECRET_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: password
        - name: CONFIG_VERSION
          valueFrom:
            configMapKeyRef:
              name: redis-app-config
              key: version.id

As you can see, this Pod receives three environment variables, SECRET_USERNAME, SECRET_PASSWORD, and CONFIG_VERSION: the first two from referenced Kubernetes Secrets, and the third from a Kubernetes ConfigMap. This enables you to keep these values out of configuration files.

Of course, there’s still a risk of someone mishandling the files used to create these objects, but it’s easier to keep them together and institute secure handling policies than it is to weed out dozens of config files scattered around a deployment.

What’s more, there are those in the community who point out that even environment variables are not necessarily safe, for their own reasons. For example, if an app crashes, it may save all of the environment variables to a log or even transmit them to another service. Diogo Mónica points to a tool called Keywhiz that you can use with Kubernetes to create secure secret storage.

Principle 4 Backing services

Principle 4 of the 12 Factor App is “Treat backing services as attached resources”.

In a 12 Factor app, any services that are not part of the core application, such as databases, external storage, or message queues, should be accessed as a service — via an HTTP or similar request — and specified in the configuration, so that the source of the service can be changed without affecting the core code of the application.

For example, if your application uses a message queuing system, you should be able to change from RabbitMQ to ZeroMQ (or ActiveMQ or even something else) without having to change anything but configuration information.

This requirement has two implications for a Kubernetes-based application.

First, it means that you must think about how your applications take in (and give out) information. For example, if you have a backing database, you wouldn’t want to have a local MySQL instance, even if you’re replicating it to other instances. Instead, you would want to have a separate container that handles database operations, and make those operations callable via an API. This way, if you needed to change to, say, PostgreSQL or a remotely hosted MySQL server, you could create a new container image, update the Pod definition, and restart the Pod (or more likely the Deployment or StatefulSet managing it).

Similarly, if you’re storing credentials or address information in environment variables backed by a ConfigMap, you can change that information and replace the Pod.

Note that both of these examples assume that though you’re not making any changes to the source code (or even the container image for the main application) you will need to replace the Pod; the ability to do this is actually another principle of a 12 Factor App.
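
As a minimal sketch of that configuration-driven approach (the names are illustrative assumptions), the backing database’s address could live in a ConfigMap that the Pod consumes as environment variables, so switching providers is purely a configuration change followed by a Pod replacement:

apiVersion: v1
kind: ConfigMap
metadata:
  name: db-config
data:
  DB_HOST: mysql.example.internal    # hypothetical hostname; change this to repoint the app
  DB_PORT: "3306"
---
apiVersion: v1
kind: Pod
metadata:
  name: acct-app
spec:
  containers:
  - name: acct-app
    image: acctapp:v3
    envFrom:
    - configMapRef:
        name: db-config              # swap the backing service by editing only this ConfigMap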

Principle 5 Build, release, run

Principle 5 of the 12 Factor App is “Strictly separate build and run stages”.

These days it’s hard to imagine a situation where this is not true, but a twelve-factor app must have a separate build stage.  In other words, you should be able to build or compile the code, then combine that with specific configuration information to create a specific release, then deliberately run the release.

Releases should be identifiable.  You should be able to say, “This deployment is running Release 1.14 of this application” or something similar, the same way we say we’re running “the OpenStack Ocata release” or “Kubernetes 1.6”.  They should also be immutable; any changes should lead to a new release.  If this sounds daunting, remember that when we say “application” we’re no longer talking about large, monolithic releases.  Instead, we’re talking about very specific microservices, each of which has its own release, and which can bump releases without causing errors in consuming services.

All of this is so that when the app is running, that “run” process can be completely automated. Twelve-factor apps need to be capable of running in an automated fashion because they need to be capable of restarting should there be a problem.

Translating this to the Kubernetes realm, we’ve already said that the application needs to be stored in source control, then built with all of its dependencies.  That’s your build process.  We talked about separating out the configuration information, so that’s what needs to be combined with the build to make a release. And the ability to automatically run the application — or multiple copies of the application — is precisely what Kubernetes constructs like Deployments, ReplicaSets, and DaemonSets do.
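
Concretely, a sketch of that flow (the image name, registry, and Deployment name are hypothetical) is to build and tag an immutable release, push it, and let Kubernetes run it:

# Build stage: produce an immutable, versioned image
docker build -t registry.example.com/acctapp:1.14 .
docker push registry.example.com/acctapp:1.14

# Release/run stage: point the Deployment at the new release and let Kubernetes roll it out
kubectl set image deployment/acctapp acctapp=registry.example.com/acctapp:1.14
kubectl rollout status deployment/acctapp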

Principle 6 Processes

Principle 6 of the 12 Factor App is “Execute the app as one or more stateless processes”.

Stateless processes are a core idea behind cloud-native applications. Every twelve-factor application needs to run in individual, share-nothing processes. That means that any time you need to persist information, it needs to be stored in a backing service such as a database.

If you’re new to cloud application programming, this might be deceptively simple; many developers are used to “sticky” sessions, storing information in the session with the confidence that the next request will come to the same server. In a cloud application, however, you must never make that assumption.

Instead, if you’re running a Kubernetes-based application that hosts multiple copies of a particular pod, you must assume that subsequent requests can go anywhere.  To solve this problem, you will want to use some sort of backing volume or database for persistence.
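
For instance, a minimal sketch of keeping persistent data off the Pod’s local filesystem is to claim storage separately and mount it (the claim name and size are assumptions):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: stateless-app
spec:
  containers:
  - name: app
    image: example/app:1.0          # hypothetical image
    volumeMounts:
    - name: data
      mountPath: /data              # data here outlives any individual Pod
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: app-data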

One caveat to this principle is that Kubernetes StatefulSets can enable you to create Pods with stable network identities, so that you can, theoretically, direct requests to a particular pod. Technically speaking, if the process doesn’t actually store state, and the pod can be deleted and recreated and still function properly, it satisfies this requirement — but that’s probably pushing it a bit.

Principle 7 Port binding

Principle 7 of the 12 Factor App is “Export services via port binding”.

In an environment where we’re assuming that different functionalities are handled by different processes, it’s easy to make the connection that these functionalities should be available via a protocol such as HTTP, so it’s common for applications to be run behind web servers such as Apache or Tomcat.  Twelve-factor apps, however, should not depend on an additional application in that way; remember, every function should be in its own process, isolated from everything else. Instead, the 12 Factor App manifesto recommends adding a web server library or something similar to the app itself, so that the app can await requests on a defined port, whether it’s using HTTP or another protocol.

In a Kubernetes-based application, this is done partly through the architecture of the application itself, and partly by making sure that the application has all of its dependencies as part of the creation of the base containers on which the application is built.
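
In Kubernetes terms, the app listens on its own port inside the container and a Service exposes it; a minimal sketch (the names and ports are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: acct-app
spec:
  selector:
    app: acct-app                   # matches the Pods that bind the port themselves
  ports:
  - port: 80                        # port the Service exposes to the cluster
    targetPort: 8080                # port the app itself listens on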

Principle 8 Concurrency

Principle 8 of the 12 Factor App is to “Scale-out via the process model”.

When you’re writing a twelve-factor app, make sure that you’re designing it to be scaled out, rather than scaled up. That means that in order to add more capacity, you should be able to add more instances rather than more memory or CPU to the machine on which the app is running. Note that this specifically means being able to start additional processes on additional machines, which is, fortunately, a key capability of Kubernetes.
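
In Kubernetes, scaling out is a matter of asking for more replicas rather than a bigger machine; for example (the Deployment name is hypothetical):

# Scale out by adding instances, not by growing the machine
kubectl scale deployment/acctapp --replicas=10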

Principle 9 Disposability

Principle 9 of the 12 Factor App is to “Maximize robustness with fast startup and graceful shutdown”.

It seems like this principle was tailor-made for containers and Kubernetes-based applications. The idea that processes should be disposable means that at any time, an application can die and the user won’t be affected, either because there are others to take its place, because it’ll start right up again, or both.

Containers are built on this principle, of course, and Kubernetes structures that manage multiple instances and maintain a certain level of availability even in the face of problems, such as ReplicaSets, complete the picture.
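
A minimal sketch of what this looks like in a Pod spec (the probe path and timings are illustrative assumptions): fast startup is signalled with a readiness probe, and graceful shutdown gets a preStop hook plus a termination grace period:

apiVersion: v1
kind: Pod
metadata:
  name: disposable-app
spec:
  terminationGracePeriodSeconds: 30        # time allowed for a graceful shutdown
  containers:
  - name: app
    image: example/app:1.0                 # hypothetical image
    readinessProbe:                        # only receive traffic once startup is complete
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 2
      periodSeconds: 5
    lifecycle:
      preStop:                             # let in-flight requests drain before shutdown
        exec:
          command: ["sh", "-c", "sleep 5"]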

Principle 10 Dev/prod parity

Principle 10 of the 12 Factor App is “Keep development, staging, and production as similar as possible”.

This is another principle that seems like it should be obvious, but is deeper than most people think. On the surface level, it does mean that you should have identical development, staging, and production environments, inasmuch as that is possible. One way to accomplish this is through the use of Kubernetes namespaces, enabling you to (theoretically) run code on the same actual cluster against the same actual systems while still keeping environments separate. In some situations, you can also use tools such as Minikube or kubeadm to create near-clones of production systems.

At a deeper level, though, as the Twelve-Factor App manifesto puts it, it’s about three different types of “gaps”:

  • The time gap: A developer may work on code that takes days, weeks, or even months to go into production.
  • The personnel gap: Developers write code, ops engineers deploy it.
  • The tools gap: Developers may be using a stack like Nginx, SQLite, and OS X, while the production deploy uses Apache, MySQL, and Linux.

The goal here is to create a Continuous Integration/Continuous Deployment situation in which changes can go into production virtually immediately (after testing, of course!), deployed by the developers who wrote them so they can actually see them in production, using the same tools with which the code was actually written, in order to minimize the possibility of compatibility errors between environments.

Some of these factors are outside of the realm of Kubernetes, of course; the personnel gap is a cultural issue, for example. The time and tools gaps, however, can be helped in two ways.

For the time gap, Kubernetes-based applications are, of course, based on containers, which themselves are built from image definitions stored in version control, so they lend themselves to CI/CD. They can also be updated via rolling updates that can be rolled back in case of problems, so they’re well-suited to this kind of environment.

As far as the tools gap is concerned, the architecture of Kubernetes-based applications makes it easier to manage, both by making local dependencies simple to include in the various images, and by modularizing the application in such a way that external backing services can be standardized.

Principle 11 Logs

Principle 11 of the 12 Factor App is to “Treat logs as event streams”.

While most traditional applications store log information in a file, the Twelve-Factor app directs it, instead, to stdout as a stream of events; it’s the execution environment that’s responsible for collecting those events. That might be as simple as redirecting stdout to a file, but in most cases it involves using a log router such as Fluentd and saving the logs to Hadoop or a service such as Splunk.

In Kubernetes, you have at least two choices for automatic logging capture: Stackdriver Logging if you’re using Google Cloud, and Elasticsearch if you’re not.  You can find more information on setting up Kubernetes logging destinations in the Kubernetes logging documentation.

Principle 12 Admin processes

Principle 12 of the 12 Factor App is “Run admin/management tasks as one-off processes”.

This principle involves separating admin tasks such as migrating a database or inspecting records from the rest of the application. Even though they’re separate, however, they must still be run in the same environment and against the same base code and configuration as the application, and their code must be shipped alongside the application to prevent drift.

You can implement this in a number of different ways in Kubernetes-based applications, depending on the size and scale of the application itself. For example, for small tasks, you might use kubectl exec to operate on a specific container, or you can use a Kubernetes Job to run a self-contained task. For more complicated tasks that involve orchestration of changes, however, you can also use Kubernetes Helm charts.
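
For example, a one-off database migration could run as a Kubernetes Job built from the same application image (a sketch; the image, command, and ConfigMap name are assumptions):

apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
spec:
  backoffLimit: 2                    # retry a couple of times, then give up
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: migrate
        image: acctapp:v3            # same image and config as the running application
        command: ["./manage", "migrate"]   # hypothetical admin command shipped with the app
        envFrom:
        - configMapRef:
            name: db-config          # reuse the application's configuration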

Posted in Software Engineering

Kubernetes HPA, Custom metrics, and prometheus

The early versions of the Horizontal Pod Autoscaler (HPA) were limited in features and only supported scaling deployments based on CPU metrics. More recent Kubernetes releases added support for memory, multiple metrics and, in the latest versions, custom metrics.

In this article I’ll cover how to store custom metrics using Prometheus and use them to scale with the Horizontal Pod Autoscaler.

Prerequisites

HELM

Install Helm version 3 (https://helm.sh/). The following are the steps to install Helm:

  • From Homebrew (macOS)

Members of the Kubernetes community have contributed a Helm formula build to Homebrew. This formula is generally up to date.

brew install helm

(Note: There is also a formula for emacs-helm, which is a different project.)

  • From Chocolatey (Windows)

Members of the Kubernetes community have contributed a Helm package build to Chocolatey. This package is generally up to date.

choco install kubernetes-helm

  • From Snap (Linux)

The Snapcrafters community maintains the Snap version of the Helm package:

sudo snap install helm --classic

  • From Script

Helm now has an installer script that will automatically grab the latest version of Helm and install it locally.

You can fetch that script, and then execute it locally. It’s well documented so that you can read through it and understand what it is doing before you run it.

$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
$ chmod 700 get_helm.sh
$ ./get_helm.sh

Yes, you can curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash if you want to live on the edge.

Kubernetes Metrics Server

Run the following command; this will install metrics-server into the kube-system namespace:

helm install metrics-server stable/metrics-server -n kube-system
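
Once the metrics-server pod is running (this can take a minute or so), you can sanity-check it by querying the resource metrics it serves:

kubectl top nodes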

Prometheus

Run the following command; this will install Prometheus into the kube-system namespace:

helm install prometheus stable/prometheus  -n kube-system

Prometheus Adapter

The purpose of the Prometheus Adapter is to expose Prometheus metrics through the Kubernetes custom metrics API. Custom metrics are used in Kubernetes by Horizontal Pod Autoscalers to scale workloads based upon your own metrics pulled from an external metrics provider like Prometheus. This chart complements the metrics-server chart, which provides resource metrics only.

helm install prometheus-adapter --set rbac.create=true,prometheus.url=http://prometheus-server,prometheus.port=80 stable/prometheus-adapter -n kube-system

After installing the Prometheus adapter, wait 2-3 minutes, then use the following command to check if the adapter is working:

kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq

If the installation is working, it displays a bunch of information.

Steps

HTTP Request Custom Metrics

After ensuring that the installation is working, now we will set up a custom metrics rule to collect HTTP requests.

kubectl edit cm prometheus-adapter -n kube-system

Add an HTTP requests rule as follows:

apiVersion: v1
data:
  config.yaml: |
    rules:
    - seriesQuery: 'http_requests_total{kubernetes_namespace!="",kubernetes_pod_name!=""}'
      resources:
        overrides:
          kubernetes_namespace: {resource: "namespace"}
          kubernetes_pod_name: {resource: "pod"}
      name:
        matches: "^(.*)_total"
        as: "${1}_per_second"
      metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'
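
The adapter only loads its rules at startup, so after editing the ConfigMap you may need to restart it for the new rule to take effect (a sketch, assuming the Helm release above created a Deployment named prometheus-adapter):

kubectl rollout restart deployment prometheus-adapter -n kube-system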

Pod Deployment

Now deploy a Prometheus-instrumented pod (https://prometheus.io/docs/practices/instrumentation/); the following is an example.

Create a file podinfo-dep.yml as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
  namespace: temp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: podinfo
  template:
    metadata:
      labels:
        app: podinfo
      annotations:
        prometheus.io/scrape: 'true'
    spec:
      containers:
      - name: podinfod
        image: stefanprodan/podinfo:0.0.1
        imagePullPolicy: Always
        command:
          - ./podinfo
          - -port=9898
          - -logtostderr=true
          - -v=2
        ports:
        - containerPort: 9898
          protocol: TCP
        resources:
          requests:
            memory: "32Mi"
            cpu: "1m"
          limits:
            memory: "256Mi"
            cpu: "100m"
---
apiVersion: v1
kind: Service
metadata:
  name: podinfo
  namespace: temp
  labels:
    app: podinfo
spec:
  type: NodePort
  ports:
    - port: 9898
      targetPort: 9898
      nodePort: 31198
      protocol: TCP
  selector:
    app: podinfo

If the temp namespace does not already exist, create it with kubectl create namespace temp. Then run kubectl apply to deploy to the Kubernetes cluster:

kubectl apply -f podinfo-dep.yml

Wait 2-3 minutes for metrics to be collected, then run the following command:

kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/temp/pods/*/http_requests_per_second" | jq .

The expected output will look similar to this:

 {
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/temp/pods/%2A/http_requests_per_second"
  },
  "items": [
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "temp",
        "name": "podinfo-58b68656c9-4nx52",
        "apiVersion": "/v1"
      },
      "metricName": "http_requests_per_second",
      "timestamp": "2020-04-21T03:35:53Z",
      "value": "850m",
      "selector": null
    }
  ]
}

HPA Configuration

Now we have everything ready. We can configure an HPA autoscaler for our deployment using the custom metrics.

Create hpa.yml as the following:

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: podinfo
  namespace: temp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  minReplicas: 1
  maxReplicas: 6
  metrics:
  - type: Pods
    pods:
      metricName: http_requests_per_second 
      targetAverageValue: 2000m

Deploy the files to kubernetes using the following command:

kubectl apply -f hpa.yml
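
You can confirm that the HPA sees the custom metric before generating any load (a quick check; the names match the manifests above):

kubectl describe hpa podinfo -n temp
kubectl get hpa podinfo -n temp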

Well done, you are doing great! All the configuration is complete; now it is time to test.

Testing

Assuming you deployed to the temp namespace, you can run the following command to generate traffic:

kubectl run apache-bench -i --tty --rm --image=httpd -- ab -n 500000 -c 1000 http://podinfo.temp.svc.cluster.local:9898/

Open another console and run the following command:

kubectl get hpa -n temp -w

The output will be something like this:

NAME      REFERENCE            TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
podinfo   Deployment/podinfo   850m/2    3         6         3

After a while the output will change to the following:

podinfo   Deployment/podinfo    65755m/2    3         6         6

If everything ran without any problems, it is an indication that the HPA is working to scale out the pods using custom metrics.
Congratulations!

Posted in Software Engineering

Setting up bugzilla using docker

Step 1: Git clone Bugzilla

git clone https://github.com/mozilla-bteam/bmo.git

Step 2: Build the docker image

docker-compose build

Step 3: Bring up the docker image

docker-compose up

Step 4: Connect to the docker bugzilla instance

Here we need to create a new user ‘bugs’ with ALL privileges and the password ‘bugs’, on localhost, for a new database named ‘bugs’.

$ docker exec -it bugzilla_bugzilla_1 su - bugzilla
Last login: Mon May 29 21:11:59 UTC 2017

[bugzilla@1f5025cbfde5 ~]$ mysql -h localhost -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 70
Server version: 5.6.36 MySQL Community Server (GPL)

Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CREATE USER 'bugs'@'localhost' IDENTIFIED BY 'bugs';
mysql> SELECT User FROM mysql.user;
mysql> GRANT ALL PRIVILEGES ON *.* TO 'bugs'@'localhost' WITH GRANT OPTION;
Query OK, 0 rows affected (0.00 sec)

mysql> SHOW GRANTS FOR 'bugs'@'localhost';
+-----------------------------------------------------------------------------------------------------------------------------------------+
| Grants for bugs@localhost                                                                                                                 |
+-----------------------------------------------------------------------------------------------------------------------------------------+
| GRANT ALL PRIVILEGES ON *.* TO 'bugs'@'localhost' IDENTIFIED BY PASSWORD '*F6143BCA58806D14CD1C97998C6792405D8AE8AE' WITH GRANT OPTION    |
+-----------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

[bugzilla@1f5025cbfde5 bugzilla]$ mysql -h localhost -u bugs -D bugs -p
Enter password:
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 55
Server version: 5.6.36 MySQL Community Server (GPL)

Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

Step 5: Now run 'perl checksetup.pl'

[bugzilla@1f5025cbfde5 bugzilla]$ cd /home/bugzilla/devel/htdocs/bugzilla
[bugzilla@1f5025cbfde5 bugzilla]$ pwd
/home/bugzilla/devel/htdocs/bugzilla

[bugzilla@1f5025cbfde5 bugzilla]$ perl checksetup.pl
.
.
.
.
.
.
Adding new table bz_schema...
Initializing bz_schema...
Creating tables...
Converting attach_data maximum size to 100G...
Setting up choices for standard drop-down fields:
 priority bug_status rep_platform resolution bug_severity op_sys
Creating ./data directory...
Creating ./data/assets directory...
Creating ./data/attachments directory...
Creating ./data/db directory...
Creating ./data/extensions directory...
Creating ./data/mining directory...
Creating ./data/webdot directory...
Creating ./graphs directory...
Creating ./skins/custom directory...
Creating ./data/extensions/additional...
Creating ./data/mailer.testfile...
Precompiling templates...done.
Fixing file permissions...
Initializing "Dependency Tree Changes" email_setting ...
Initializing "Product/Component Changes" email_setting ...
Marking closed bug statuses as such...
Creating default classification 'Unclassified'...
Setting up foreign keys...
Setting up the default status workflow...
Creating default groups...
Setting up user preferences...

Looks like we don't have an administrator set up yet. Either this is
your first time using Bugzilla, or your administrator's privileges
might have accidentally been deleted.

Enter the e-mail address of the administrator: deepak.shakya@gyana.space
Enter the login name the administrator will log in with: dshakya
Enter the real name of the administrator: Deepak Shakya
Enter a password for the administrator account:
Please retype the password to verify:
dshakya is now set up as an administrator.
Creating initial dummy product 'TestProduct'...

Now that you have installed Bugzilla, you should visit the 'Parameters'
page (linked in the footer of the Administrator account) to ensure it
is set up as you wish -- this includes setting the 'urlbase' option to
the correct URL.
checksetup.pl complete.

If there are no obvious errors in the above step, then Bugzilla is ready to use.

You can open a browser and use the following address to connect to Bugzilla:

http://localhost:8080/bugzilla/

Bugzilla home page snapshot
Posted in Software Engineering

10 Key Attributes of Cloud-Native Applications

  1. Packaged as lightweight containers: Cloud-native applications are a collection of independent and autonomous services that are packaged as lightweight containers. Unlike virtual machines, containers can scale-out and scale-in rapidly. Since the unit of scaling shifts to containers, infrastructure utilization is optimized.
  2. Developed with best-of-breed languages and frameworks: Each service of a cloud-native application is developed using the language and framework best suited for the functionality. Cloud-native applications are polyglot; services use a variety of languages, runtimes and frameworks. For example, developers may build a real-time streaming service based on WebSockets, developed in Node.js, while choosing Python and Flask for exposing the API. The fine-grained approach to developing microservices lets them choose the best language and framework for a specific job.
  3. Designed as loosely coupled microservices: Services that belong to the same application discover each other through the application runtime. They exist independent of other services. Elastic infrastructure and application architectures, when integrated correctly, can be scaled-out with efficiency and high performance.

Loosely coupled services allow developers to treat each service independent of the other. With this decoupling, a developer can focus on the core functionality of each service to deliver fine-grained functionality. This approach leads to efficient lifecycle management of the overall application, because each service is maintained independently and with clear ownership.

  4. Centered around APIs for interaction and collaboration: Cloud-native services use lightweight APIs that are based on protocols such as representational state transfer (REST), Google’s open source remote procedure call (gRPC) or NATS. REST is used as the lowest common denominator to expose APIs over hypertext transfer protocol (HTTP). For performance, gRPC is typically used for internal communication among services. NATS has publish-subscribe features which enable asynchronous communication within the application.
  5. Architected with a clean separation of stateless and stateful services: Services that are persistent and durable follow a different pattern that assures higher availability and resiliency. Stateless services exist independent of stateful services. There is a connection here to how storage plays into container usage. Persistence is a factor that has to be increasingly viewed in context with state, statelessness and — some would argue — micro-storage environments.
  6. Isolated from server and operating system dependencies: Cloud-native applications don’t have an affinity for any particular operating system or individual machine. They operate at a higher abstraction level. The only exception is when a microservice needs certain capabilities, including solid-state drives (SSDs) and graphics processing units (GPUs), that may be exclusively offered by a subset of machines.
  7. Deployed on self-service, elastic, cloud infrastructure: Cloud-native applications are deployed on virtual, shared and elastic infrastructure. They may align with the underlying infrastructure to dynamically grow and shrink — adjusting themselves to the varying load.
  8. Managed through agile DevOps processes: Each service of a cloud-native application goes through an independent life cycle, which is managed through an agile DevOps process. Multiple continuous integration/continuous delivery (CI/CD) pipelines may work in tandem to deploy and manage a cloud-native application.
  9. Automated capabilities: Cloud-native applications can be highly automated. They play well with the concept of infrastructure as code. Indeed, a certain level of automation is required simply to manage these large and complex applications.
  10. Defined, policy-driven resource allocation: Finally, cloud-native applications align with the governance model defined through a set of policies. They adhere to policies such as central processing unit (CPU) and storage quotas, and network policies that allocate resources to services. For example, in an enterprise scenario, central IT can define policies to allocate resources for each department. Developers and DevOps teams in each department have complete access and ownership to their share of resources.
  7. Defined, policy-driven resource allocation: Finally, cloud-native applications align with the governance model defined through a set of policies. They adhere to policies such as central processing unit (CPU) and storage quotas, and network policies that allocate resources to services. For example, in an enterprise scenario, central IT can define policies to allocate resources for each department. Developers and DevOps teams in each department have complete access and ownership to their share of resources.