Posted in Information Technology

How to Convert From RPM to DEB and DEB to RPM Package Using Alien

As I’m sure you already know, there are plenty of ways to install software in Linux: using the package management system provided by your distribution (aptitude, yum, or zypper, to name a few examples), compiling from source (though somewhat rare these days, it was the only method available during the early days of Linux), or utilizing a low level tool such as dpkg or rpm with .deb and .rpm standalone, precompiled packages, respectively.

Convert RPM to DEB and DEB to RPM Package Using Alien

In this article we will introduce you to alien, a tool that converts between different Linux package formats, with .rpm to .deb (and vice versa) being the most common usage.

This tool, even though its author no longer maintains it and states on his website that alien will probably always remain experimental, can come in handy if you need a certain type of package but can only find the program in another package format.

For example, alien saved the day once when I was looking for a .deb driver for an inkjet printer and couldn’t find any – the manufacturer only provided a .rpm package. I installed alien, converted the package, and before long I was able to use my printer without issues.

That said, we must clarify that this utility should not be used to replace important system files and libraries since they are set up differently across distributions. Only use alien as a last resort if the suggested installation methods at the beginning of this article are out of the question for the required program.

Last but not least, we must note that even though we will use CentOS and Debian in this article, alien is also known to work in Slackware and even in Solaris, besides the first two distributions and their respective families.

Step 1: Installing Alien and Dependencies

To install alien in CentOS/RHEL 7, you will need to enable the EPEL and the Nux Dextop (yes, it’s Dextop – not Desktop) repositories, in that order:

# yum install epel-release
# rpm --import

The latest version of the package that enables this repository is currently 0.5 (published on Aug. 10, 2015). You should check to see whether there’s a newer version before proceeding further:

# rpm -Uvh

then do,

# yum update && yum install alien

In Fedora, you will only need to run the last command.

In Debian and derivatives, simply do:

# aptitude install alien

Step 2: Converting from .deb to .rpm Package

For this test we have chosen dateutils, which provides a set of date and time utilities to deal with large amounts of financial data. We will download the .deb package to our CentOS 7 box, convert it to .rpm and install it:

Check CentOS Version

# cat /etc/centos-release
# wget
# alien --to-rpm --scripts dateutils_0.3.1-1.1_amd64.deb

Convert .deb to .rpm package in Linux

Important: Please note how, by default, alien increases the minor version number of the target package. If you want to override this behavior, add the --keep-version flag.

If we try to install the package right away, we will run into a slight issue:

# rpm -Uvh dateutils-0.3.1-2.1.x86_64.rpm 

Install RPM Package

To solve this issue, we will enable the epel-testing repository and install the rpmrebuild utility to edit the settings of the package to be rebuilt:

# yum --enablerepo=epel-testing install rpmrebuild

Then run,

# rpmrebuild -pe dateutils-0.3.1-2.1.x86_64.rpm

Which will open up your default text editor. Go to the %files section and delete the lines that refer to the directories mentioned in the error message, then save the file and exit:

Convert .deb to Alien Version

When you exit the file you will be prompted to continue with the rebuild. If you choose Y, the file will be rebuilt into the specified directory (different than the current working directory):

# rpmrebuild -pe dateutils-0.3.1-2.1.x86_64.rpm

Build RPM Package

Now you can proceed to install the package and verify as usual:

# rpm -Uvh /root/rpmbuild/RPMS/x86_64/dateutils-0.3.1-2.1.x86_64.rpm
# rpm -qa | grep dateutils

Install Build RPM Package

Finally, you can list the individual tools that were included with dateutils and check their respective man pages:

# ls -l /usr/bin | grep dateutils

Verify Installed RPM Package

Step 3: Converting from .rpm to .deb Package

In this section we will illustrate how to convert from .rpm to .deb. On a 32-bit Debian Wheezy box, let’s download the .rpm package for the zsh shell from a CentOS 6 repository. Note that this shell is not available by default in Debian and derivatives.

# cat /etc/shells
# lsb_release -a | tail -n 4

Check Shell and Debian OS Version

# wget
# alien --to-deb --scripts zsh-4.3.11-4.el6.centos.i686.rpm

You can safely disregard the messages about a missing signature:

Convert .rpm to .deb Package

After a few moments, the .deb file should have been generated and be ready to install:

# dpkg -i zsh_4.3.11-5_i386.deb

Install RPM Converted Deb Package

After the installation, you can verify that zsh is added to the list of valid shells:

# cat /etc/shells

Confirm Installed Zsh Package


In this article we have explained how to convert from .rpm to .deb and vice versa to install packages as a last resort when such programs are not available in the repositories or as distributable source code. You will want to bookmark this article because all of us will need alien at one time or another.

Feel free to share your thoughts about this article using the form below.




Posted in microservices, Software Architecture

Microservices Architectures: What Is Fault Tolerance?

In this article, we discuss an important property of microservices, called fault tolerance.

You Will Learn

  • What is Fault Tolerance?
  • Why is fault tolerance important in microservices architecture?
  • How do you achieve fault tolerance?

What Is Fault Tolerance?

Microservices need to be extremely reliable.

When we build a microservices architecture, there are a large number of small microservices, and they all need to communicate with one another.

Let’s consider the following example:

Basic microservices architecture

Let’s say Microservice5 is down at some point in time.

All the other microservices are directly or indirectly dependent on it, so they all go down as well.

The solution to this problem is to have a fallback in case a microservice fails. This aspect of a microservice is called fault tolerance.

Implementing Fault Tolerance With Hystrix

A popular framework used to implement fault tolerance is Hystrix, a Netflix open source framework. Here is some sample Hystrix code:

@HystrixCommand(fallbackMethod = "fallbackRetrieveConfiguration")
public LimitConfiguration retrieveConfiguration() {
    throw new RuntimeException("Not Available");
}

public LimitConfiguration fallbackRetrieveConfiguration() {
    return new LimitConfiguration(999, 9);
}

Hystrix enables you to specify a fallback method for each of your service methods: if the method throws an exception, the fallback determines what is returned to the service consumer.

Here, if retrieveConfiguration() fails, then fallbackRetrieveConfiguration() is called, which returns a hardcoded LimitConfiguration instance.

Hystrix and Alerts

With Hystrix, you can also configure alerts at the backend. If a service starts failing continuously, you can send alerts to the maintenance team.

Hystrix Is Not a Silver Bullet

Using Hystrix and fallback methods is appropriate for services that handle non-critical information.

However, it is not a silver bullet.

Consider, for instance, a service that returns the balance of a bank account. You cannot provide a default hardcoded value back.

Using Sufficient Redundancy

It is important to design critical services in a fail-safe manner, building enough redundancy into the system to ensure that they do not fail.

Have Sufficient Testing

It is important to test for failure. Bring a microservice down. See how your system reacts.

Chaos Monkey from Netflix is a good example of this.


In this article, we discussed fault tolerance. We saw how fault tolerance is essential in a microservices architecture. We then saw how it can be implemented at the code level using frameworks such as Hystrix.

Posted in Information Technology, microservices, Software Architecture

What Is Service Discovery?

When we talk about a microservices architecture, we refer to a system with a large number of small services, working with each other:

Basic Microservices Architecture

An important feature of such architectures is auto-scaling. The number of instances of a microservice varies based on the system load. Initially, you could have 5 instances of Microservice5, which go up later to 20, 100, or 1000!

Two important questions arise:

  • How does Microservice4 know how many instances of Microservice5 are present, at a given time?
  • In addition, how does it distribute the load among all of them?

Hardcoding URLs Is Not an Option

One way to do this is to hard-code the URLs of Microservice5 instances within Microservice4. That means every time the number of Microservice5 instances changes (with the addition of a new one or the removal of an existing one), the configuration within Microservice4 needs to change. This is a big headache.

Using Service Discovery

Ideally, you want to change the number of instances of Microservice5 based on the load, and make Microservice4 dynamically aware of the instances.

That’s where the concept of Service Discovery comes into the picture.

The component that provides this service is generally called a naming server.

All instances of all the microservices register themselves with the naming server. Whenever a microservice wants to talk to another microservice, it asks the naming server about the available instances.

In the example above, whenever a new instance of Microservice5 is launched, it registers with the naming server. When Microservice4 wants to talk to Microservice5, it asks the naming server: what are the available instances of Microservice5?

Another Example of Service Discovery

Using Service Discovery to identify microservice instances helps keep things dynamic.

Let’s say there is a service for currency conversion:

The CurrencyConversionService (CCS) talks to the ForexService. At a certain point of time, these services have two instances each:

However, at another point in time, there could be five instances of the ForexService (FS):

In that case, CurrencyConversionService needs to make sure that the load is evenly distributed across all the ForexService instances. It needs to answer two important questions:

  • How does the CurrencyConversionService know how many instances of ForexService are active?
  • How does the CurrencyConversionService distribute the load among those active instances?

When a CCS microservice instance is brought up, it registers with Eureka, the naming server used here. The same thing happens with all instances of FS as well.

When a CCS instance needs to talk to an FS instance, it requests information from Eureka. Eureka then returns the URLs of the two FS instances active at that time. Here, the application makes use of a client-side load distribution framework called Ribbon. Ribbon ensures proper load distribution over the two FS instances for events coming in from the CCS.


In this article, we talked about microservice service discovery. We saw that microservices need to be able to communicate with each other. The number of instances of a microservice changes over time, depending on the load. Service discovery enables us to dynamically adapt to new instances and distribute the load among microservices.

Posted in Information Technology, microservices, Software Architecture

Why Centralized Configuration?

When we talk about a microservices architecture, we visualize a large number of small microservices talking to each other. The number of microservices depends on the size of the enterprise.

Basic Microservices Architecture

The interesting part is that each of these microservices can have their own configuration.

Such configurations include details like:

  • Application configuration.
  • Database configuration.
  • Communication Channel Configuration – queues and other infrastructure.
  • URLs of other microservices to talk to.

In addition, each microservice will have a separate configuration for different environments, such as development, QA, and production.

If maintaining a single configuration for a large application is difficult, imagine maintaining configurations for hundreds of microservices in different environments.

Centralized Config Server to the Rescue

That’s where a centralized configuration server steps in.

Configuration for all microservices (for all environments) is stored at one place — a centralized configuration store.

When a microservice needs its configuration, it provides an ID at launch — a combination of the name of the microservice and the environment.

The centralized config server looks up the configuration and provides the configuration to the microservice.

Ensure that the configuration in a centralized config server is secured and has role-based access.

Introducing Spring Cloud Config Server

Spring Cloud Config Server is one of the popular implementations of a cloud config server.

Spring Cloud Config Server enables you to store the configurations for multiple microservices, for different environments, in a Git or SVN repository. A set of folder structures and conventions needs to be followed for the setup to work.

Spring Cloud Config Server

A microservice can connect to the config server, identify itself, and specify the environment it represents. This enables it to get the required configuration.

The setup ensures that the operations team does not need to take time out to configure individual microservices on a case-by-case basis. All they need to worry about is configuring the centralized config server and putting the relevant configurations into the repository.

Automatically Picking Up Configuration Changes

An interesting feature present with the Spring Cloud Config Server is auto refresh. Whenever a change is committed to the git repository, configuration in the application is auto-refreshed.


In this article, we looked at why we need centralized configuration in microservices-based applications. We looked at how the Spring Cloud Config Server manages centralized configuration.

Posted in Information Technology, microservices, Software Architecture

The Need for API Gateways

Handling Cross Cutting Concerns

Whenever we design and develop a large software application, we make use of a layered architecture. For instance, in a web application, it is quite common to see an architecture similar to the following:

Web application architecture

Here, we see that the application is organized into a web layer, a business layer, and a data layer.

In a layered architecture, there are specific parts that are common to all these different layers. Such parts include:

  • Logging
  • Security
  • Performance
  • Auditing

All these features are applicable across layers, hence it makes sense to implement them in a common way.

Aspect Oriented programming is a well established way of handling these concerns. Use of constructs such as filters and interceptors is common while implementing them.

The Need for API Gateways

When we talk about a microservices architecture, we deal with multiple microservices talking to each other:

Basic Microservices Architecture

Where do you implement all the features that are common across microservices?

  • Authentication
  • Logging
  • Auditing
  • Rate limiting

That’s where the API Gateway comes into the picture.

How Does an API Gateway Work?

In microservices, we route all requests — both internal and external — through API Gateways. We can implement all the common features like authentication, logging, auditing, and rate limiting in the API Gateway.

For example, you may not want Microservice3 to be called more than 10 times by a particular client. You could do that as part of rate limiting in the API gateway.

You can implement the common features across microservices in the API gateway. A popular API gateway implementation is the Zuul API gateway.


Just like AOP handles cross cutting concerns in standalone applications, API gateways manage common features for microservices in an enterprise.

Posted in Information Technology, microservices, Software Architecture

Microservices Architecture: The Importance of Centralized Logging

The Need for Visibility

In a microservices architecture, there are a number of small microservices talking to each other:

Basic microservices communication

In the above example, let’s assume there is a problem with Microservice5, due to which Microservice1 throws an error.

How does a developer debug the problem?

They would like to know the details of what’s happening in every microservice from Microservice1 through Microservice5. From such a trace, it should be possible to identify that something went wrong at Microservice5.

The more you break things down into smaller microservices, the more visibility you need into what’s going on in the background. Otherwise, a lot of time and effort needs to be spent in debugging problems.

One of the popular ways to improve visibility is by using centralized logging.

Centralized Logging Using Log Streams

Using log streams is one way to implement centralized logging. The common approach is to stream microservice logs to a common queue. A distributed logging server listens to the queue and acts as the log store, providing search capabilities over the collected traces.

Popular Implementations

Some of the popular implementations include:

  • the ELK stack (Elastic Search, Logstash and Kibana) for Centralized Logging.
  • Zipkin, Open Tracing API, and Zaeger for Distributed Tracing.


In this article, we had a look at centralized logging. We saw that there is a need for high visibility in microservices architecture. Centralized logging provides visibility for better debugging of problems. Using log streams is one way of implementing centralized logging.

Posted in microservices, Software Architecture

Microservices Architecture: Introduction to Auto Scaling

The Load on Applications Varies

The load on your applications varies depending on the time of day, the day of the month, or the month of the year.

Take an e-commerce site, for instance: it may see very high loads during Thanksgiving, up to 20 times the normal load. However, during major sports events such as the Super Bowl or a FIFA World Cup, the traffic could be considerably less, because everybody is busy watching the event.

How can you set up infrastructure for applications to manage varying loads?

It is quite possible that the infrastructure needs to handle 10x the normal load.

If you have on-premise infrastructure, you need a large infrastructure in place to handle peak load.

During periods with less load, a lot of infrastructure would be sitting idle.

Cloud to the Rescue

That’s where cloud comes into the picture. With cloud, you can request more resources when the load is high and give them back to the cloud when you have less load.

This is called Scale Out (creating more instances as the load increases) and Scale In (reducing instances as the load goes down).

How do you build applications that are cloud enabled, i.e. applications that work well in the cloud?

That’s where a microservices architecture comes into the picture.

Introducing Auto Scaling

Building your application using microservices enables you to increase the number of microservice instances during high load, and reduce them during times with less load.

Consider the following example of a CurrencyConversionService:

Basic Microservice Architecture

The CurrencyConversionService talks to the ForexService. The ForexService is concerned with calculating how many INR can result from 1 USD, or how many INR can result from 1 EUR.

The CurrencyConversionService takes a bag of currencies and amounts and produces the total amount in a currency of your choice. For example, it will tell the total worth in INR of 10 EUR and 25 USD.

The ForexService might also be consumed from a number of other microservices.

Scaling Infrastructure to Match Load

The load on the ForexService might be different from the load on the CurrencyConversionService. You might need to have a different number of instances of the CurrencyConversionService and ForexService. For example, there may be two instances of the CurrencyConversionService, and five instances of the ForexService:

Basic Microservice Architecture

At a later point in time, the load on the CurrencyConversionService could be low, needing just two instances. On the other hand, a much higher load on the ForexService could need 50 instances. The requests coming in from the two instances of CurrencyConversionService are distributed across the 50 instances of the ForexService.

That, in essence, is the requirement for auto scaling — a dynamically changing number of microservice instances, and evenly distributing the load across them.

Implementing Auto Scaling

There are a few important concepts involved in implementing auto scaling. The following sections discuss them in some detail.

Naming Server

Naming servers enable something called location transparency. Every microservice registers with the naming server. Any microservice that needs to talk to another microservice asks the naming server for its location.

Whenever a new instance of CurrencyConversionService or ForexService comes up, it registers with the naming server.

Basic Microservice Architecture Auto Scaling

When CurrencyConversionService wants to talk to ForexService, it asks the naming server for available instances.

Implementing Location Transparency

CurrencyConversionService knows that there are five instances of the ForexService.

How does it distribute the load among all these instances?

That’s where a load balancer comes into the picture.

A popular client-side load balancing framework is Ribbon.

Basic Microservice Architecture

Let’s look at a diagram to understand what’s happening:

Load balancing framework

As soon as any instance of CurrencyConversionService or ForexService comes up, it registers itself with the naming server. If CCSInstance2 wants to know the URL of ForexService instances, it again talks to the naming server. The naming server responds with a list of all instances of the ForexService — FSInstance1 and FSInstance2 — and their corresponding URLs.

The Ribbon load balancer does a round-robin among the ForexService instances to balance out the load among the instances.

Ribbon offers a wide variety of load balancing algorithms to choose from.

When to Increase and Decrease Microservices Instances

There is one question we did not really talk about.

How do we know when to increase or decrease the number of instances of a microservice?

That is where application monitoring and container management (Docker for the containers, Kubernetes for orchestration) come into the picture.

Auto scaling microservices

An application needs to be monitored to find out how much load it has. For this, the application has to expose metrics for us to track the load.

You can containerize each microservice using Docker and create an image.

Kubernetes has the capability to manage containers. Kubernetes can be configured to auto scale based on the load. Kubernetes can identify the application instances, monitor their loads, and automatically scale up and down.


In this article, we talked about auto scaling. We looked at important parts of implementing auto scaling — naming server, load balancer, containers (Docker), and container orchestration (Kubernetes).

Posted in Information Technology, Software Engineering

Go Template Pattern With Embedded Structs

Go allows the embedding of one struct inside another, where the embedded struct methods (both value and pointer receiver) can be accessed as if they were methods of the outer struct.

As it turns out, the embedded struct doesn’t have to be a struct — it can be declared as an interface, and/or the implementation can be a function instead of a struct. Declaring an embedded interface makes it possible to have a default implementation provided by a constructor function, while a unit test can provide a test implementation.

There is no limit on the number of structs that can be embedded — this is where the template pattern comes in. The basic idea is as follows:

  1. Define some interfaces for the methods (ideally, one method per interface).
  2. Define a template interface that includes all of the above interfaces.
  3. Define a template function that can operate on an instance of the template interface. It will likely need some additional parameters.
  4. Define one or more structs that implement the single method interfaces.

See this playground code for an example of the above:

One nice thing about this pattern is that it offers both reusability and flexibility:

  1. Methods that can be handled in a common way can be satisfied by embedding a struct that provides the common implementation.
  2. Methods whose operation is unique to the struct can be implemented as an ordinary method.
  3. A constructor function for a struct can set the embedded structs to implementations that are useful under real world conditions.
  4. A unit test can construct the struct by manually setting the embedded structs to stub implementations, as necessary.

Other nice features I like about this pattern include:

  1. The ability to use unexported type aliases so the struct can prevent code outside the package from changing implementations, while still satisfying the template interface needed by the algorithm.
  2. Implementing all the “fiddly bits” of the pattern, such as error handling, once in the algorithm.
  3. Not having to fix the same bugs over and over in multiple services due to the same simple mistakes.

Posted in Software Engineering

Check Out This Metal Foam That Turns Bullets Into Dust on Impact


A new material that is lighter than metal plating but exceptionally tough has just been created by researchers. The composite metal foam (CMF) is so tough that it can reduce bullets to dust upon impact.

The material was created by a North Carolina State University engineer named Afsaneh Rabiei, who has been working on variations of CMFs for a few years. Her most recent research shows that the foam is able to absorb 60 to 70 percent of the total kinetic energy of a projectile—similar to the M2 demonstrated in the video—while still meeting the depth-of-penetration and backplate-deformation rules required of bullet-proof armor.

“The indentation on the back of the CMF after the bullet strike was less than 8 mm in the latest tests,” Rabiei says. Check out a demonstration of the material in action in the video below.



This may mean that our military could soon be suiting up in armor that’s significantly more durable and also significantly lighter. In addition, Rabiei’s research shows that the CMF is effective against X-rays, gamma rays, and neutron radiation. Earlier this year, Rabiei also demonstrated how efficiently it handles heat and fire.

Beyond military applications, the CMF could be used to construct secure nuclear waste facilities, or applied to spacecraft or even medical equipment.

The material is also non-toxic, which means it’s easier to manufacture and recycle.

Posted in News

Researchers Double WiFi Capacity Leading to Dramatically Increased Speeds

Our systems are getting smaller, and they are getting better.

It wasn’t too long ago that researchers from Columbia Engineering created full-duplex radio integrated circuits (ICs) in nanoscale CMOS that allowed transmission and reception in a wireless radio. This system used two antennas—one serving as the transmitter and the other as the receiver.

Today, another breakthrough technology in the field of telecommunications has been unveiled by the team—a similar system that uses only one antenna, which makes the entire system more compact and even more powerful, as it integrates a non-reciprocal circulator and a full-duplex radio on a nanoscale silicon chip.

“This technology could revolutionize the field of telecommunications,” says Harish Krishnaswamy, director of the Columbia High-Speed and Mm-wave IC (CoSMIC) Lab.

Krishnaswamy continues by noting the ‘first ever’ nature of this breakthrough, “our circulator is the first to be put on a silicon chip, and we get literally orders of magnitude better performance than prior work.”

And he comments on the significance, saying that “full-duplex communications, where the transmitter and the receiver operate at the same time and at the same frequency, has become a critical research area and now we’ve shown that WiFi capacity can be doubled on a nanoscale silicon chip with a single antenna. This has enormous implications for devices like smartphones and tablets.”

Silicon Radio Chips

The ability to place the circulator on the same chip as the rest of the radio helps reduce the size of the system, improves performance, and can even introduce new functionalities that are ultimately essential to a full-duplex receiver.

“What really excites me about this research is that we were able to make a contribution at a theoretically fundamental level…and also demonstrate a practical RF circulator integrated with a full-duplex receiver that exhibited a factor of nearly a billion in echo cancellation, making it the first practical full-duplex receiver chip and which led to the publication in the 2016 IEEE ISSCC,” Krishnaswamy says.

The work was published in Nature Communications.