Posted in Software Engineering

ALB Ingress Controller

The AWS ALB Ingress Controller for Kubernetes is a controller that triggers the creation of an Application Load Balancer (ALB) and the necessary supporting AWS resources whenever an Ingress resource is created on the cluster with the kubernetes.io/ingress.class: alb annotation. The Ingress resource configures the ALB to route HTTP or HTTPS traffic to different pods within the cluster. The ALB Ingress Controller is supported for production workloads running on Amazon EKS clusters.

To ensure that your Ingress objects use the ALB Ingress Controller, add the following annotation to your Ingress specification. For more information, see Ingress specification in the documentation.

annotations:
    kubernetes.io/ingress.class: alb

The ALB Ingress controller supports the following traffic modes:

  • Instance – Registers nodes within your cluster as targets for the ALB. Traffic reaching the ALB is routed to NodePort for your service and then proxied to your pods. This is the default traffic mode. You can also explicitly specify it with the alb.ingress.kubernetes.io/target-type: instance annotation.
    Note

    Your Kubernetes service must specify the NodePort type to use this traffic mode.

  • IP – Registers pods as targets for the ALB. Traffic reaching the ALB is directly routed to pods for your service. You must specify the alb.ingress.kubernetes.io/target-type: ip annotation to use this traffic mode.

For other available annotations supported by the ALB Ingress Controller, see Ingress annotations.
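
For example, a minimal Ingress using the IP traffic mode might look like the following sketch; the resource names are illustrative, and the extensions/v1beta1 API version matches the v1.1.x controller era:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app                                  # illustrative
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: my-app-service           # illustrative
          servicePort: 80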

This topic shows you how to configure the ALB Ingress Controller to work with your Amazon EKS cluster.

To deploy the ALB Ingress Controller to an Amazon EKS cluster

  1. Tag the subnets in your VPC that you want to use for your load balancers so that the ALB Ingress Controller knows that it can use them. For more information, see Subnet Tagging Requirement. If you deployed your cluster with eksctl, then the tags are already applied.
    • All subnets in your VPC should be tagged accordingly so that Kubernetes can discover them.
      Key:   kubernetes.io/cluster/<cluster-name>
      Value: shared
    • Public subnets in your VPC should be tagged accordingly so that Kubernetes knows to use only those subnets for external load balancers.
      Key:   kubernetes.io/role/elb
      Value: 1
    • Private subnets in your VPC should be tagged accordingly so that Kubernetes knows that it can use them for internal load balancers.
      Key:   kubernetes.io/role/internal-elb
      Value: 1
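    If you need to apply these tags yourself, a command along the following lines can do it; the subnet ID is illustrative:
      aws ec2 create-tags \
          --resources subnet-0123456789abcdef0 \
          --tags Key=kubernetes.io/role/elb,Value=1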
  2. Create an IAM OIDC provider and associate it with your cluster. If you don’t have eksctl version 0.15.0 or later installed, complete the instructions in Installing or Upgrading eksctl to install or upgrade it. You can check your installed version with eksctl version.
    eksctl utils associate-iam-oidc-provider \
        --region region-code \
        --cluster prod \
        --approve
  3. Create an IAM policy called ALBIngressControllerIAMPolicy for the ALB Ingress Controller pod that allows it to make calls to AWS APIs on your behalf. Use the following AWS CLI command to create the IAM policy in your AWS account. You can view the policy document on GitHub.
    aws iam create-policy \
        --policy-name ALBIngressControllerIAMPolicy \
        --policy-document https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/iam-policy.json

    Take note of the policy ARN that is returned.

  4. Create a Kubernetes service account named alb-ingress-controller in the kube-system namespace, a cluster role, and a cluster role binding for the ALB Ingress Controller to use with the following command. If you don’t have kubectl installed, complete the instructions in Installing kubectl to install it.
    kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/rbac-role.yaml
  5. Create an IAM role for the ALB Ingress Controller and attach the role to the service account created in the previous step. If you didn't create your cluster with eksctl, then follow the equivalent AWS Management Console or AWS CLI instructions in the Amazon EKS documentation.

    The command that follows only works for clusters that were created with eksctl.

    eksctl create iamserviceaccount \
        --region region-code \
        --name alb-ingress-controller \
        --namespace kube-system \
        --cluster prod \
        --attach-policy-arn arn:aws:iam::111122223333:policy/ALBIngressControllerIAMPolicy \
        --override-existing-serviceaccounts \
        --approve
  6. Deploy the ALB Ingress Controller with the following command.
    kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/alb-ingress-controller.yaml
  7. Open the ALB Ingress Controller deployment manifest for editing with the following command.
    kubectl edit deployment.apps/alb-ingress-controller -n kube-system
  8. Add a line for the cluster name after the --ingress-class=alb line. If you're running the ALB Ingress Controller on Fargate, then you must also add the lines for the VPC ID and AWS Region of your cluster. Once you've added the appropriate lines, save and close the file.
        spec:
          containers:
          - args:
            - --ingress-class=alb
            - --cluster-name=prod
            - --aws-vpc-id=vpc-03468a8157edca5bd
            - --aws-region=region-code
  9. Confirm that the ALB Ingress Controller is running with the following command.
    kubectl get pods -n kube-system

    Expected output:

    NAME                                      READY   STATUS    RESTARTS   AGE
     alb-ingress-controller-55b5bbcb5b-bc8q9   1/1     Running   0          56s

To deploy a sample application

  1. Deploy the game 2048 as a sample application to verify that the ALB Ingress Controller creates an Application Load Balancer as a result of the Ingress object. You can run the sample application on a cluster that has Amazon EC2 worker nodes only, one or more Fargate pods, or a combination of the two. The commands that follow assume Amazon EC2 worker nodes; if your cluster has any existing Fargate pods, or you want to deploy the application to new Fargate pods, see Getting Started with AWS Fargate on Amazon EKS.

    Deploy the application with the following commands.

    kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/2048/2048-namespace.yaml
    kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/2048/2048-deployment.yaml
    kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/2048/2048-service.yaml
    kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/2048/2048-ingress.yaml
  2. After a few minutes, verify that the Ingress resource was created with the following command.
    kubectl get ingress/2048-ingress -n 2048-game

    Output:

     NAME           HOSTS   ADDRESS                                                                   PORTS   AGE
     2048-ingress   *       example-2048game-2048ingr-6fa0-352729433.region-code.elb.amazonaws.com   80      24h
    Note

    If your Ingress has not been created after several minutes, run the following command to view the Ingress controller logs. These logs may contain error messages that can help you diagnose any issues with your deployment.

     kubectl logs -n kube-system deployment.apps/alb-ingress-controller
  3. Open a browser and navigate to the ADDRESS URL from the previous command output to see the sample application.
    
     (Image: 2048 sample application)
  4. When you finish experimenting with your sample application, delete it with the following commands.
    kubectl delete -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/2048/2048-ingress.yaml
    kubectl delete -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/2048/2048-service.yaml
    kubectl delete -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/2048/2048-deployment.yaml
    kubectl delete -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/2048/2048-namespace.yaml
Posted in Software Engineering

Kubernetes Liveness and Readiness Probes

Kubernetes uses liveness probes to know when to restart a container. If a container is unresponsive—perhaps the application is deadlocked due to a multi-threading defect—restarting the container can make the application more available, despite the defect. It certainly beats paging someone in the middle of the night to restart a container.[1]

Kubernetes uses readiness probes to decide when the container is available for accepting traffic. The readiness probe is used to control which pods are used as the backends for a service. A pod is considered ready when all of its containers are ready. If a pod is not ready, it is removed from service load balancers. For example, if a container loads a large cache at startup and takes minutes to start, you do not want to send requests to this container until it is ready, or the requests will fail—you want to route requests to other pods, which are capable of servicing requests.

At the time of this writing, Kubernetes supports three mechanisms for implementing liveness and readiness probes: 1) running a command inside a container, 2) making an HTTP request against a container, or 3) opening a TCP socket against a container.

A probe has a number of configuration parameters to control its behaviour, like how often to execute the probe; how long to wait after starting the container to initiate the probe; the number of seconds after which the probe is considered failed; and how many times the probe can fail before giving up. For a liveness probe, giving up means the pod will be restarted. For a readiness probe, giving up means not routing traffic to the pod, but the pod is not restarted. Liveness and readiness probes can be used in conjunction.
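
As a sketch, a container can declare both probes, choosing one of the three mechanisms for each; the name, image, paths, port, and timing values below are illustrative, and an exec probe (running a command inside the container) could be substituted for either mechanism:

spec:
  containers:
  - name: app                    # illustrative
    image: app:1.0               # illustrative
    livenessProbe:
      httpGet:                   # HTTP request against the container
        path: /liveness
        port: 8888
      initialDelaySeconds: 10    # how long to wait after starting the container
      periodSeconds: 30          # how often to execute the probe
      timeoutSeconds: 1          # seconds after which the probe is considered failed
      failureThreshold: 3        # how many failures before giving up
    readinessProbe:
      tcpSocket:                 # open a TCP socket against the container
        port: 8888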

 

Readiness Probes

The Kubernetes documentation, as well as many blog posts and examples, somewhat misleadingly emphasizes the use of the readiness probe when starting a container. This is usually the most common consideration—we want to avoid routing requests to the pod until it is ready to accept traffic. However, the readiness probe will continue to be called throughout the lifetime of the container, every periodSeconds, so that the container can make itself temporarily unavailable when one of its dependencies is unavailable, or while running a large batch job, performing maintenance, or something similar.

If you do not realize that the readiness probe will continue to be called after the container is started, you can design readiness probes that can result in serious problems at runtime. Even if you do understand this behaviour, you can still encounter serious problems if the readiness probe does not consider exceptional system dynamics. I will illustrate this through an example.

The following application, implemented in Scala using Akka HTTP, loads a large cache into memory, at startup, before it can handle requests. After the cache is loaded, the atomic variable loaded is set to true. If the cache fails to load, the container will exit and be restarted by Kubernetes, with an exponential-backoff delay.

import java.util.concurrent.atomic.AtomicBoolean

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.Route
import akka.stream.ActorMaterializer

import scala.concurrent.ExecutionContext
import scala.util.{Failure, Success}

object CacheServer extends App with CacheServerRoutes with CacheServerProbeRoutes {
  implicit val system = ActorSystem()
  implicit val materializer = ActorMaterializer()
  implicit val executionContext = ExecutionContext.Implicits.global

  val routes: Route = cacheRoutes ~ probeRoutes

  Http().bindAndHandle(routes, "0.0.0.0", 8888)

  val loaded = new AtomicBoolean(false)

  val cache = Cache()
  cache.load().onComplete {
    case Success(_) => loaded.set(true)
    case Failure(ex) =>
      // Exit if the cache fails to load; Kubernetes will restart
      // the container, with an exponential-backoff delay.
      system.terminate().onComplete { _ =>
        sys.error(s"Failed to load cache : $ex")
      }
  }
}

The application uses the following /readiness HTTP route for the Kubernetes readiness probe. If the cache is loaded, the /readiness route will always return successfully.

trait CacheServerProbeRoutes {
  def loaded: AtomicBoolean

  val probeRoutes: Route = path("readiness") {
    get {
      if (loaded.get) complete(StatusCodes.OK)
      else complete(StatusCodes.ServiceUnavailable)
    }
  }
}

The HTTP readiness probe is configured as follows:

spec:
  containers:
  - name: cache-server
    image: cache-server/latest
    readinessProbe:
      httpGet:
        path: /readiness
        port: 8888
      initialDelaySeconds: 300
      periodSeconds: 30

This readiness-probe implementation is extremely reliable. Requests are not routed to the application before the cache is loaded. Once the cache is loaded, the /readiness route will perpetually return HTTP 200 and the pod will always be considered ready.

Contrast this implementation with the following application that makes HTTP requests to its dependent services as part of its readiness probe. A readiness probe like this can be useful for catching configuration issues at deployment time—like using the wrong certificate for mutual-TLS, or the wrong credentials for database authentication—ensuring that the service can communicate with all of its dependencies, before becoming ready.

trait ServerWithDependenciesProbeRoutes {
  implicit def ec: ExecutionContext

  def httpClient: HttpRequest => Future[HttpResponse]

  private def httpReadinessRequest(
    uri: Uri,
    f: HttpRequest => Future[HttpResponse] = httpClient): Future[HttpResponse] = {
    f(HttpRequest(method = HttpMethods.HEAD, uri = uri))
  }

  private def checkStatusCode(response: Try[HttpResponse]): Try[Unit] = {
    response match {
      case Success(x) if x.status == StatusCodes.OK => Success(())
      case Success(x) if x.status != StatusCodes.OK => Failure(HttpStatusCodeException(x.status))
      case Failure(ex) => Failure(HttpClientException(ex))
    }
  }

  private def readinessProbe() = {
    val authorizationCheck = httpReadinessRequest("https://authorization.service").transform(checkStatusCode)
    val inventoryCheck = httpReadinessRequest("https://inventory.service").transform(checkStatusCode)
    val telemetryCheck = httpReadinessRequest("https://telemetry.service").transform(checkStatusCode)

    val result = for {
      authorizationResult <- authorizationCheck
      inventoryResult <- inventoryCheck
      telemetryResult <- telemetryCheck
    } yield (authorizationResult, inventoryResult, telemetryResult)

    result
  }

  val probeRoutes: Route = path("readiness") {
    get {
      onComplete(readinessProbe()) {
        case Success(_) => complete(StatusCodes.OK)
        case Failure(_) => complete(StatusCodes.ServiceUnavailable)
      }
    }
  }
}

These concurrent HTTP requests normally return extremely quickly—on the order of milliseconds. The default timeout for the readiness probe is one second. Because these requests succeed the vast majority of the time, it is easy to naively accept the defaults.

But consider what happens if there is a small, temporary increase in latency to one dependent service—maybe due to network congestion, a garbage-collection pause, or a temporary increase in load for the dependent service. If latency to the dependency increases to even slightly above one second, the readiness probe will fail and Kubernetes will no longer route traffic to the pod. Since all of the pods share the same dependency, it is very likely that all pods backing the service will fail the readiness probe at the same time. This will result in all pods being removed from the service routing. With no pods backing the service, Kubernetes will return HTTP 404, the default backend, for all requests to the service. We have created a single point of failure that renders the service completely unavailable, despite our best efforts to improve availability.[2] In this scenario, we would deliver a much better end-user experience by letting the client requests succeed, albeit with slightly increased latency, rather than making the entire service unavailable for seconds or minutes at a time.

If the readiness probe is verifying a dependency that is exclusive to the container—a private cache or database—then you can be more aggressive in failing the readiness probe, with the assumption that container dependencies are independent. However, if the readiness probe is verifying a shared dependency—like a common service used for authentication, authorization, metrics, logging, or metadata—you should be very conservative in failing the readiness probe.

My recommendations are:

  • If the container evaluates a shared dependency in the readiness probe, set the readiness-probe timeout longer than the maximum response time for that dependency.
  • The default failureThreshold count is three—the number of times the readiness probe needs to fail before the pod will no longer be considered ready. Depending on the frequency of the readiness probe—determined by the periodSeconds parameter—you may want to increase the failureThreshold count. The idea is to avoid failing the readiness probe prematurely, before temporary system dynamics have elapsed and response latencies have returned to normal.
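
For example, a readiness probe configured along these lines might look like the following; the values are illustrative and should be derived from the response times of your dependencies:

readinessProbe:
  httpGet:
    path: /readiness
    port: 8888
  periodSeconds: 30
  timeoutSeconds: 5       # longer than the maximum response time of shared dependencies
  failureThreshold: 10    # tolerate temporary system dynamics before removing the pod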

 

Liveness Probe

Recall that a liveness-probe failure will result in the container being restarted. Unlike a readiness probe, it is not idiomatic to check dependencies in a liveness probe. A liveness probe should be used to check if the container itself has become unresponsive.

One problem with a liveness probe is that the probe may not actually verify the responsiveness of the service. For example, if the service hosts two web servers—one for the service routes and one for status routes, like readiness and liveness probes, or metrics collection—the service can be slow or unresponsive, while the liveness probe route returns just fine. To be effective, the liveness probe must exercise the service in a similar manner to dependent services.

Similar to the readiness probe, it is also important to consider dynamics changing over time. If the liveness-probe timeout is too short, a small increase in response time—perhaps caused by a temporary increase in load—could result in the container being restarted. The restart may result in even more load for other pods backing the service, causing a further cascade of liveness probe failures, making the overall availability of the service even worse. Configuring liveness-probe timeouts on the order of client timeouts, and using a forgiving failureThreshold count, can guard against these cascading failures.

A subtle problem with liveness probes comes from the container startup-latency changing over time. This can be a result of network topology changes, changes in resource allocation, or just increasing load as your service scales. If a container is restarted—due to a Kubernetes-node failure, or a liveness-probe failure—and the initialDelaySeconds parameter is not long enough, you risk never starting the application, with it being killed and restarted, repeatedly, before completely starting. The initialDelaySeconds parameter should be longer than the maximum initialization time for the container. To avoid surprises from these dynamics changing over time, it is advantageous to have pods restart on a somewhat regular basis—it should not necessarily be a goal to have individual pods backing a service run for weeks or months at a time. It is important to regularly exercise and evaluate deployments, restarts, and failures as part of running a reliable service.

My recommendations are:

  • Avoid checking dependencies in liveness probes. Liveness probes should be inexpensive and have response times with minimal variance.
  • Set liveness-probe timeouts conservatively, so that system dynamics can temporarily or permanently change, without resulting in excessive liveness probe failures. Consider setting liveness-probe timeouts the same magnitude as client timeouts.
  • Set the initialDelaySeconds parameter conservatively, so that containers can be reliably restarted, even as startup dynamics change over time.
  • Regularly restart containers to exercise startup dynamics and avoid unexpected behavioural changes during initialization.
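
A liveness probe following these recommendations might look like the following sketch; the values are illustrative:

livenessProbe:
  httpGet:
    path: /liveness
    port: 8888
  initialDelaySeconds: 300   # longer than the maximum initialization time for the container
  periodSeconds: 30
  timeoutSeconds: 10         # the same magnitude as client timeouts
  failureThreshold: 3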

 

Conclusion

Kubernetes liveness and readiness probes can greatly improve the robustness and resilience of your service and provide a superior end-user experience. However, if you do not carefully consider how these probes are used, and especially if you do not consider extraordinary system dynamics, however rare, you risk making the availability of the service worse, rather than better.

You may think that an ounce of prevention is worth a pound of cure. Unfortunately, sometimes the cure can be worse than the disease. Kubernetes liveness and readiness probes are designed to improve reliability, but if they are not implemented considerately, Lorin’s Conjecture applies:

Once a system reaches a certain level of reliability, most major incidents will involve:

  • A manual intervention that was intended to mitigate a minor incident, or
  • Unexpected behaviour of a subsystem whose primary purpose was to improve reliability
Posted in Security

Enumeration

Enumeration is more likely an internal process than an external one. In this process, the attacker establishes an active connection with the victim and tries to discover as many attack vectors as possible, which can then be used to exploit the system further.

Many protocols do not encrypt data as it travels across the network, so we can sniff the network in order to gather more data.

Enumeration is used to gather the following data:

  • Running services
  • Service versions
  • Hostnames
  • IP addresses
  • Operating system
  • Network resources
  • SNMP data
  • IP tables
  • Password policies
  • Users and groups
  • Networks and shared paths
  • Route tables
  • Applications and banners
  • Points of entry

Enumeration depends on the services that a system offers. The following services are commonly used to enumerate a system.

NTP Enumeration 

The Network Time Protocol is designed to synchronize the clocks of networked computers. These computers must talk to each other to synchronize their time, which opens them up for enumeration. NTP uses UDP port 123.

NTP enumeration can reveal hosts and IP addresses, as well as system names and operating systems:

nmap -sU -p 123 --script=ntp-info <target>

monlist is a remote command in older versions of NTP that sends the requester a list of the last 600 hosts that connected to that server. For attackers, the monlist query is a great reconnaissance tool; for a localized NTP server, it can help build a network profile.

nmap -sU -p 123 -Pn -n --script=ntp-monlist <target>

SNMP Enumeration 

The Simple Network Management Protocol is used to manage and monitor hardware devices connected to a network. If the default community strings (passwords) are not changed, an attacker can use them to enumerate SNMP as the SNMP manager.

SNMP enumeration can reveal user accounts and devices:

nmap -sU -p 161 --script snmp-brute <target>
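
If the SNMP service still accepts a default community string, the management tree can be walked with snmpwalk; the community string and target here are illustrative:

snmpwalk -v2c -c public <target>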

SMB Enumeration 

The Server Message Block protocol is a network file-sharing protocol that allows applications on a computer to read and write files and to request services from server programs in a computer network. SMB can run directly over TCP/IP or on top of other network protocols. Using SMB, a user can access files or other resources on a remote server; applications can read, create, and update files on the remote server, and communicate with any server program that is set up to receive SMB client requests.

nmap -p 445 --script smb-os-discovery 192.168.1.0/24
nmap -sV -p 445 --script smb-brute 192.168.1.101
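
A companion tool such as enum4linux can wrap several SMB checks (users, shares, OS details) into a single pass; this usage is a sketch:

enum4linux -a <target>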

Active Directory Enumeration 

Windows Active Directory can be enumerated via LDAP (Lightweight Directory Access Protocol) on TCP/UDP ports 389 and 3268. Active Directory contains user accounts and additional information about accounts on Windows PCs.

ad-ldap-enum

BGP Enumeration 

BGP is designed to exchange routing and reachability information among autonomous systems (AS) on the Internet. Routers use BGP to guide packets to their destinations; it makes routing decisions based on paths, network policies, or rule sets configured by a network administrator, and is involved in making core routing decisions. BGP can be used to find all the networks associated with a particular corporation.

nmap --script asn-query --script-args dns=8.8.8.8 <target>

BGPSimple


SMTP Enumeration 

SMTP is the protocol used to deliver email across the Internet. SMTP uses DNS MX records to identify the server that needs to forward or store an email, and it works closely with MTAs (mail transfer agents) to make sure an email reaches the right computer and the right inbox.

Once an email gets inside a network, it is typically delivered internally using POP or IMAP; externally, on the Internet, SMTP is used. Simply sending an email message to a non-existent address at a target domain often reveals useful internal network information through a non-delivery notification (NDN).

Default ports:

SMTP server (outgoing messages):
  Non-encrypted (AUTH) – port 25 (or 587)
  Secure (StartTLS) – port 587
  Secure (SSL) – port 465

POP3 server (incoming messages):
  Non-encrypted (AUTH) – port 110
  Secure (SSL) – port 995

Useful SMTP commands:

VRFY     verify whether a user exists
EXPN     expand; show all recipients of an address
RCPT TO  set the destination address of the email
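
For example, a manual user-enumeration session over telnet might look like the following; the server, user names, and responses are illustrative and depend on the server's configuration:

telnet mail.example.com 25
HELO test.example
VRFY admin
252 2.0.0 admin
VRFY nosuchuser
550 5.1.1 <nosuchuser>: Recipient address rejected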

FINGER Enumeration 

finger is a utility on Linux and other Unix-like operating systems; you can use it to look up information about any user from a remote or local command line.
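
For example, assuming the finger service is exposed on the target (illustrative):

finger @<target>       # list users logged in on the remote host
finger root@<target>   # information about a specific user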

TFTP Enumeration 

The Trivial File Transfer Protocol uses UDP, which is not secure. IT pros and sysadmins typically use TFTP to transfer files remotely, boot systems remotely, and back up configuration files. By sniffing TFTP traffic, we can capture all of this.

RPC Enumeration 

Remote Procedure Call (RPC) is a protocol that one program can use to request a service from a program located on another computer on a network, without having to understand the network's details. By querying an MSRPC endpoint, we can get a list of the services running on the target system.

NetBIOS Enumeration 

NetBIOS (Network Basic Input/Output System) is a program that allows applications on different computers to communicate within a local area network (LAN). NetBIOS runs on port 139 on Windows operating systems. The File and Printer Sharing service must be enabled to enumerate NetBIOS.
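
Nmap's nbstat script is one way to pull NetBIOS names and MAC addresses over UDP port 137; the target is illustrative:

nmap -sU -p 137 --script nbstat <target>
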
SSH Enumeration 

Secure Shell (SSH) is a cryptographic network protocol for operating network services securely over an unsecured network.

osueta
ssh_user_enum

nmap --script ssh2-enum-algos <target>

OS Fingerprinting

The process of determining the operating system used by a host on a network, by analysing packets from that host, looking at signals such as:

  • IP TTL values
  • IP ID values
  • TCP window size
  • TCP options (SYN and SYN+ACK)
  • DHCP requests
  • ICMP requests
  • HTTP packets (User-Agent field)
  • Running services
  • Open port patterns
 
Banner Grabbing

Banner grabbing provides important information about the type and version of software that is running. Telnet is an easy way to do banner grabbing for FTP, SMTP, HTTP, and other services:

telnet <target> <port>
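
For instance, grabbing an HTTP banner; the target and the response shown are illustrative:

telnet <target> 80
HEAD / HTTP/1.0

HTTP/1.1 200 OK
Server: Apache/2.4.18 (Ubuntu)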

Some extra enumeration scripts:

HOSTMAP: nmap -p 80 --script hostmap-bfk.nse scanme.nmap.org

TRACEROUTE: nmap --traceroute --script traceroute-geolocation.nse -p 80 scanme.nmap.org

FTP: nmap --script ftp-brute -p 21 <target>
     ftp <ftp server ip/ftp.server.com>

SSH: nmap --script ssh2-enum-algos <target>

HTTP: nmap -sV --script=http-userdir-enum <target>
      nmap --script http-enum 192.168.1.52
      nmap --script http-enum --script-args http-enum.basepath='pub/' 192.168.1.52
      nmap --script http-title -sV -p 80 192.168.1.0/24

TELNET: nmap -p 23 <ip> --script telnet-encryption

TFTP: nmap -sU -p 69 --script tftp-enum.nse --script-args tftp-enum.filelist=customlist.txt <host>

RPC: nmap --script=msrpc-enum <target>
Enumeration Countermeasures

  • Disable directory indexing for directories that don't contain an index.html (or default.asp).
  • Use robots.txt to prevent indexing by search engines.
  • Use centralized network-admin contact details in WHOIS databases to prevent social-engineering attacks.
  • Disallow DNS zone transfers to untrusted hosts.
  • Remove non-public IP address and hostname details from DNS zone files.
  • Use PTR records only if absolutely needed (for SMTP mail servers and other critical systems that need to resolve both ways).
  • Ensure that unnecessary records (e.g. HINFO) don't appear in DNS zone files.
  • Configure SMTP servers not to send non-delivery notifications, to prevent attackers from enumerating internal mail servers and configuration.
  • Consider and review your IPv6 networks and DNS configuration (if any).
Tools:

Linux:

Windows:

  • SuperScan
  • WinFingerprint
  • IP-Tools
  • NetBIOS Enumerator
  • Hyena
  • Sid2Username (User2SID, SID2User)
  • JXplorer (open-source LDAP browser)
  • MIB Browser (SNMP Management Information Base)