Are You Prepared For Your Next Security Audit?

by Ken Chung | Feb 24, 2022

What To Know About External-Facing Threats

If you were undergoing a security audit, how confidently would you be able to name the public IP address spaces you own? What about the hardware or software assets these IP addresses resolve to, or the types of services they provide to your clients? For businesses that update their internet-facing infrastructure infrequently, these could be difficult questions to answer, thus leaving room for security threats.

Why Internet-Facing?

It’s often necessary to expose applications and endpoints to the internet, whether to host websites and applications on public servers or to accommodate employees working from home or out in the field. However, public-facing IT infrastructure carries notable risks. Each exposed asset increases the chance that a misconfigured or vulnerable system is reachable by malicious actors, and attackers’ use of automated scanning makes finding vulnerable infrastructure highly efficient. The result is a growing attack surface from which cyber threat actors can launch attacks, as seen in the rising number of ransomware cases over the past few years (e.g., Colonial Pipeline).

To make matters worse, many organizations do not adopt a cyber defense-in-depth strategy, such as implementing network segmentation or demilitarized zones (DMZs) to add degrees of separation between external and internal networks. Even when certain services must be exposed, many fail to implement basic firewall rules that limit exposure to the appropriate external entities rather than the entire internet. Simply put, without a complete and frequently updated inventory of internet-facing assets, an organization cannot know what data is available to the public or how attackers could get into its environment. Below are several risk areas frequently observed in cyber insurance claims that are critical to secure in order to continue doing business across the internet:

  • Unauthenticated/unverified email origins
  • Obsolete TLS support 
  • Risky protocols and services open to the internet

Unauthenticated Email

Cyber threat actors have long used email phishing as a technique to gain initial access to a target network, largely thanks to its simplicity and return on investment. Because many phishing attacks come from outside your organization, it’s difficult to know whether an email is really coming from the person who appears to be sending it. Someone claiming to be part of your organization may actually be an outsider looking for unauthorized access to your environment, hoping the recipient is deceived into clicking something malicious.


To combat email spoofing and phishing attempts, implement email authentication to validate incoming email messages. This is done by configuring Sender Policy Framework (SPF) records, DomainKeys Identified Mail (DKIM) checks, and Domain-based Message Authentication, Reporting & Conformance (DMARC) policies:

  • SPF records are DNS records that specify a public list of senders that are approved to send email from your domain. Destination mail servers can check the validity of messages that claim to be from senders in your domain by consulting the SPF record within your DNS. If properly configured, it thwarts an attacker trying to send an email claiming to be from your organization’s domain.
  • DKIM ensures your email messages are not altered in transit between the sending and receiving mail servers. This is done through public-key cryptography: the sending server signs emails using its private key, and the recipient uses the public key published to DNS to verify the source and integrity of the email. Once the signature is verified, the message passes DKIM and is considered authentic and intact.
  • DMARC policies are DNS TXT records that build upon existing SPF and DKIM configurations. They allow you to set rules to monitor, quarantine, or reject emails from sources you do not know, and can offer metrics on your domain’s email sending activity.
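As an illustration, all three mechanisms are published as DNS TXT records. The records below are a sketch for a hypothetical example.com domain; the selector name, IP address, public key, and report mailbox are all placeholders:

```
; SPF: only the domain's MX hosts and one listed address may send as example.com;
; "-all" tells receivers to hard-fail everything else
example.com.                       IN TXT "v=spf1 mx ip4:203.0.113.25 -all"

; DKIM: public key receivers use to verify the sending server's signatures
selector1._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<base64-public-key>"

; DMARC: quarantine messages that fail SPF/DKIM and email aggregate reports
_dmarc.example.com.                IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

A common rollout path is to start DMARC at `p=none` (monitor only), review the aggregate reports, and then tighten to `quarantine` or `reject` once legitimate senders are accounted for.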

Obsolete TLS 

All web applications should require strong encryption, with few exceptions. This is obviously important for applications serving up critical or sensitive information, such as personally identifiable information (PII), but even static informational content should get the same treatment. Encryption protocols such as Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are relatively simple solutions toward this goal. Many businesses are even adopting HTTP Strict-Transport-Security (HSTS), mandating that a web application can only be accessed over TLS/SSL.
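As a sketch, HSTS is a single response header. In nginx, for example, it can be enabled alongside a TLS listener; the domain is a placeholder and the one-year max-age below is a common choice, not a mandate:

```nginx
# nginx: serve over TLS and tell browsers to refuse plain HTTP for a year
server {
    listen 443 ssl;
    server_name example.com;  # placeholder domain

    # includeSubDomains extends the policy to all subdomains; omit it if
    # any subdomain must still be reachable over plain HTTP
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
}
```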

However, older versions of these protocols become obsolete as technologies change and new vulnerabilities are discovered. Network connections employing obsolete encryption protocols are at an elevated risk of exploitation and decryption because they often implement older, cryptographically weaker cipher suites. Requiring external-facing products to support older versions unnecessarily increases both the attack surface and the opportunities for misconfiguration.

According to Qualys SSL Labs, there are six protocols in the SSL/TLS family, as of 2022:

  • SSL v2 is insecure and must not be used. This protocol version is so bad that it can be used to attack RSA keys and sites with the same name even if they are on entirely different servers (the DROWN attack).
  • SSL v3 is insecure when used with HTTP and weak when used with other protocols, which makes it obsolete; it should not be used. The infamous POODLE attack, which allows attackers to gradually decrypt message contents through limited guessing, exploits weaknesses in SSL v3.
  • TLS v1.0 and TLS v1.1 are now considered legacy protocols that should not be used, though they are still widespread in practice. They are susceptible to the BEAST attack, which allows attackers to capture and decrypt HTTPS client-server sessions and obtain authentication tokens. Both TLS v1.0 and v1.1 were deprecated by modern browsers in January 2020 and are disallowed under the PCI DSS compliance standard.
  • TLS v1.2 and TLS v1.3 are both without known security issues at this time.


TLS v1.2 or TLS v1.3 should be your main protocol because these versions offer modern authenticated encryption. Older protocols should be disabled in favor of the latest, more secure versions.
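As a minimal sketch of verifying this from the client side, Python's standard `ssl` module can both enforce a TLS 1.2 floor and report what a server actually negotiates (the host name in the usage example would be a placeholder):

```python
import socket
import ssl

def min_tls12_context() -> ssl.SSLContext:
    """Client context that refuses SSL v3 and the legacy TLS 1.0/1.1."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

def negotiated_version(host: str, port: int = 443) -> str:
    """Connect to host:port and return the TLS version actually negotiated."""
    ctx = min_tls12_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.3"
```

A server still reachable this way with TLS 1.2+ enforced is at least not forcing clients down to obsolete protocols; dedicated scanners such as Qualys SSL Labs report the full picture, including which cipher suites are offered.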

Risky Ports Exposure

In any information technology environment, network ports enable devices to communicate remotely, whether with other systems and software or with other networks entirely. They allow businesses to scale in ways that would not be possible over purely physical connections.

One way to think about network ports is to think about a fence around a home. Every fence has a gate, and that gate can be either open or closed. If the gate is left open, then anyone can enter the yard, thus putting the house at risk. Network ports work the same way. Every IP address (“home”) can have up to 65,535 different ports, and a handful of them provide common services we are used to seeing today:

  • Port 80 for web traffic (HTTP)
  • Ports 20, 21 for File Transfer Protocol (FTP)
  • Port 25 for Simple Mail Transfer Protocol (SMTP)
  • Port 53 for Domain Name System (DNS)
  • Port 443 for HTTP over TLS
  • Port 3389 for Remote Desktop Protocol (RDP)

From a security perspective, not all network ports were intended to communicate over the internet. In fact, leaving the gates “open” to the Internet for many of these network services poses a serious risk to the underlying host and network:

  • Ports 20, 21 are used for the transfer of files between a client and server on a network over the unencrypted File Transfer Protocol (FTP). If exposed to the Internet, clear-text information can be sniffed and spoofed, leading to data loss and unauthorized internal network access.
  • Port 23 is used for remote administration over Telnet, one of the oldest remote console applications today. Because of its unencrypted nature, Telnet is prone to plenty of credential-based and eavesdropping attacks.
  • Port 135 allows one program to request a service from a program located on another computer on a network, using Remote Procedure Call (RPC). It is commonly used by system administrators to remotely perform system maintenance or to use shared resources. However, the service is very chatty, and exposing it to the internet allows threat actors to easily gather enough information to plant backdoors on your network.
  • Ports 137, 138, and 139 carry unauthenticated services that allow applications on computers to communicate over a local area network using the Network Basic Input/Output System (NetBIOS) protocol. When NetBIOS is exposed to the internet, attackers may be able to reach directories and files and gather sensitive information from the exposed devices.
  • Port 389 communicates using the Lightweight Directory Access Protocol (LDAP), allowing clients to query a directory server within your network. When exposed to the internet, LDAP could be used by threat actors to gather and manipulate sensitive information related to your internal network, such as gathering credentials and launching brute force attacks. 
  • Port 445 provides shared access to files, printers, and serial ports between endpoints on a network using the Server Message Block (SMB) protocol. While convenient, SMB has suffered critical vulnerabilities for decades. Whereas a massive spam campaign yields only a few victims that actually pay off, a worm-like infection that keeps spreading itself requires little effort for multiplying returns, and that is exactly what SMB vulnerabilities allow their payloads to do: spread laterally through connected systems. In fact, SMB exploits have been so successful for threat actors that they’ve been used in some of the most visible ransomware outbreaks of the last few years.
  • Port 3389 allows remote connections to a Windows system over a network using the Remote Desktop Protocol (RDP). Since RDP hosts await inbound connections on this port, even the most secure Windows installations can be profiled to reveal their operating system version. Once this is known, social engineering, missing security patches, zero-day exploits, credentials from the dark web, insecure password management, and more could all allow unauthorized access via RDP, so the port should never be left exposed to the internet.
  • Ports 1433, 3306, and 5432 are the default ports for Microsoft SQL Server, MySQL, and PostgreSQL, respectively, all relational databases managed with Structured Query Language (SQL). Threat actors can leverage internet-exposed database interfaces to retrieve sensitive data simply by sending malicious SQL queries.


Like many other cyber risks, there is no one-size-fits-all answer when it comes to limiting risky services from public exposure. Instead, there are layered defense strategies that, working in tandem, offer efficient protection for your network overall:

  • Identify open ports by scanning your entire IT stack, including applications and any network-connected devices to learn whether the configurations are appropriate.
  • Disable the riskiest ports. The list above is a good starting point. A best practice is to close any ports that do not provide necessary services for your organization.
  • Perform network segmentation. Segment externally facing servers and services from the rest of the network with a demilitarized zone (DMZ) or a separate hosting infrastructure. This limits how far threat actors can reach in the event of a perimeter compromise.
  • Employ virtual private networks (VPNs) to tunnel external traffic. Putting remote access services (e.g., RDP, SSH) behind VPN tunnels forces threat actors to authenticate to the VPN gateway rather than having direct access to the internal network.
  • Employ multi-factor authentication (MFA) on both remote access endpoints and privileged resources across your network. An additional obstacle in the event of a credential compromise is a strong deterrent against most attacks.
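The first step above, identifying open ports, can be sketched with Python's standard `socket` module. This is an illustrative connect-scan, not a replacement for a dedicated scanner, and should only be run against hosts you are authorized to test:

```python
import socket

# Ports called out earlier in this article as risky when internet-exposed
RISKY_PORTS = {21: "FTP", 23: "Telnet", 135: "RPC", 139: "NetBIOS",
               389: "LDAP", 445: "SMB", 1433: "MSSQL", 3389: "RDP"}

def exposed_ports(host: str, ports=RISKY_PORTS, timeout: float = 1.0) -> list:
    """Return the ports on `host` that accept a TCP connection."""
    found = []
    for port in sorted(ports):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append(port)
    return found
```

Tools such as Nmap cover far more ground (UDP, service fingerprinting, all 65,535 ports), but the principle is the same: attempt a connection and note what answers. Anything that answers from the open internet deserves a reason to exist there.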

How Resilience Protects Its Insureds

For Cyber Primary Care clients, Resilience conducts Attack Surface Monitoring and provides alerts on the security risks that cyber adversaries might try to take advantage of. The Security Posture Review is available through the Resilience Platform and covers three critical areas of a client’s attack surface:

  • Email Security – Monitoring of the techniques and technology that secure company email, such as DKIM, DMARC, and SPF, as well as the status of your Email Security Gateway.
  • External Security – Network security rules and configurations about potential vulnerabilities of external-facing assets, internet-exposed databases, remote access hosts, and SSL/TLS versions that are deprecated or out-of-date.
  • Dark Web Exposure – Leaked or exposed company data on the Deep and Dark Web such as recently leaked plaintext passwords.

If you are interested in becoming a Cyber Primary Care customer, speak to your broker or reach out to our team.

About the Author

Ken Chung
Senior Security Solutions Engineer

Ken Chung is a Senior Security Solutions Engineer with Resilience. He has 11 years of industry experience across offensive and defensive cyber operations, and has worked for a number of DoD agencies as well as JPMorgan Chase. He has experience in signals intelligence analysis, computer network exploitation (CNE), threat intelligence, and application/operational threat modeling. Chung holds a BS in Electrical/Computer Engineering from Rutgers University, an MS in Cybersecurity from University of Maryland Global Campus, and is GIAC certified in penetration testing (GPEN) and incident response (GCIH).