Security+ Boot Camp Study Guide






Security+ Study Notes


The Cyber Security industry is experiencing a massive talent shortage. Over one million information security jobs are expected to go unfilled.


When you have familiarized yourself with the 55 pages of this document I recommend you check out my Security+ Certification Boot Camp. It’s truly world class. There are currently over 6,000 students enrolled in my security courses across 125 countries! I can help you get certified and get ahead!


The Security+ Security Boot Camp!

Includes a bank of over 200 Security+ exam prep questions

Hands-on Security+ labs accessible from any device

55 pages of Security+ exam study notes

11+ hours of Security+ training video

Enter discount code “SECURITYPLUSNOW” at checkout.







Domain 1 – Network Security


Networking 101


The OSI in the OSI Model stands for Open Systems Interconnection. This is a standard reference model that defines how open systems communicate with one another.


The TCP/IP model is a condensed, practical counterpart to the OSI model; its layers map roughly onto the OSI layers and represent how the functions the OSI model prescribes are actually implemented.

The Physical layer is responsible for actually moving the bits around, in the form of electrical pulses over wires or radio waves over wireless links. The Link layer is responsible for transferring data between a pair of endpoints on the network and for detecting errors introduced at the Physical layer. Ethernet is a common link layer protocol for wired communications, much as LTE is a standard for wireless cellular data transfer.

The Network layer of the model is responsible for routing traffic beyond local area networks and across the internet. The Transport layer is handled by TCP and UDP: TCP (Transmission Control Protocol) is a connection-oriented protocol that ensures packet delivery, while UDP (User Datagram Protocol) is a connectionless, best-effort communication mechanism. The Application layer provides the protocols that exchange the actual payload data between useful programs such as browsers, e-mail servers, web servers and the like.
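To make UDP's connectionless, best-effort model concrete, here is a minimal Python sketch (a loopback demo for illustration only; the port is assigned by the OS and the payload is arbitrary). Note there is no handshake before sending, in contrast to TCP's connection setup:

```python
# UDP demo: a datagram is sent with no connection setup and no
# delivery guarantee ("fire and forget"), unlike TCP.
import socket

# Receiver: bind a UDP socket to an OS-assigned loopback port.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
port = recv_sock.getsockname()[1]

# Sender: no connect() handshake is required before sending.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"hello", ("127.0.0.1", port))

data, addr = recv_sock.recvfrom(1024)
print(data)  # b'hello'

send_sock.close()
recv_sock.close()
```

A TCP equivalent would require a listening socket, an accept, and a three-way handshake before any payload moved, which is exactly the overhead that buys TCP its delivery guarantees.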



Modern network topologies consist of Core, Distribution and Access layers. The core layer is a switching layer that moves data quickly in a LAN environment. Hanging off of core switches you typically have routers that segment IP networks, and within those IP networks you have switches with VLANs that partition Ethernet networks into separate broadcast domains.


Network Address Translation (NAT) is a methodology of remapping one IP address space into another by modifying the network address information in IP packet headers while they are in transit across a traffic routing device. The immediate benefit of NAT is that it allows a single internet connection, with a single public IP address, to be shared by many hosts.
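The remapping can be pictured as a translation table. The following is a toy sketch only (not a real NAT implementation; the public address is from the TEST-NET-3 documentation range and the port numbering is an assumption): each outbound flow from a private address is rewritten to the shared public IP with a unique source port, in the style of port address translation.

```python
# Toy PAT-style NAT table: many private (ip, port) pairs share one
# public IP by being assigned distinct public source ports.
PUBLIC_IP = "203.0.113.5"   # example address, TEST-NET-3 range

nat_table = {}              # (private_ip, private_port) -> public_port
next_port = 40000           # hypothetical starting port

def translate_outbound(private_ip, private_port):
    """Rewrite an outbound flow's source to the shared public IP/port."""
    global next_port
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next_port
        next_port += 1
    return (PUBLIC_IP, nat_table[key])

# Two internal hosts share the single public address:
print(translate_outbound("192.168.1.10", 51000))  # ('203.0.113.5', 40000)
print(translate_outbound("192.168.1.11", 51000))  # ('203.0.113.5', 40001)
```

The router keeps this table so that return traffic arriving at a given public port can be mapped back to the correct private host.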


LAN Security


A virtual LAN (VLAN) is a broadcast domain that is segmented in a computer network at the data link layer.


The Spanning Tree Protocol (STP) is a network protocol that allows for a loop-free topology for Ethernet networks which have redundant physical connections.



VACLs (VLAN access control lists) provide access control for packets within and between VLANs.


Terminal Access Controller Access-Control System (TACACS, usually pronounced like tack-axe) refers to a family of related protocols handling remote authentication and related services for networked access control through a centralized server. The original TACACS protocol, which dates back to 1984, was used for communicating with an authentication server, common in older UNIX networks; it spawned related protocols:


Extended TACACS (XTACACS) is a proprietary extension to TACACS introduced by Cisco Systems in 1990 without backwards compatibility to the original protocol. TACACS and XTACACS both allow a remote access server to communicate with an authentication server in order to determine if the user has access to the network.

Terminal Access Controller Access-Control System Plus (TACACS+ ) is a protocol developed by Cisco and released as an open standard beginning in 1993. Although derived from TACACS, TACACS+ is a separate protocol that handles authentication, authorization, and accounting (AAA) services. TACACS+ and other flexible AAA protocols have largely replaced their predecessors.


Remote Authentication Dial-In User Service (RADIUS) is a networking protocol that provides centralized Authentication, Authorization, and Accounting (AAA or Triple A) management.


Software-defined networking (SDN) is an approach to computer networking that allows network administrators to manage network services through abstraction of higher-level functionality.


The Smurf attack is a distributed denial-of-service attack in which large numbers of Internet Control Message Protocol (ICMP) packets, with the intended victim’s spoofed source IP, are broadcast to a computer network using an IP broadcast address.




When talking about firewalls it is important to take session context into account. This context primarily defines the difference between ‘stateful’ and ‘stateless’ firewalls, which we will cover in this section.


The first firewalls were stateless, meaning that they did not track any TCP or UDP session information for inbound or outbound traffic. They were essentially simple packet filters that examined only source and destination IP addresses and ports to decide what traffic was or wasn’t allowed in either direction.

So why is this important? Well, let’s say I’m behind the firewall and make a request to a website. The firewall may very well be configured to allow me outbound on port 80, but for security reasons it will likely not allow inbound traffic on the negotiated return port. The ramification is that the web server’s response packets are dropped by the firewall, and the client making a legitimate request never receives a response. Not a very functional solution; these first-generation firewalls were primitive in nature.



In contrast to stateless firewalls, stateful firewalls actually track session state information, specifically the existence of a TCP session. Hearkening back to our previous example: if a user behind a stateful firewall made a request outbound on port 80, the packet would still make it out successfully, just as it would with the stateless firewall. The difference concerns the server’s response to the request. Even though the firewall may be blocking unsolicited inbound traffic on all ports, it will allow the inbound response because it recognizes it as a reply to a legitimate request that originated from inside the firewall.

This capability is referred to as ‘stateful packet inspection’.
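The idea can be sketched as a toy session table (illustrative only; real stateful firewalls track far more, such as TCP flags, sequence numbers and timeouts, and the addresses here are examples): outbound connections are recorded, and inbound packets are permitted only if they match a recorded session.

```python
# Toy sketch of stateful packet inspection: inbound traffic is allowed
# only when it is the reverse of a session initiated from inside.
sessions = set()  # (inside_ip, inside_port, outside_ip, outside_port)

def outbound(src_ip, src_port, dst_ip, dst_port):
    """Record the outbound session so return traffic is recognized."""
    sessions.add((src_ip, src_port, dst_ip, dst_port))

def allow_inbound(src_ip, src_port, dst_ip, dst_port):
    """Permit inbound traffic only if it replies to a tracked session."""
    return (dst_ip, dst_port, src_ip, src_port) in sessions

outbound("10.0.0.5", 49152, "93.184.216.34", 80)  # client -> web server
print(allow_inbound("93.184.216.34", 80, "10.0.0.5", 49152))  # True: reply
print(allow_inbound("93.184.216.34", 80, "10.0.0.5", 40000))  # False: unsolicited
```

A stateless packet filter has no `sessions` table at all, which is precisely why it must either block legitimate replies or open the return ports permanently.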


Application layer firewalls represent the newest generation of firewall capabilities. Where stateful firewalls primarily operate from the physical layer of the OSI model up to the session layer, application layer firewalls have the ability to inspect all seven layers.



Intrusion Detection Systems


Intrusion detection technology is offered in multiple flavors: it is either network based or host based, and it can be detective or preventive in nature.


NIDS (Network Intrusion Detection System) was an early iteration of the technology, intended to supplement firewalls by providing deep packet inspection that could identify either statistical anomalies or signature-based activity.


NIPS (Network Intrusion Prevention System) is very similar to NIDS but is deployed inline, much like a firewall, so that it can actually drop packets preventatively.


HIDS (Host Intrusion Detection System) typically functions as an agent running on a server, ‘listening’ for malicious activity.


Types of port scans:

Vertical – multiple ports on the same host

Horizontal – the same port across multiple hosts


Hybrid – combination of the above
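The three target patterns above can be sketched in a few lines (the hosts are hypothetical RFC 5737 documentation addresses and the port list is arbitrary):

```python
# Vertical: walk many ports on one host.
# Horizontal: walk one port across many hosts.
# Hybrid: both dimensions at once.
hosts = ["198.51.100.1", "198.51.100.2", "198.51.100.3"]
ports = [22, 80, 443]

vertical   = [(hosts[0], p) for p in ports]          # one host, many ports
horizontal = [(h, 80) for h in hosts]                # many hosts, one port
hybrid     = [(h, p) for h in hosts for p in ports]  # every combination

print(len(vertical), len(horizontal), len(hybrid))   # 3 3 9
```

Horizontal scans are often harder for a single-host IDS to notice, since each individual host sees only one probe; correlating them requires a network-wide view.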


A clandestine user is a privileged user who tries to cover up his or her tracks.


Misfeasors are users (typically insiders) who have permissions but are misusing their access in some way.


A masquerader is a person, typically an outsider, who tries to impersonate an authorized user.


False positives are very common in IDS systems. A false positive is any reading that falsely identifies behavior as anomalous or malicious.


Statistical based IDS systems establish baselines by being set to ‘listen’ or monitor ‘normal’ traffic patterns for a particular period of time. Once a baseline is established the IDS will look for traffic patterns that fall outside of the established baseline.
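A minimal sketch of the baseline approach (the traffic figures and the three-sigma threshold are invented for illustration): compute the mean and standard deviation of "normal" traffic, then flag samples that fall too far outside that baseline.

```python
# Statistical anomaly detection sketch: establish a baseline from
# observed "normal" traffic, then flag samples beyond 3 standard
# deviations from the baseline mean.
import statistics

baseline = [100, 110, 95, 105, 102, 98, 104, 101]  # requests/min (hypothetical)
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(sample, threshold=3.0):
    """Flag traffic that deviates too far from the established baseline."""
    return abs(sample - mean) / stdev > threshold

print(is_anomalous(103))  # within normal variation
print(is_anomalous(500))  # far outside the baseline
```

Note the tradeoff the surrounding text alludes to: a tight threshold catches more attacks but produces more false positives, and a baseline recorded while an attack was underway will treat that attack as "normal."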



Transport Layer Security


SSL is independent of the application layer protocol it protects, but it is typically used to secure HTTP.

The SSL Handshake Protocol Establishes the secure channel between the client and the server and provides the keys and the algorithm information to the SSL Record Protocol.


TLS is sometimes referred to as SSL 3.1, so it is more or less an advancement of SSLv3. TLS supersedes SSL 3.0 and should be used in new development; SSL has a lot of flaws and is susceptible to known attacks such as the POODLE attack. TLS 1.2 was long the current version; the latest version is TLS 1.3, published in 2018.
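In practice this means clients should refuse the legacy protocol versions explicitly. A short sketch using Python's standard ssl module (a client-side configuration example, not tied to any particular server):

```python
# Build a client TLS context that refuses SSLv2/SSLv3 (excluded by the
# secure defaults) and additionally requires TLS 1.2 or newer.
import ssl

context = ssl.create_default_context()            # secure modern defaults
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse anything older

print(context.minimum_version)
```

A connection wrapped with this context to a server that only speaks SSLv3 or TLS 1.0 will fail the handshake rather than silently downgrade, which is the behavior attacks like POODLE exploited.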



The SSH Transport Layer Protocol handles initial key exchange as well as server authentication, and sets up encryption, compression and integrity verification. It exposes to the upper layer an interface for sending and receiving plaintext packets.


The User Authentication layer handles client authentication and provides a number of authentication methods, including password, public key and keyboard-interactive. It also supports GSSAPI authentication methods, which allow authentication extensibility for mechanisms such as Kerberos v5 or NTLM.


The SSH Connection Protocol defines channels and channel requests. Standard channel types include shell, direct-tcpip for local port forwarding and forwarded-tcpip for remote port forwarding.





IPsec Notes


IPsec is a security protocol suite that can provide encapsulation, encryption and authentication services over public networks.


It forms the base protocol for many VPN technologies in use today.

It can be orchestrated through the use of security policies.


Authentication Header (AH) is a member of the IPsec protocol suite. AH guarantees connectionless integrity and data origin authentication of IP packets. Further, it can optionally protect against replay attacks by using the sliding window technique and discarding old packets.


Encapsulating Security Payload (ESP) is a member of the IPsec protocol suite. In IPsec it provides origin authenticity, integrity and confidentiality protection of packets.


In transport mode, only the payload of the IP packet is usually encrypted and/or authenticated. The routing is intact, since the IP header is neither modified nor encrypted; however, when the authentication header is used, the IP addresses cannot be translated.


In tunnel mode, the entire IP packet is encrypted and/or authenticated. It is then encapsulated into a new IP packet with a new IP header. Tunnel mode is used to create virtual private networks for network-to-network communications.
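The difference between the two modes can be pictured with a purely conceptual toy (this is not real IPsec; packets are modeled as dictionaries, `enc(...)` stands in for ESP encryption, and all addresses are documentation examples): transport mode keeps the original IP header and protects only the payload, while tunnel mode wraps the entire original packet inside a new packet between the gateways.

```python
# Conceptual contrast of IPsec transport vs. tunnel mode.
original = {"src": "10.0.0.1", "dst": "10.0.1.1", "payload": "secret data"}

def transport_mode(pkt):
    # Original header is kept intact; only the payload is protected.
    return {"src": pkt["src"], "dst": pkt["dst"],
            "esp_payload": f"enc({pkt['payload']})"}

def tunnel_mode(pkt, gw_src, gw_dst):
    # The whole original packet, header included, becomes the protected
    # payload of a new packet between the two gateways.
    return {"src": gw_src, "dst": gw_dst, "esp_payload": f"enc({pkt})"}

t = tunnel_mode(original, "198.51.100.1", "203.0.113.1")
print(t["src"], t["dst"])  # outer header shows only gateway addresses
```

This is why tunnel mode suits network-to-network VPNs: an observer on the public network sees only the gateway addresses, while the internal source and destination are hidden inside the encrypted payload.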


The concept of a security association (SA) is fundamental to IPsec. An SA is a relationship between two or more entities that describes how the entities will use security services to communicate securely.


Network Access Control


Network Access Control allows for advanced LAN and VPN connection controls, which help ensure that clients requesting to connect to the trusted network are secure.


NAC Components:


Policy Server

Access Requestor (Supplicant)

Network Access Server


802.1X – a network access control standard that defines the encapsulation of EAP (the Extensible Authentication Protocol).

Each protocol that uses EAP defines a way to encapsulate EAP messages within that protocol’s messages.


802.1X is a port based authentication and access control protocol that allows for authentication & authorization of wired and wireless devices. 802.1X enforces policy compliance, controlling port access and tracking users.



With 802.1X port-based enforcement, the NPS server instructs an 802.1X authenticating switch or an 802.1X-compliant wireless access point to place noncompliant 802.1X clients on a remediation network. The NPS server limits network access by the client to the remediation network by applying IP filters or a virtual LAN identifier to the connection. 802.1X enforcement provides strong network restriction for all computers accessing the network by using 802.1X-capable network access servers.



VPN-based NACs are a bit more proprietary in how they enforce NAC controls. They can invoke 802.1X, but they don’t have to in order to achieve the desired results.


Advanced IDS


It’s important to have standard protocol and message format models for different IDS systems to be able to communicate with one another and with other systems in a predictable way.

The Intrusion Detection Message Exchange Format (IDMEF) describes an XML-based data model for representing information exported by IDS systems; its requirements are outlined in RFC 4766 and the data model itself in RFC 4765.

The Intrusion Detection Exchange Protocol (IDXP) is an application-level protocol for exchanging data between IDS systems. IDXP supports mutual authentication, integrity and confidentiality over a connection-oriented protocol such as TCP.



There are two types of IDS systems: statistical anomaly detection and rule-based IDS. Statistical anomaly detection involves the use of baselines that identify sets of ‘normal’ network behaviors and traffic. Several statistical model types are employed to make these baseline measurements.


Rule-based IDS takes a historical approach, gathering historical access information and detecting usage patterns and deviations from those patterns. These could be patterns of users, programs, privileges, timeslots, terminals, etc. Current behavior is observed and then matched against rules to determine conformity. Another type of IDS model involves the use of traffic signatures to analyze network traffic.
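A simplified sketch of rule matching (the events and the off-hours rule are hypothetical examples): each rule is a named predicate over an event, and any match produces an alert.

```python
# Rule-based detection sketch: match observed behavior against rules
# describing deviations from expected usage patterns.
events = [
    {"user": "alice", "action": "login", "hour": 10},
    {"user": "alice", "action": "login", "hour": 3},   # off-hours
]

rules = [
    ("off-hours login", lambda e: e["action"] == "login"
                                  and not 8 <= e["hour"] <= 18),
]

def check(event):
    """Return the names of all rules the event violates."""
    return [name for name, pred in rules if pred(event)]

for e in events:
    print(e["hour"], check(e))
```

Unlike the statistical approach, the rules here are explicit and auditable, but they only catch patterns someone thought to write down.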








Domain 2 – Compliance and Operational Security


Principles of Security


This domain covers the foundational principles of information security: confidentiality, integrity and availability. We will also identify common security services and the mechanisms used to implement those services.


Confidentiality refers to the privacy of the data. For instance, when I access my bank account online I have an expectation of confidentiality. Others should not be able to see my bank account information over the network.


When we talk about data integrity, we mean that we expect the data has not been manipulated in any way. Using the online banking example, integrity refers to my expectation that my bank account information has not been changed outside of legitimate banking transactions.


Availability implies that we should have reasonable access to the data. Hackers could impact the availability of my bank account data, for instance, by conducting a DDoS (Distributed Denial of Service) attack against my banking provider’s website, preventing me from accessing my account.




Authentication Services intend to associate online activities with a user’s identity.


Authorization Services – empower users with particular privileges based on their identity. These privileges represent activities that users can perform on a given system; typically these are CRUD operations (Create, Read, Update, Delete) on certain datasets.


Audit Services – Auditing services track activities of users and through detective mechanisms administrators can identify if unauthorized access or changes to the system or the data have taken place. Examples of auditing mechanisms include Linux system logs, Windows Operating System logs, firewall logs and application logs. In order for these types of controls to be effective there should be either automated or manual review of the logs to address any unauthorized actions.

Network Security Services such as firewalls can provide for availability by blocking DoS traffic. Sometimes these services are provided at the ISP level.


Physical Security Services can help ensure confidentiality, integrity and availability. If unauthorized personnel with malicious intent accessed the physical servers they could bring them offline (availability), copy the data (confidentiality) or even change the data (integrity).



HR and Personnel Security


People are arguably the most important link in the chain when it comes to security. Making sure employees, contractors and business partners protect corporate data and that their privacy is protected as well is the responsibility of all involved.


An insider threat is a malicious threat to an organization that comes from people within the organization, such as employees, former employees, contractors or business associates, who have inside information concerning the organization’s security practices, data and computer systems.


Background checks are often requested by employers on job candidates for employment screening, especially for candidates seeking a position that requires high security or a position of trust, such as in a school, hospital, financial institution, airport, or government agency.


Employees typically must relinquish some of their privacy while at the workplace, but how much they must do so can be a contentious issue.


An Acceptable Use Policy (AUP), also called an acceptable usage policy or fair use policy, is a set of rules applied by the owner, creator or administrator of a network, website, or service that restricts the ways in which the network, website or system may be used and sets guidelines as to how it should be used.


Best practice for employees securing data while traveling is to keep devices on their person rather than in checked baggage, shield passwords from view, clear the browser cache, and avoid untrusted Wi-Fi networks. Upon returning, you should change your passwords as a best practice.


In data governance groups, responsibilities for data management are increasingly divided between the business process owners and information technology (IT) departments. Two functional titles commonly used for these roles are Data Steward and Data Custodian.

Data Stewards are commonly responsible for data content, context, and associated business rules. Data Custodians are responsible for the safe custody, transport, storage of the data and implementation of business rules. Often Data Stewards are also described as Data Owners.

Simply put, Data Stewards are responsible for what is stored in a data field, while Data Custodians are responsible for the technical environment and database structure. Common job titles for data custodians are Database Administrator (DBA), Data Modeler, and ETL Developer.



Data Privacy


Data Privacy is a key consideration for business as they leverage personal information for business purposes.


The HIPAA Privacy regulations require that health care providers and organizations, as well as their business associates, develop and follow procedures that ensure the confidentiality and security of protected health information (PHI) when it is transferred, received, handled, or shared. This applies to all forms of PHI: paper, oral, electronic, etc.




The PCI DSS Prioritized Approach defines six milestones for protecting cardholder data:

1. Remove sensitive authentication data and limit data retention – This milestone targets key risk areas for those who have been compromised: if you don’t need it, don’t store it.

2. Protect systems and networks, and be prepared to respond to a system breach – This milestone targets the points of access in most compromises, and the response to them.

3. Secure payment card applications – Controls for applications, application processes, and application servers; these have been shown to be easy prey when weaknesses exist.

4. Monitor and control access to your systems – This milestone provides controls that let you detect the who, what, when, and how of access to your network and cardholder data environment, a blind spot for many who have been compromised.

5. Protect stored cardholder data – If you must store Primary Account Numbers (PAN), this milestone targets key protection mechanisms for that stored data.

6. Finalize remaining compliance efforts and ensure all controls are in place.




There are two types of expectations of privacy:

Subjective expectation of privacy – a certain individual’s opinion that a certain location or situation is private; varies greatly from person to person


Objective, legitimate, reasonable expectation of privacy – An expectation of privacy generally recognized by society.


TOR – The Onion Router – allows users to browse the web anonymously by relaying encrypted traffic through a peer-based network of relay nodes.


Risk Management


Risk Management is what fundamentally drives information security programs.


Much like technology doesn’t exist for technology’s sake, security doesn’t exist for security’s sake. Like technology, security in and of itself is a means to an end. We have to ask ourselves what are we trying to secure? What is it worth? How can these things be compromised? How much does it cost to protect it? What are the ramifications of these assets being compromised? What effect does it have on a company’s short term and long term top-line revenue growth?


This is what Risk Management is all about. Identifying assets, measuring their tangible and intangible values, taking regulatory and corporate compliance into account, identifying mitigating controls and costs associated with those controls.



Quantitative risk assessment comes into play when we have the ability to map a dollar amount to a specific risk.


Likelihood – the probability that a given risk will actually occur.

SLE (Single Loss Expectancy) – the monetary value expected from a single occurrence of a risk on an asset. It is related to risk management and risk assessment. Single loss expectancy is mathematically expressed as SLE = Asset Value (AV) × Exposure Factor (EF), where the exposure factor is the fraction of the asset’s value lost in one incident.

ARO (Annualized Rate of Occurrence) – the expected number of occurrences per year, otherwise known as threat frequency. Multiplying the two gives the Annualized Loss Expectancy: ALE = SLE × ARO.

MTTR (Mean Time to Repair) – the average time required to repair a failed component or system.

MTBF (Mean Time Between Failures) – the predicted elapsed time between inherent failures of a system during operation. MTBF can be calculated as the arithmetic mean (average) time between failures of a system. (MTTF, Mean Time to Failure, is the analogous measure for non-repairable components.)

The impact scale is organizationally defined (for example, a one-to-five scale, with five being the highest impact on project objectives such as budget, schedule, or quality).
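A worked example of the quantitative formulas SLE = AV × EF and ALE = SLE × ARO (the figures are invented purely for illustration):

```python
# Quantitative risk assessment: compute single and annualized loss
# expectancy from asset value, exposure factor and occurrence rate.
asset_value = 100_000        # AV: value of the asset in dollars
exposure_factor = 0.25       # EF: fraction of value lost per incident
aro = 2                      # ARO: expected incidents per year

sle = asset_value * exposure_factor   # Single Loss Expectancy
ale = sle * aro                       # Annualized Loss Expectancy

print(sle, ale)  # 25000.0 50000.0
```

The ALE figure is what justifies (or rules out) a control: spending more per year on a mitigation than the ALE of the risk it mitigates is generally hard to defend.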




Some examples of policies that can help mitigate risk include:


Privacy Policy

Acceptable Use Policy

Data Classification Policy

Mandatory Vacations

Job Rotation

Separation of Duties

Least Privilege

Guidelines are another tool, a bit softer than policies. A security policy is a ‘thou shalt’ and ‘thou shalt not’ type of construct, whereas guidelines are more along the lines of best-practice recommendations.


Physical Security


Physical security measures may not be the most interesting part of information security, but they are an important component of the bigger picture: if intruders have access to your hardware, it’s only a matter of time before they get access to the data that’s on it.


Put simply, the primary objectives of physical security controls are to deter, detect, delay, assess, and respond to intruders.


Hardware locks, mantraps, video surveillance, fencing, proximity readers, access lists, proper lighting, signs, guards, barriers, biometrics, media storage facilities, protected cabling, alarms.


Know these fencing heights:

3 ft – 4 ft high – deters casual trespassers

6 ft – 8 ft high – too hard to climb easily

8 ft high with 3 strands of barbed wire – deters determined intruders

Types of fencing include chain link and barbed wire.



Sensor Types:

Wave pattern – generates a frequency wave pattern (in the low-frequency, ultrasonic or microwave range); if the pattern is disturbed as it is reflected back to its receiver, an alarm is triggered.


Capacitance – monitors an electrical field around an object; if the field is disturbed, the alarm is triggered. Used for spot protection.

Audio detectors – monitor for any abnormal sound wave generation (prone to false alarms).


Types of Locks

Key Locks

Key-in-knob or key-in-lever (cylindrical lockset) – only for low-security applications

Dead bolt locks or tubular dead bolts – good for storerooms and houses (the bolt is “thrown”)

Mortise locks (the lock case is recessed, or mortised, into the edge of the door) – low-security applications

Combination Locks

Combinations must be changed at specific times and under specific circumstances

Keyless (cipher) locks and push-button locks

Smart Locks

Permit only authorized people through certain doors at certain times




Environmental Security


Part of managing security involves physical security. A subcomponent of physical security involves the management of Environmental Controls.


Natural threats – weather, flooding; not preventable

Man-made threats – vandalism, theft, terrorist attacks, disruption and destruction

Electrical power issues:

Noise – EMI/RFI

Anomalies – brownouts, blackouts, faults

Electrostatic discharge

Transients – spikes and surges

Inrush current

Power and environmental protections include positive outward air pressure flow, generator backup, dedicated power circuits, line conditioning, grounding and cable shielding.

Relative humidity of 40% to 60% is ideal for datacenters: low humidity causes electrostatic discharge (ESD), while high humidity causes corrosion and rust.

Fire requires heat, oxygen and fuel. Maintain documentation of emergency procedures and conduct fire drills.

Fire classes:

Class A – common combustibles

Class B – flammable liquids and gases

Class C – electrical fires

Class D – combustible metals

Class K – commercial kitchen fires (cooking oils and fats)



Data Classification


Data classification is the process of organizing data into categories for its most effective and efficient use. Let’s talk about how to classify data effectively and identify classification-driven best-practice controls.

A well-planned data classification system makes essential data easy to find and retrieve. This can be of particular importance for risk management, legal discovery, and compliance. Written procedures and guidelines for data classification should define what categories and criteria the organization will use to classify data and specify the roles and responsibilities of employees within the organization regarding data stewardship. Once a data-classification scheme has been created, security standards that specify appropriate handling practices for each category, and storage standards that define the data’s lifecycle requirements, should be addressed.

To be effective, a classification scheme should be simple enough that all employees can execute it properly.


A. Restricted Data

Data should be classified as Restricted when the unauthorized disclosure, alteration or destruction of that data could cause a significant level of risk to the University or its affiliates.  Examples of Restricted data include data protected by state or federal privacy regulations and data protected by confidentiality agreements.  The highest level of security controls should be applied to Restricted data.


B. Private Data

Data should be classified as Private when the unauthorized disclosure, alteration or destruction of that data could result in a moderate level of risk to the University or its affiliates.  By default, all Institutional Data that is not explicitly classified as Restricted or Public data should be treated as Private data.  A reasonable level of security controls should be applied to Private data.

C. Public Data

Data should be classified as Public when the unauthorized disclosure, alteration or destruction of that data would result in little or no risk to the University and its affiliates. Examples of Public data include press releases, course information and research publications. While little or no controls are required to protect the confidentiality of Public data, some level of control is required to prevent unauthorized modification or destruction of Public data.




Operational Security


Once we have a Risk Management program in place we need to implement operational security to manage the day to day aspects of security.


The guiding principles of operational security involve deterring threat actors, detecting threats, correcting issues, and implementing compensating controls.


Some examples of typical operational security activities include account management, separation of duties (SoD) controls, and SOCs, or Security Operations Centers. A security operations center (SOC) is a centralized unit that deals with security issues on an organizational and technical level.


Use strong passwords

Update your software

Restrict access using firewalls

Enable Network Level Authentication

Limit users who can log in using Remote Desktop



Operational security involves periodic baselining of a normal, secure state. For example, vulnerability scanning of systems is a common operational security practice. Having a baseline for the security configuration of an operating system allows you to measure deviations from the baseline. For instance, if your scans are picking up default SNMP community strings, you should ensure that your baseline incorporates unique SNMP community strings going forward.
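A minimal sketch of measuring deviation from a baseline (hostnames and port sets are hypothetical): compare scan results against the approved configuration and report anything open that the baseline does not allow.

```python
# Compare vulnerability-scan results against a configuration baseline
# and report deviations (unexpected open ports per host).
baseline_open_ports = {"web01": {80, 443}, "db01": {5432}}

scan_results = {"web01": {80, 443, 161}, "db01": {5432}}  # 161 = SNMP

def deviations(baseline, scan):
    """Ports open in the scan that the baseline does not allow."""
    return {host: scan[host] - baseline.get(host, set())
            for host in scan if scan[host] - baseline.get(host, set())}

print(deviations(baseline_open_ports, scan_results))  # {'web01': {161}}
```

Here the unexpected SNMP port on web01 is exactly the kind of finding that should feed back into either remediation or a deliberate baseline update.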



Incident Response


Incident response is an organized approach to addressing and managing the aftermath of a security breach or attack. The goal is to handle the situation in a way that limits damage and reduces recovery time and costs. An incident response plan includes a policy that defines, in specific terms, what constitutes an incident and provides a step-by-step process that should be followed when an incident occurs.



Incident Response Goals

Verify that an incident occurred.

Maintain or restore business continuity.

Reduce the incident impact.

Determine how the attack was carried out, if an incident happened.

Prevent future attacks or incidents.

Improve security and incident response.

Prosecute illegal activity.

Keep management informed of the situation and response.


Incident Definition

An incident is any one or more of the following:

Loss of information confidentiality (data theft).

Compromise of information integrity (damage to data or unauthorized modification).

Theft of physical IT assets, including computers, storage devices, printers, etc.

Damage to physical IT assets, including computers, storage devices, printers, etc.

Denial of service.

Misuse of services, information, or assets.

Infection of systems by unauthorized or hostile software.

An attempt at unauthorized access.

Unauthorized changes to organizational hardware, software, or configuration.

Reports of unusual system behavior.

Responses to intrusion detection alarms.



Change Management


All IT environments require changes to be made on a fairly constant basis for the purpose of upkeep and enhancements.



OK, so we’ve conducted our weekly security scans of servers in our local datacenter. The security team has handed over a report showing that six key production servers have critical vulnerabilities that need to be patched. The security team says that they are critical vulnerabilities with known exploits in the wild and they are exploitable via the network. Alright, this sounds pretty bad so we need to implement the patches. So when should we implement these patches? What if something goes wrong when applying the patch? Is there a backout plan? Who made the change? How do we even know a change was made? How will this impact our customer SLAs?

Enter change management. Change management is effectively a set of processes which introduce visibility and governance into how changes are made. Otherwise admins can just arbitrarily apply changes and patches and potentially impact system stability at any given time. Configuration management (CM) is a systems engineering process for establishing and maintaining consistency of a product’s performance, functional, and physical attributes with its requirements, design, and operational information throughout its life.



Configuration identification is the process of identifying the attributes that define every aspect of a configuration item. A configuration item is a product (hardware and/or software) that has an end-user purpose. These attributes are recorded in configuration documentation and baselined. Baselining an attribute forces formal configuration change control processes to be effected in the event that these attributes are changed.

The benefits of a CMS/CMDB include being able to perform functions like root cause analysis, impact analysis, change management, and current state assessment for future state strategy development. There are numerous commercially available tools for change management, including Remedy and Service Desk among many others.



Disaster Recovery


While at first glance DR might not seem like a natural fit with cybersecurity, after further analysis we realize that disasters are threats that can inflict much more damage than any hacker.


The first step in DR planning is to conduct a business impact analysis as it relates to various information systems used in a given company’s environment. Some systems will typically be more critical than others when it comes to restoring acceptable operations after a disaster. Once these systems are identified they should be mapped to critical business processes and the lines of business should sign off on the criticality and impact to operations that would result if a supporting system was offline.




RTOs and RPOs are defined by the business. RTOs are Recovery Time Objectives. Meaning, ‘How long can we tolerate a system being offline before there is a critical impact to our business?’ The answer here will generally be specific to a given business process and to the set of systems that support that process.



RPOs are Recovery Point Objectives, meaning ‘How much data loss can we withstand in the event of a disaster?’ The initial kneejerk reaction here is often ‘Well, we don’t want any data loss’. This is where financial considerations come into play.


There are multiple types of DR strategies that companies can employ in order to prepare for business continuity processing. Approaches include Cold, Warm and Hot sites. A cold site is a site that has equipment that may be powered off and only activated in an emergency. Backup tapes may be shipped to a cold site every so often but never restored.

A Hot site is a DR site that is fully functional and ready to operate at a moment’s notice. Characteristics of hot sites include near real time data replication. Sometimes these types of sites are used for parallel processing.

Warm sites are characterized as having periodic data restoration with hardware on premises and active networks. A new trend in DR is for companies to leverage public cloud providers as Cold, Warm and Hot DR options.


Types of DR testing include read-throughs and walk-throughs. Read-throughs involve teams going over DR recovery scripts together and looking for flaws or missing steps periodically. Walk-throughs involve actually physically testing a DR plan in some capacity.


Technologies such as RAID, clustering and load balancing are typically used within a production site to provide resilience within a given site. Various technologies that can facilitate warm and even Hot DR sites include: Database log shipping and replication. Databases are often the most challenging portion to address in a DR plan as their content updates frequently and the information stored therein is usually business critical. Replicating or shipping the transaction logs in near real time or real time to Hot sites allows companies to meet the most stringent RTO’s.





Computer Forensics


Computer forensics is a branch of digital forensic science pertaining to evidence found in computers and digital storage media.


The goal of computer forensics is to examine digital media in a forensically sound manner with the aim of identifying, preserving, recovering, analyzing and presenting facts and opinions about the digital information.




Deleted files

A common technique used in computer forensics is the recovery of deleted files. Modern forensic software includes its own tools for recovering or carving out deleted data. Most operating systems and file systems do not always erase physical file data, allowing investigators to reconstruct it from the physical disk sectors. File carving involves searching for known file headers within the disk image and reconstructing deleted materials.
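The header-search idea behind file carving can be sketched in a few lines. This is a minimal illustration, assuming a raw disk image is already available as bytes; real carving tools (Foremost, Scalpel and the like) also handle fragmented files and many more formats.

```python
JPEG_HEADER = b"\xff\xd8\xff"   # JPEG start-of-image marker
JPEG_FOOTER = b"\xff\xd9"       # JPEG end-of-image marker

def carve_jpegs(image: bytes) -> list:
    """Return every byte run in the image that looks like a complete JPEG."""
    carved = []
    start = image.find(JPEG_HEADER)
    while start != -1:
        end = image.find(JPEG_FOOTER, start)
        if end == -1:
            break  # header with no footer: file is truncated or fragmented
        carved.append(image[start:end + len(JPEG_FOOTER)])
        start = image.find(JPEG_HEADER, end)
    return carved
```

Running this over a disk image recovers JPEG-shaped byte runs even when the file system no longer references them.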


Some of the tools needed to extract volatile data, however, require that a computer be in a forensic lab, both to maintain a legitimate chain of evidence, and to facilitate work on the machine. If necessary, law enforcement applies techniques to move a live, running desktop computer. These include a mouse-jiggler, which moves the mouse rapidly in small movements and prevents the computer from going to sleep accidentally. Usually, an uninterruptible power supply (UPS) provides power during transit.

However, one of the easiest ways to capture data is by actually saving the RAM data to disk.


To prove chain of custody, you’ll need a form that details how the evidence was handled every step of the way. This form should answer these five W’s (plus an H):

What is the evidence?

How did you get it?

When was it collected?

Who has handled it?

Why did that person handle it?

Where has it traveled, and where was it ultimately stored?






















Domain 3 – Threats and Vulnerabilities


Malware 101


It’s important to understand that malicious software exists in many forms, spreads through many different methods and is created to achieve many different types of results. There isn’t a single box or set of boxes that matter that we can cleanly place all malware in but there are a few key classification methods that are useful in categorizing what’s out there.


One way of classifying malware is by categorizing it by how it propagates. For instance, viruses typically propagate themselves by pre-pending or post-pending themselves to executables on a system which they reside.


Malware can also be classified by the concealment methods they use. Malware writers are always trying to outwit Anti-Virus software and this involves the continuous manipulation of how malware conceals itself from these scanners.

An Encrypted virus is a type of virus in which a component of the virus generates a random encryption key, encrypts the rest of the virus, and stores the key with the virus.


A Stealth virus is a virus type that is especially designed to conceal itself from detection by antivirus software. Here the entire virus is hidden, as opposed to just the payload. It can use techniques such as code mutation, compression, and rootkit techniques to achieve its desired effect.

Polymorphic viruses mutate with every infection, making detection by the “signature” of the virus very difficult.

A metamorphic virus rewrites itself completely at each iteration, which makes it more difficult to detect. Metamorphic viruses may change their behavior AND their appearance, making it even more difficult for malware scanning tools to detect them.



A malware payload refers to the malware’s outcome or intended effect. For instance, if malware is classified as Ransomware then that essentially describes what the malware actually does. Ransomware will encrypt a user’s data and require them to pay to decrypt and recover their information. Ransomware has become more popular in recent years. Key loggers are a type of malware that silently captures a user’s keystrokes with the intent to lift sensitive information such as usernames, passwords and credit card numbers.



Malware can also be classified by the targets they impact. Target classifications include Boot Sector viruses, File infectors and Macro viruses. Macro viruses are platform independent and specific to a given file type. A popular example is a Word or Excel Macro virus.


While host based measures represent a fundamental starting point, watching at the perimeter can help to further prevent, detect and/or control outbreaks using technologies such as WAFs, NIDS and NIPS.



Cyber attacks


A common challenge for developers since the conception of HTTP has been applying session state to a ‘stateless protocol’. Many constructs have been developed to manage state including cookies, Session ID’s, Security Tokens (Such as JWT’s) and manipulation of HTTP Header variables. Attackers can leverage these same constructs to manipulate HTTP sessions in order to exfiltrate, modify or even destroy information.


An HTTP Cookie is a minimal piece of data sent from a website and stored in the user’s browser and is characterized as a client-side session management technique. When a user loads the website, the browser sends the cookie back to the server to inform the server of the user’s prior activity. Session IDs are managed on the server side, although cookies can still be sent to the client in tandem. HTTP headers are fields transmitted after the request or response line, which is the first line of a message. Security Tokens come in many shapes and sizes. One modern example is a JWT (JSON Web Token, pronounced JOT): a JSON formatted token with one or more signed or encrypted JSON values, commonly used in REST based Web Security patterns. Categorically, ‘Client Side Attacks’ involve manipulating a user or user’s browser in some form or fashion. This could involve capturing session information and replaying it, luring users to fake malicious websites, exploiting vulnerabilities in browsers and injecting unauthorized code.


Session Hijacking is when an attacker takes over another user’s HTTP session.

An attacker uses whatever measures that the server/client was using to maintain the session. If cookies were used, then an attacker would need to intercept a cookie and could then impersonate the user.
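Since cookie-based hijacking depends on intercepting the session cookie, servers typically harden the cookie’s transport attributes. The sketch below uses only the Python standard library; frameworks such as Flask or Django expose the same flags, and the session value shown is illustrative.

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = "d41d8cd98f00b204"   # hypothetical session token
cookie["session_id"]["secure"] = True       # only ever sent over HTTPS
cookie["session_id"]["httponly"] = True     # not readable by page JavaScript
cookie["session_id"]["samesite"] = "Strict" # not sent on cross-site requests

# Render the header a server would emit with its HTTP response
header = cookie.output(header="Set-Cookie:")
```

The `Secure` flag blocks interception on the wire, while `HttpOnly` keeps injected scripts from reading the cookie value.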

Phishing is when an attacker attempts to lure a user to a website that appears legitimate but is not.


Server Side Mitigation involves ensuring that proper input validation testing and parameter cleansing has been added to any pages of the site that accept inputs.


Client Side Mitigation involves using browser-based scripting whitelist add-ons (such as NoScript for Firefox).



Cross Site Scripting is when an attacker finds an XSS hole in a website. Typically, this occurs when the attacker finds a means of injecting client-side script (often delivered via a malicious URL) by posting malicious input parameters into a form on the site which did not perform proper input validation.
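The standard server-side defense is to escape user-supplied text before it is rendered back into a page. A minimal sketch using Python’s standard library (the `render_comment` helper is illustrative):

```python
import html

def render_comment(user_input: str) -> str:
    """Escape user-supplied text before embedding it in HTML output."""
    return "<p>" + html.escape(user_input) + "</p>"
```

An injected `<script>` tag survives only as inert text such as `&lt;script&gt;`, so the victim’s browser never executes it.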


Cross Site Request Forgery takes place when a user has already established an authenticated session with a trusted site and subsequently clicks on a link (sent via e-mail or via a stumbled upon website) that redirects the user back to the trusted site they are already authenticated to and POSTS data that causes an unauthorized action to take place on that site on behalf of the victim.
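A common CSRF defense is the synchronizer token: the server embeds a random token in each legitimate form and rejects any POST that does not echo it back. The sketch below is illustrative (real frameworks manage this for you), using a plain dict to stand in for server-side session storage:

```python
import hmac
import secrets

def issue_token(session: dict) -> str:
    """Generate a CSRF token, store it with the session, return it for the form."""
    token = secrets.token_hex(16)
    session["csrf_token"] = token
    return token

def check_token(session: dict, submitted: str) -> bool:
    """Constant-time comparison of the submitted token against the stored one."""
    expected = session.get("csrf_token", "")
    return hmac.compare_digest(expected, submitted)
```

A forged cross-site request cannot know the token stored with the victim’s session, so its POST fails the check.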



DNS Security


When the internet was originally architected, services such as DNS weren’t necessarily designed with security in mind. The focus was more on facilitating usability and connectivity across users and devices, and to that end it’s obviously been a wild success.



Root Name Servers – Root Name Servers answer queries for top level domain names on the internet such as .com, .net, .org and .edu among many others. The Root Zone for the internet is controlled by the United States Department of Commerce. The Root Zone represents the apex of a hierarchical and distributed database of zones.


Second Level Name Servers – Examples of Second Level Domains include cnn.com, oracle.com and many others. These zones files/servers are typically managed by the companies which have registered and own the domain names. Root servers will refer down to these authoritative name servers.


Caching DNS Servers – Caching only DNS servers are not responsible or authoritative for any domain names. They forward requests to authoritative servers and cache the responses. ISPs typically install and maintain these types of servers in order to minimize name resolution traffic and improve performance.


DNS Resolvers – DNS Resolvers are client software components that initiate DNS requests as defined in their local TCP/IP configuration files. Resolvers often cache records according to their TTLs.


RFC 3833 DNSSEC was proposed in order to mitigate DNS Cache Poisoning as well as other DNS attacks by implementing a trusted chain of digital signatures to the extent that the other systems in the ecosystem can support it. Meaning it had to be backwards compatible for systems that didn’t support DNSSEC. This trusted chain of digital signatures works in much the same way that HTTP over SSL/TLS works to authenticate server identities via digital certificates using public key cryptography.

A group of new record types were created in DNSSEC. Some of which include:

RRSIG – contains the DNSSEC signature for a record set. DNS resolvers verify this signature with a public key, which is stored in a DNSKEY record.



Social Engineering


Social engineering is arguably the greatest single threat to cybersecurity.


Social engineering, in the context of information security, refers to psychological manipulation of people into performing actions or divulging confidential information. A type of confidence trick for the purpose of information gathering, fraud, or system access, it differs from a traditional “con” in that it is often one of many steps in a more complex fraud scheme.

The term “social engineering” as an act of psychological manipulation is also associated with the social sciences, but its usage has caught on among computer and information security professionals.

Techniques used by social engineers include – intimidation, authority, social proof, scarcity, urgency, familiarity and trust


Voice phishing is the criminal practice of using social engineering over the telephone system to gain access to private personal and financial information from the public for the purpose of financial reward. Sometimes referred to as ‘vishing’, the word is a combination of “voice” and phishing.


Spearphishing is similar to phishing with one key difference. Spearphishing is a targeted solicitation to a particular user. A common spearphishing technique is to use LinkedIn to find database administrators and to target these individuals directly with tailored and seemingly legitimate solicitations.


The first step in any security program is awareness. That being said educating your end-users and helpdesk personnel is critical. They should learn what to look for. They should not open unsolicited e-mails that look suspicious. They should never give out their password or personal information over the phone without some sort of agreed upon mutual verification process.


It’s also a recommended practice that security teams conduct internal phishing expeditions to test the awareness of internal users to attack and to further drive awareness. This kind of testing can also be used to refine awareness programs and verification processes.

Helpdesks should undergo thorough training in social engineering awareness as they can often be the target of social engineering attacks.



Aside from awareness technologies can be employed to help minimize the frequency and occurrence of social engineering attacks. Tools such as SPAM filters and DKIM for e-mail can help filter many phishing solicitations.


Wireless Attacks


802.11 forms the foundation for modern wireless protocols. 802.11 represents a standard for radio waves to communicate via 802.11 conformed frames. These frames are then transformed into Ethernet frames with embedded IP payloads. There are several 802.11 frame types to be familiar with which include Beacon Frames, Authentication Frames and De-Authentication Frames.

Beacon Frames announce the presence of a wireless access point. When you turn on the wireless on your smart phone or laptop and see the advertised wireless networks it is a function of the beacon frames those access points are sending out. A best practice is to turn off the beacons as an added security measure.

Authentication Frames are used to present a client’s identity to an access point. A de-authentication frame is used to tear down the communication between a client and an access point.


WEP (Wired Equivalent Privacy) was the earliest form of wireless encryption and is not considered to be very secure. It can be easily compromised using well known techniques and has relatively weak encryption strength. In WEP a stream cipher is generated using a pseudo random generated number (IV – initialization vector) in combination with a shared secret key. This stream cipher is referred to as the key stream.
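The WEP key stream construction can be sketched with a pure-Python RC4 implementation. The IV and shared key values below are illustrative (real WEP prepends a 24-bit IV to a 40- or 104-bit key), and sending the IV in the clear is exactly what makes WEP weak:

```python
def rc4_keystream(key: bytes, n: int) -> bytes:
    """Generate n bytes of RC4 keystream from the given key."""
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    i = j = 0
    out = bytearray()
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

# In WEP the per-packet RC4 seed is the IV (sent in the clear)
# prepended to the shared secret key:
iv = b"\x01\x02\x03"              # illustrative 24-bit IV
shared_key = b"secret key"        # illustrative shared secret
keystream = rc4_keystream(iv + shared_key, 16)
```

The plaintext frame is then XORed with this keystream; because the IV is short and visible, attackers collecting enough packets can recover the shared key.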


WPA (Wi-Fi Protected Access) improved on WEP in several ways:

A new key is generated for each packet, as opposed to a new IV for each packet as in WEP.

TKIP (Temporal Key Integrity Protocol) – RC4 cipher used with a 128-bit per-packet key.

A Message Integrity Check replaces the CRC checksum that was used in WEP.

WPA Modes

Personal (Pre-shared key)

WPA-Enterprise – Uses enterprise 802.1x RADIUS based authentication

Wi-Fi Protected Setup – WPS buttons on wireless routers to ease setup. Introduces vulnerabilities.


Wireless Attack types

p<>{color:#000;}. Aircrack-ng – a suite of tools for capturing 802.11 traffic and cracking WEP and WPA/WPA2-PSK keys

p<>{color:#000;}. Password cracking – dictionary attacks

p<>{color:#000;}. WEP, WPA/WPA2 – brute force – John the Ripper

p<>{color:#000;}. Packet spoofing

p<>{color:#000;}. Reaver – WPS attacks on WPA/WPA2




Advanced Wireless Attacks


An Evil Twin is a rogue Wi-Fi access point that appears to be a legitimate one offered on the premises, but actually has been set up to eavesdrop on wireless communications.


Ad hoc networks

 Public hotspots use infrastructure APs to connect many users to the Internet. The alternative – ad hoc mode – connects peers directly to each other, such as to share a printer.





Evil twins can wait passively for users to take the bait. But real hackers would probably use free tools like aireplay-ng to speed things up by disconnecting all users, hoping some will reconnect to the evil twin.


Warchalking is the drawing of symbols in public places to advertise an open Wi-Fi network.


APs with factory-default omni antennas cover an area that’s roughly circular, impacted by RF obstacles like walls. It is therefore common to place APs in central locations, or divide an office into quadrants, deploying one AP per cell.



Replacing your 802.11a/b/g AP’s “rubber ducky” omni antennas with directional antennas can better focus radiated power where it belongs, improving inside coverage and reducing outside signal leakage.


The EAP, LEAP, PEAP, EAP-TLS, and EAP-TTLS protocols were developed in order to help provide additional security for the transmission or transport of authenticating information over a network. The 802.1X standard includes several of these protocols and is able to provide a network administrator with both a stronger authentication methodology than WEP provides as well as a means to both derive and distribute stronger keys to network clients to further improve the strength of WEP.

MAC filtering allows an admin to permit connections only from endpoints with authorized MAC addresses.
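The filtering decision itself is just a set-membership check, as the sketch below shows (the allow-list entries are illustrative). Note that MAC addresses are easily spoofed, so this is a weak control on its own:

```python
# Hypothetical administrator-maintained allow-list of adapter addresses
ALLOWED_MACS = {"aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02"}

def is_allowed(mac: str) -> bool:
    """Return True if the client's MAC address is on the allow-list."""
    return mac.lower() in ALLOWED_MACS  # normalize case before comparing
```

An access point applies this check when a client attempts to associate, dropping frames from unlisted addresses.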







Cross-Site Scripting Attacks


Cross-site scripting (XSS) is a type of computer security vulnerability typically found in web applications. XSS enables attackers to inject client-side scripts into web pages viewed by other users. A cross-site scripting vulnerability may be used by attackers to bypass access controls such as the same-origin policy.



Buffer Overflows


Here we will talk about attacking applications using buffer overflow techniques in order to execute arbitrary malicious code and we will also identify ways to mitigate these attacks.


Memory is allocated by operating systems in order to store data and execute compiled code in an orderly manner leveraging available memory resources.

Memory is divided up into several areas.

First we have the Text segment, which contains executable code.

Next we have the Data segment, which holds initialized global and static variables and constants, while BSS holds uninitialized global and static variables.


p<>{color:#000;}. Heap – dynamically allocated memory that contains variables that can be accessed across various routines

p<>{color:#000;}. Stack – Usually consists of local variables. A contiguous region of memory.

p<>{color:#000;}. There are two types of operations on the stack: pushing and popping.

p<>{color:#000;}. Pushing – places data, such as the contents of CPU registers, onto the top of the stack

p<>{color:#000;}. Popping – removes data from the top of the stack, typically back into CPU registers


The stack is divided up into units called stack frames. Each stack frame contains all data specific to a particular call to a particular function. This data typically includes the function’s parameters, the complete set of local variables within that function, and linkage information—that is, the return address (where execution continues when the function returns). Depending on compiler flags, it may also contain the address of the top of the next stack frame. The exact content and order of data on the stack depends on the operating system and CPU architecture.



Although most programming languages check input against storage to prevent buffer overflows and underflows, C, Objective-C, and C++ do not.


ASLR – Address Space Layout Randomization is a computer security technique that helps protect against buffer overflow attacks by randomizing the memory locations of key process areas.



DEP – Data Execution Prevention – Data Execution Prevention keeps tabs on all the programs and system services (e.g. device drivers) to monitor how they use the system memory, as well as the data stored in it.


NOP sledding – NOP stands for No-Operation instruction. Long runs of these instructions are placed in memory so that execution ‘slides’ across them toward the attacker’s payload.


Return-to-libc – a computer security attack usually starting with a buffer overflow, in which a subroutine return address on a call stack is replaced by the address of a subroutine that is already present in the process’s executable memory, bypassing the NX bit feature.


Security Testing Tools


There are practically an infinite number of security testing tools available both free and paid. In this lesson we will begin to scratch the surface of some of these common tools and identify how we categorize them and their uses.



Testing tools are generally categorized as:

p<>{color:#000;}. Network/Wi-Fi, Cracking/Encryption, Apps, Database, Vulnerability Scanners, Exploitation and Forensics.

p<>{color:#000;}. Network/Wi-Fi tools are usually used for port and vulnerability scanning, packet capture and packet manipulation.

p<>{color:#000;}. Cracking tools are used to audit encryption strength and password strength by way of brute force and dictionary attacks.

p<>{color:#000;}. App Security Tools operate at the application layer and can use a number of known exploits to compromise applications and data.

p<>{color:#000;}. Database Security Testing tools leverage techniques such as SQL Injection to harvest and manipulate data from database stores.

p<>{color:#000;}. Vulnerability Scanners are used to identify unpatched systems and look for weak system configurations.

p<>{color:#000;}. Exploitation Tools are used to design and deliver actual exploit payloads against target systems.

p<>{color:#000;}. Lastly, Forensics tools are used to capture legal evidence from hard disks and their associated file-systems and databases.



Nmap is a basic tool that should be in any security professional’s bag. Nmap is short for network mapper. Nmap is a free and open source tool that can be used for auditing and network discovery. Nmap can be used to conduct both CONNECT and SYN scans.
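The CONNECT scan is the simpler of the two: it just completes a full TCP handshake against each port. A minimal sketch in Python follows (SYN scans require crafting raw packets with elevated privileges, which nmap handles for you):

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """TCP CONNECT probe: attempt a full handshake, like nmap -sT."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0  # 0 means connect succeeded
```

For example, `port_is_open("127.0.0.1", 22)` reports whether a local SSH daemon is listening; nmap does the same across whole port ranges and networks.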



Application security tools tend to focus on the contents of traffic at Layer 7 of the OSI model. The actual application content.


SQL Injection is when attackers manipulate HTTP Form input such that it is not correctly validated and causes the underlying database server to respond in a way that was not authorized.
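The contrast between injectable and safe query construction can be shown with Python’s built-in sqlite3 driver. The table and credentials below are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_unsafe(name: str) -> list:
    # DON'T: attacker input is concatenated into the SQL statement itself
    return conn.execute(
        "SELECT name FROM users WHERE name = '" + name + "'").fetchall()

def login_safe(name: str) -> list:
    # DO: a parameterized query treats the input strictly as data
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"  # classic injection string
```

Passing the payload to `login_unsafe` rewrites the WHERE clause so it matches every row, while `login_safe` simply searches for a user literally named `' OR '1'='1` and finds nothing.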


Forensic tools are used to capture evidence typically for legal proceedings and to ensure a chain of custody of information for formal investigations.



Security Information and Event Management (SIEM)


Management of logs is a key component of operational security. These days the velocity, variety and volume of data collected via logs has catapulted log management into the realm of Big Data.


Log management (LM) comprises an approach to dealing with large volumes of computer-generated log messages (also known as audit records, audit trails, event-logs, etc.). LM covers:

log collection

centralized aggregation

long-term retention

log rotation

log analysis (in real-time and in bulk after storage)

log search and reporting.
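The log rotation item above can be sketched with Python’s standard logging module: once a file reaches a size cap it is renamed and a fresh file is started. Paths and sizes here are illustrative; in practice an agent would then ship these files (or a syslog stream) to a central aggregator.

```python
import logging
import logging.handlers
import os
import tempfile

logdir = tempfile.mkdtemp()
logfile = os.path.join(logdir, "app.log")

# Rotate at 1 KB, keeping app.log.1 .. app.log.3 as history
handler = logging.handlers.RotatingFileHandler(
    logfile, maxBytes=1024, backupCount=3)
logger = logging.getLogger("demo")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

for i in range(200):
    logger.info("event number %d", i)  # enough volume to force rotation
```

After the loop, `app.log` holds the newest events and `app.log.1` the previous generation, which is exactly the retention behavior a rotation policy describes.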


Elements of the Log

Such logs shall identify or contain at least the following elements, directly or indirectly. In this context, the term “indirectly” means unambiguously inferred.

Type of action – examples include authorize, create, read, update, delete, and accept network connection.

Subsystem performing the action – examples include process or transaction name, process or transaction identifier.

Identifiers (as many as available) for the subject requesting the action – examples include user name, computer name, IP address, and MAC address. Note that such identifiers should be standardized in order to facilitate log correlation.






Security information and event management (SIEM)


SIEM software products and services combine security information management (SIM) and security event management (SEM), and provide real-time analysis of security alerts generated by network hardware and applications.



p<>{color:#000;}. Search – By date and time, by event type, by criticality, by account/user ID, by department

p<>{color:#000;}. Sorting – By date and time, by event type, by criticality, by account/user ID, by department



Platform Hardening and Baselining


Minimizing the attack surface area of operating systems, databases and applications is a key tenet of operational security.


In configuration management, a “baseline” is an agreed description of the attributes of a product, at a point in time, which serves as a basis for defining change. A “change” is a movement from this baseline state to a next state. The identification of significant changes from the baseline state is the central purpose of baseline identification.



So having a checklist and implementing it is one thing, but we need to periodically validate the OS against the baseline because things change in computing environments. Sometimes the application of a patch may open up or enable a service for instance. Vulnerability scanning is a great way to automate some of this periodic testing.
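The periodic validation step amounts to diffing the current state against the baseline. A minimal sketch follows; the baseline settings and the “current state” dict are illustrative, whereas a real scanner would query the OS for services, ports and policy values:

```python
# Hypothetical hardening baseline for a server class
BASELINE = {
    "telnet_enabled": False,
    "ssh_protocol": 2,
    "password_min_length": 12,
}

def drift(current: dict) -> dict:
    """Return every setting whose current value deviates from the baseline."""
    return {k: current.get(k) for k, v in BASELINE.items()
            if current.get(k) != v}
```

An empty result means the host still matches its baseline; any entries returned (say, telnet re-enabled by a patch) are candidates for the change management process described earlier.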



As we learned in this section, minimizing the attack surface area of operating systems, databases and applications is a key tenet of operational security.


Honeypots and Honeynets


Luring attackers away from critical data and studying their behavior can help us to protect the data that matters most.


A honeynet is a network set up with intentional vulnerabilities. Its purpose is to invite attack, so that the attacker’s activities and behaviors can be studied and that information used to increase network security. Honeynets consist of one or more honey pots.



Production honeypots are easy to use, capture only limited information, and are used primarily by companies or corporations.


Research honeypots are run to gather information about the motives and tactics of the Black hat community targeting different networks.


Pure honeypots are full-fledged production systems.



High-interaction honeypots imitate the activities of the production systems that host a variety of services; an attacker may therefore be given many services to waste time on.


Low-interaction honeypots simulate only the services frequently requested by attackers.


Honeypots can be placed outside the network entirely, in front of the outer firewall. The pros of this approach are that it doesn’t use firewall and IDS resources and doesn’t compromise the DMZ or internal network. A drawback to this type of placement is that it doesn’t catch internal attackers. Honeypots can also be placed in the DMZ and/or in the internal network. The pro is that you can potentially lure and catch external and internal hackers, but the con is that you may be creating a launch point for attackers to compromise your DMZ and internally trusted network.



Vulnerability Scanning and Pen Testing


Vulnerability Assessment and Pen Testing are often terms that are used interchangeably. In this section we will walk through some of the differences and commonalities between the two.



Log Reviews – Involves having a human interpret and sometimes correlate log data to identify unauthorized activities.

Synthetic transactions – when you perform an automated test transaction against a working system to monitor its behavior in real time.

Code Review can be manual or dynamic and can include techniques such as fuzzing, which is a black box software testing technique that basically consists of finding implementation bugs by injecting malformed/semi-malformed data in an automated fashion.
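A toy fuzzer makes the idea concrete: generate malformed inputs at random and record which ones crash the target. Both the target parser and its bug are hypothetical here; real fuzzers (AFL, libFuzzer and similar) add coverage feedback and input mutation.

```python
import random

def fragile_parser(data: bytes) -> int:
    """Hypothetical target with a deliberate bug on empty input."""
    return data[0]  # IndexError when data is empty

def fuzz(target, trials: int = 100, seed: int = 1) -> list:
    """Feed random byte strings to the target; collect inputs that crash it."""
    rng = random.Random(seed)  # seeded so runs are reproducible
    crashes = []
    for _ in range(trials):
        case = bytes(rng.randrange(256) for _ in range(rng.randrange(8)))
        try:
            target(case)
        except Exception:
            crashes.append(case)  # an unhandled exception is a finding
    return crashes
```

Each crashing input becomes a reproducible test case that developers can triage into a bug report.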

Where threat modeling works more at the architecture level, misuse cases target individual functionalities and detail the individual threats to that functionality.







Security Content Automation Protocol (SCAP)


The National Vulnerability Database (NVD) publishes data in SCAP formats, enabling automation of vulnerability management, security measurement, and compliance. NVD includes databases of security checklists, security related software flaws, misconfigurations, product names, and impact metrics.

Within SCAP, the Common Vulnerabilities and Exposures (CVE) system provides a reference method for publicly known information-security vulnerabilities and exposures.


In white-box testing an internal perspective of the system, as well as programming skills, are used to design test cases. The tester chooses inputs to exercise paths through the code and determine the appropriate outputs. This method usually assumes that the testers have full access to the source code. Static application security testing (SAST) is a set of technologies designed to analyze application source code, byte code and binaries for coding and design conditions that are indicative of security vulnerabilities.


Dynamic application security testing (DAST) technologies are designed to detect conditions indicative of a security vulnerability in an application in its running state.


Threat Modeling


First let’s go over some key terminology:

A threat is a potential for violation of security, which exists when there is a circumstance, capability, action, or event that could breach security and cause harm.

A threat vector is the method a threat uses to get to the target. Threat modeling involves understanding various threat vectors and modeling threats.

A threat model is used to describe a given threat and the harm it could do to a system if it has a vulnerability.

A threat assessment is the identification of types of threats that an organization might be exposed to.



When planning for threats it is important to research common threats: if a threat occurs frequently in other environments, it is likely to occur in your environment as well.

Common threats to pay particular attention to include:


Social engineering


Insider Attacks


Identifying threat actors is also an important part of the threat identification process.





Cloud Computing: Attacks

When it comes to cloud security, vulnerabilities have unfortunately been found in cloud environments, and these lead to attacks. The following are some of the well-known attacks in the cloud environment.


Denial of Service (DoS) attacks: The definition of a DoS attack remains the same in the cloud, i.e. it prevents users from accessing a service. However, in a cloud environment DoS attacks get nastier. The cloud, by design, keeps adding computational power in response to load, which can actually feed the attack and make it stronger. The problem is further aggravated when DDoS comes into the picture, as many compromised machines are used to attack a large number of systems.


Malware Injection Attack: This attack focuses on adding or injecting a malicious service implementation or rogue virtual machine into the cloud environment.















Domain 4 – Applications, Data and Host Security


Application Design Flaws and Bugs


Design flaws are fundamentally different from bugs. Bugs are very local in nature and usually reflect a flaw in implementation; SQL injection and cross-site scripting are typical examples. Bugs can typically be addressed through a simple patch and do not require a change to the requirements.



A design flaw however represents a miscalculation in requirements, missing requirements or flawed design. These types of issues are more fundamental and generally cannot be addressed with a simple patch.


Circumventing the navigation of a website is a simple example. For instance, if http://site.com uses http://site.com/initstep=1 before an authentication event takes place, then an attacker may deduce that changing the value to http://site.com/initstep=2 will move the transaction into its next state, which might actually be a post-authentication step.
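The defense against this class of flaw is to track transaction state on the server and validate every transition there, rather than trusting a client-supplied URL parameter. A minimal sketch, where the workflow step names are purely hypothetical:

```python
# Hypothetical workflow: the steps, in the only order they may occur.
WORKFLOW = ["initstep=1", "authenticate", "initstep=2", "confirm"]

def next_allowed_step(session_state: str) -> str:
    """Return the only step a client may advance to, based on
    server-side session state, never on a client-supplied URL."""
    idx = WORKFLOW.index(session_state)
    return WORKFLOW[idx + 1] if idx + 1 < len(WORKFLOW) else session_state

def validate_transition(session_state: str, requested: str) -> bool:
    """Reject any request that skips ahead, e.g. jumping from
    initstep=1 straight to initstep=2 without authenticating."""
    return requested == next_allowed_step(session_state)
```

Because the current state lives in the server-side session, editing the URL no longer moves the transaction forward.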

Another typical design flaw is when admin privileges are not well protected; perhaps the default username and password are still in place. A real-world example of this is illustrated by many home routers and access points, which ship with default usernames and passwords. An attacker can simply log in to the router and assume administrative privileges.

Abuse of password recovery features is also a common attack vector that stems from flawed design. For instance, if a password recovery question asks what high school I went to, that is something an attacker could easily research on Facebook. This is flawed design.



Cloud Security


Pure Play Cloud Security Services are agnostic third party security services meant to be consumed independently.

Embedded Cloud Security Services are services that are sidecars to existing vendor offerings such as SaaS, PaaS and IaaS environments.

Examples of Embedded Cloud Security Services include Salesforce.com, which offers services specific to securing the Salesforce application in particular but not necessarily for use with third parties, and Amazon Web Services, whose Identity and Access Management service is used exclusively for identity management activities specific to Amazon’s AWS cloud.

Pure Play Cloud Security Services examples include vendors such as Ping and Okta which provide Authentication and Authorization as a Service independently.



Pure Play Cloud Security Service Categories include:

p<>{color:#000;}. Authentication/Authorization as a Service – SSO, multifactor, and context-based authentication

p<>{color:#000;}. Auditing as a Service – Log aggregation and correlation services which pull in on-prem and aggregate cloud services logs.

p<>{color:#000;}. User Provisioning as a Service – On-prem and cloud user provisioning services

p<>{color:#000;}. Threat Intelligence Services – Intel sharing services

p<>{color:#000;}. Cloud Web Services Security Brokers – which broker and manage REST/SOAP API security through encapsulation and security rules.

p<>{color:#000;}. Data Loss Prevention Services – which help to prevent egress of sensitive data through various channels.

p<>{color:#000;}. Web Security/E-mail Security – Web proxy services and e-mail security filtering services hosted in the cloud

p<>{color:#000;}. Pen Testing/Security Testing – External pen testing services from the cloud.

p<>{color:#000;}. Network Security Services – IDS/IPS, which can be on-prem and managed remotely as a service.

p<>{color:#000;}. Encryption Services – include Third party encryption key management services

p<>{color:#000;}. Mobile Device Management – Data wipes, policy enforcement, auditing.



Web Attacks


There are so many different types of attacks sometimes it can be challenging to address them all within the context of our various lessons. So in this section I’ve pulled together some attack types that haven’t necessarily been covered in the other sections.

Let’s start with Shoulder surfing. In computer security, shoulder surfing refers to using direct observation techniques, such as looking over someone’s shoulder, to get information


Dumpster diving is looking for treasure in someone else’s trash. (A dumpster is a large trash container.)


A zero-day (also known as zero-hour or 0-day) vulnerability is a previously undisclosed computer-software vulnerability that hackers can exploit to adversely affect computer programs, data, additional computers or a network.


In cryptography and computer security, a man-in-the-middle attack (often abbreviated to MITM, MitM, MIM or MiM attack or MITMA) is an attack where the attacker secretly relays

and possibly alters the communication between two parties who believe they are directly communicating with each other.


SPIM – Messaging spam, i.e. spam delivered over instant messaging, is sometimes called SPIM.


Christmas tree packet is a packet with every single option set for whatever protocol is in use.


Pharming is a cyberattack intended to redirect a website’s traffic to another, fake site. Pharming can be conducted either by changing the hosts file on a victim’s computer or by exploitation of a vulnerability in DNS server software.


An integer overflow occurs when an arithmetic operation attempts to create a numeric value that is too large to be represented within the available storage space.
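The wraparound behavior can be shown concretely. Python integers do not overflow, so this sketch simulates storage in a 32-bit signed integer, the way C or Java silently wrap:

```python
def wrap_int32(value: int) -> int:
    """Simulate storing `value` in a 32-bit signed integer,
    which silently wraps on overflow (as in C or Java)."""
    value &= 0xFFFFFFFF                          # keep only the low 32 bits
    return value - 0x100000000 if value >= 0x80000000 else value

INT32_MAX = 2**31 - 1                            # 2147483647

# Adding 1 to the maximum representable value wraps to the minimum.
overflowed = wrap_int32(INT32_MAX + 1)           # → -2147483648
```

This is exactly why overflowed length or size calculations can defeat bounds checks and lead to buffer overflows.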


Non-standard system attacks can include attacks on game-consoles, in-vehicle systems and even mainframes.



Big Data Security


Big data is a broad term for data sets so large or complex that traditional data processing applications are inadequate. Big data is defined by significant data volume, variety and velocity. With the aggregation of these large data sets come new security challenges.


p<>{color:#000;}. Enable Access Control and Enforce Authentication

p<>{color:#000;}. Configure Role-Based Access Control

p<>{color:#000;}. Encrypt Communication

p<>{color:#000;}. Limit Network Exposure

p<>{color:#000;}. Audit System Activity

p<>{color:#000;}. Encrypt and Protect Data

p<>{color:#000;}. Run NoSQL with a Dedicated User

p<>{color:#000;}. Run NoSQL with Secure Configuration Options

p<>{color:#000;}. Request a Security Technical Implementation Guide (where applicable)

p<>{color:#000;}. Consider Security Standards Compliance


Hadoop Threat Model

p<>{color:#000;}. Unauthorized data access (protected health information access)

p<>{color:#000;}. Unauthorized data change

p<>{color:#000;}. Unauthorized job submission, delete or change

p<>{color:#000;}. Task may access other tasks or access local data

p<>{color:#000;}. Rogue DataNode, NameNode or Job Tracker

p<>{color:#000;}. User spoofing to submit workflow as another user



Securing Hadoop involves configuring:

p<>{color:#000;}. Authentication


p<>{color:#000;}. Authorization

Limit user access to function

Limit user access to objects

Manage delegation of access

p<>{color:#000;}. Auditing

Failed/successful authentication

System changes

p<>{color:#000;}. Data Protection

Encryption at rest: volume and file encryption

Encryption in transit: HTTPS

Mobile Security and Device Management


In order to effectively implement mobile security controls at corporate scale, a new class of technologies has emerged called Mobile Device Management solutions. Mobile device management (MDM) is an industry term for the administration of mobile devices, such as smartphones, tablet computers, laptops and desktop computers. MDM is usually implemented with a third-party product that has management features for particular vendors of mobile devices.



Mobile application management (MAM) describes software and services responsible for provisioning and controlling access to internally developed and commercially available mobile apps used in business settings on both company-provided and “bring your own” smartphones and tablet computers.


One of the biggest knocks on Android security is the lack of controls in the various Android app marketplaces. In many cases there isn’t a clear vetting and governance process in these environments, and this leads to the propagation of mobile malware, which has become rampant.


iOS security arguably has a much stronger story than Android. This primarily has to do with the command and control Apple exercises in vetting apps in its App Store marketplace, as well as its tight control of hardware and the closed-source iOS software. There have been a few notable exceptions. In particular, a hacked copy of Xcode called XcodeGhost was released to some unsuspecting developers in China and other parts of Asia. Xcode is the development environment for iOS applications. When developers used the hacked copy of Xcode, backdoors and the like were inserted into their apps before submission to the App Store for vetting. Apple unfortunately didn’t catch some of these apps, and users who downloaded apps built with the compromised Xcode were potentially compromised.

XcodeGhost is an example of compiler malware. Instead of trying to create a malicious app and get it approved in the App Store, XcodeGhost’s creator(s) targeted Apple’s legitimate iOS/OS X app development tool, Xcode, to distribute the malicious code in legitimate apps.

XcodeGhost’s creators repackaged Xcode installers with the malicious code and published links to the installer on many popular forums for iOS/OS X developers. Developers were enticed into downloading this tampered version of Xcode because it would download much faster in China than the official version from Apple’s Mac App Store.


Key Management


Any encryption is only as good as the protection of its keys.


The scope of keys that need to be managed include keys for cloud access such as SSH keys for Amazon Machine Images, Database encryption keys, file encryption keys, application encryption keys and digital certificates. There are even keys that encrypt other keys that need to be managed!



The Key Management Interoperability Protocol (KMIP) is a communication protocol that defines message formats for the manipulation of cryptographic keys on a key management server. Keys may be created on a server and then retrieved, possibly wrapped by other keys. Both symmetric and asymmetric keys are supported, including the ability to sign certificates. KMIP also defines messages that can be used to perform cryptographic operation on a server such as encrypt and decrypt.


Separation of Duties – This is a widely known control put in place to prevent fraud and other mishandling of information. Separation of duties means that different people control different procedures so that no one person controls multiple procedures. When it comes to encryption key management, the person who manages encryption keys should not be the same person who has access to the encrypted data.

Dual Control – Dual control means that at least two or more people control a single process. In encryption key management, this means at least two people should be needed to authenticate access to an encryption key, so that no single person can access an encryption key alone.

Split Knowledge – Split knowledge prevents any one person from knowing the complete value of an encryption key or passcode. Two or more people should know parts of the value, and all must be present to create or re-create the encryption key or passcode.
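Split knowledge is commonly realized with XOR secret sharing, where each share on its own is indistinguishable from random data. A minimal sketch; the function names are illustrative, not from any particular key management product:

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, parties: int = 2):
    """Split `key` into `parties` XOR shares: any subset smaller than
    the full set reveals nothing, but all shares together rebuild the key."""
    shares = [secrets.token_bytes(len(key)) for _ in range(parties - 1)]
    shares.append(reduce(xor_bytes, shares, key))   # final share closes the XOR
    return shares

def recombine(shares) -> bytes:
    """XOR all shares back together to recover the original key."""
    return reduce(xor_bytes, shares)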


Key escrow (also known as a “fair” cryptosystem) is an arrangement in which the keys needed to decrypt encrypted data are held in escrow so that, under certain circumstances, an authorized third party may gain access to those keys. These third parties may include businesses, which may want access to employees’ private communications, or governments, which may wish to view the contents of encrypted communications.

Key exchange can be performed in-band, over the same channel that will carry the encrypted communication, or out-of-band, over a separate channel such as a phone call or courier.



Virtualization and Cloud Security


Virtualization technologies add another layer of security considerations when protecting IT environments.


Virtualization can mean many things at different layers of the stack. At the network layer you have VLANs, MPLS networks and even SDN (Software Defined Networking) technologies such as OpenFlow. At the storage layer you have VSANs. At the hardware and OS layer you have hypervisors for machine virtualization and containers for runtime virtualization and isolation. Databases have even gotten in on the act using container technology.


Kernel-based Virtual Machine (KVM) is a virtualization infrastructure for the Linux kernel that turns it into a hypervisor.


Docker is an open-source project that automates the deployment of applications inside software containers, by providing an additional layer of abstraction and automation of operating-system-level virtualization on Linux.


Kubernetes is an open source container cluster manager by Google. It aims to provide a “platform for automating deployment, scaling, and operations of application containers across clusters of hosts”.


OpenFlow is a communications protocol that gives access to the forwarding plane of a network switch or router over the network.

OpenFlow enables network controllers to determine the path of network packets across a network of switches. The controllers are distinct from the switches. This separation of the control from the forwarding allows for more sophisticated traffic management than is feasible using access control lists (ACLs) and routing protocols. Also, OpenFlow allows switches from different vendors — often each with their own proprietary interfaces and scripting languages — to be managed remotely using a single, open protocol. The protocol’s inventors consider OpenFlow an enabler of software defined networking (SDN).



Data leakage protection and usage monitoring

When you’re deploying IaaS in a public cloud, it’s important to know who is accessing the information, how the information was accessed (from what type of device), the location from which it was accessed (source IP address), and what happened to that information after it was accessed (was it forwarded to another user or copied to another site)?



SaaS Security Best Practices

Analyzing the OWASP Top 10 is a solid place to start when planning for SaaS security.


Securing Storage and Storage Platforms


Storage Security has more moving parts than ever, especially considering the convergence of IP and Storage Networking technologies.


Ephemeral storage is temporary in nature. It typically only lasts for the life of a given OS or VM instance.


Raw disk refers to an unformatted disk. In computing, the term raw disk refers to hard disk access at a raw, binary level, beneath the file system level, using the partition data in the MBR.



Technologies such as iSCSI, FCIP, FCoE and NAS encapsulate storage communication protocols such as SCSI, Fibre Channel, SMB and CIFS in Ethernet frames and, in some cases, IP packets. Traditional SAN technologies were not internet accessible because they were not IP or Ethernet enabled. For data center and storage architects there are distinct advantages in converging over Ethernet and IP: it reduces administrative complexity and cost, allows common networking and fabric technologies to be leveraged, and provides more flexibility in deployment. As with most things, ease of access and use has the potential to compromise security, and converged network storage is no exception.

Authentication, authorization, access control and confidentiality must be looked at closely when considering converged storage networks. End-to-end switch authentication in Fibre Channel and iSCSI networks should be incorporated into production deployments. From an authorization perspective, storage ports can perform device masking and allocate storage access based on authenticated initiators.



From a Storage Access Control Perspective Zoning in Fibre-Channel limits the scope of discovery based on the end node identity.


Encryption can take place at many levels within a computing infrastructure, and there are pros and cons to each approach. Some application developers choose to encrypt at the application level, meaning they use code in the application to encrypt and decrypt information. The pro of this approach is that it gives the developer extreme flexibility in what and how to encrypt; by the same token, that flexibility is also its key drawback. When encrypting at the application layer there tends to be a lack of standardization, and best practices are not always followed when it comes to encryption algorithms, methods, data classification and key management. Data can also be encrypted at the database level. Using this approach, the database kernel manages the encryption process in a way that is transparent to the application interacting with it. Benefits of this approach are that encryption can be applied at the column or tablespace level and it tends to be fairly performant, especially with databases that can offload encryption activities to underlying hardware.


Information Lifecycle Management


Information Lifecycle Management (sometimes abbreviated ILM) refers to a wide-ranging set of strategies for administering storage systems on computing devices.


Information lifecycle management often refers to the management of unstructured data but can also include database content as well. Unstructured content includes files such as Microsoft Office documents, spreadsheets or any other file based content. E-mail factors in as part of the ecosystem as it relates to the content of the email and the attachments therein. Many companies use Content Management Systems such as Microsoft Sharepoint or other similar technologies to better manage and classify raw file content.



ILM includes every phase of a “record” from its beginning to its end. And while it is generally applied to information that rises to the classic definition of a record (records management), it applies to any and all informational assets. During its existence, information can become a record by being identified as documenting a business transaction or as satisfying a business need. In this sense ILM has been part of the overall approach of Enterprise Content Management (ECM).



Electronic discovery (or e-discovery or ediscovery) refers to discovery in litigation or government investigations which deals with the exchange of information in electronic format


Data remanence is the residual representation of digital data that remains even after attempts have been made to remove or erase the data. To address this many companies are looking to cryptographic erasure as a solution for effectively ‘deleting’ data.



Hacking IoT


The Internet of Things represents a greater attack landscape for hackers.

There are various types of ATM attacks. One is called ram raiding; you’ve probably seen clips on the local news where attackers ram a vehicle into an ATM to break it open and carry off the exposed cash. Another uses a ‘Lebanese Loop’, a device the attacker slides into an ATM card reader to snare the card of the next ATM user so that the card is not returned. After the frustrated ATM user leaves, the attacker comes back and collects the card. Skimmers are another well-documented attack vector.


Skimmers read and store magnetic stripe information when a card is swiped. Attackers can take this information and recreate fake ATM cards from scratch.

To counter ram-raiding attacks many banks are putting physical security barriers in place and incorporating the use of dye packs. The barriers help prevent the machine from being physically compromised, and the dye pack serves as a defense-in-depth measure: if the machine is compromised, the cash is rendered useless after being sprayed with dye.


NFC stands for Near Field Communication. NFC technology consists of a chip and an antenna and is now prevalent in many smartphones. Contactless payment via Apple Pay is commonly available at retail terminals, in taxis, at parking meters and via many other devices. Some consider NFC secure because of its very short range, which is measured in centimeters. Also, the NFC chips in most phones transition into standby mode when the phone is not in active use, which prevents casual communication and data leakage.

In addition to communicating with other active NFC enabled devices, NFC enabled phones can communicate with NFC Smart Tags. NFC Smart Tags are small passive circuits that can be printed and attached to flyers and promotional materials fairly cost-effectively. They have a bit of storage memory along with a radio chip attached to an antenna. They are passive in that they literally draw power from the sources that read them through magnetic induction.


Anyone can buy blank NFC tags and write custom data to them. That means if I scan or bump my phone against a smart tag on a movie poster, expecting a link to the trailer, it might actually forward me to a malicious site that automatically downloads malware to my mobile device. Those who create and distribute smart tags with legitimate information can help protect the content of those tags through encryption keys and locking mechanisms.

There are several well-known tools that can be used to conduct Bluetooth security tests and hacks. Some of these include BT Scanner, Bluesnarfer, Blue Diving, BT Crack and Blooover.

BT Scanner can capture Bluetooth info without pairing. Bluesnarfer can download the phonebooks of devices vulnerable to Bluesnarfing attacks. Blue Diving is a full Bluetooth penetration testing suite. BTCrack is a Bluetooth PIN and link-key cracker, and Blooover performs Bluebug attacks, which can compromise phonebook info, read and send SMS text messages and initiate phone calls.


Securing the Power Grid


SCADA (supervisory control and data acquisition) is a system for remote monitoring and control that operates with coded signals over communication channels (using typically one communication channel per remote station).







Terms like smart grid and cybersecurity are getting a lot of attention these days. At their intersection is a body you may not have heard much about: the North American Electric Reliability Corporation, or NERC.


NERC requires that major power operators in North America adhere to NERC’s CIP (Critical Infrastructure Protection) standards for compliance.

Some high level best practices involved with a power operator becoming CIP compliant can include but are not limited to:

1. Identify all connections to SCADA networks.

2. Disconnect unnecessary connections to the SCADA network.

3. Evaluate and strengthen the security of any remaining connections to the SCADA network.

4. Harden SCADA networks by disabling or removing unnecessary services.

5. Do not rely on proprietary protocols to protect your system.

6. Implement the security features provided by device and system vendors.

7. Establish strong controls over any medium that is used as a backdoor into the SCADA network.

8. Implement both internal and external intrusion detection systems and establish 24-hour-a-day incident monitoring.

9. Perform technical audits of SCADA devices and networks, and any other connected networks, to identify security concerns.

10. Conduct physical security surveys and assess all remote sites connected to the SCADA network.



















Domain 5 – Access Control and Identity Management


Access Control Models


There are several core Access Control Models that serve as standards for privilege management.


For our purposes we will discuss access controls within the context of file based access, although access to other resource types are equally valid. These various models take approaches that are either subject or object oriented in nature and differ in where the access control data is stored, how it is managed and how it is enforced. These models include: Access Control Matrices, Access Control Lists, Capabilities Models and RBAC models.


An Access Control Matrix is a subject and object matrix model. In implementation it is essentially one large table where subjects and resources intersect and access privileges are defined. Access Control Lists come in two flavors, Mandatory and Discretionary. Mandatory Access Control lists are label based in nature. Specifically, each piece of data (in this case each file) must be labeled with a classification.


Capabilities models are different in that they offer a subject based approach. Using the file access example rather than having permissions definitions located at the file level, they would be located as part of the subject or user record.


With RBAC (Role Based Access Controls), subjects can be mapped to roles and roles can be mapped to resources. This layer of abstraction greatly reduces the number of direct subject and object relationships and provides administrators with a mechanism to grant and delegate access in a more efficient manner. Role Based Access Controls are very commonly implemented in modern operating systems, applications and networks.
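The role-as-abstraction idea can be sketched in a few lines; the roles, permissions, and user names below are purely illustrative:

```python
# A minimal RBAC sketch: users map to roles, roles map to permissions.
ROLE_PERMS = {
    "auditor":  {"read:logs"},
    "operator": {"read:logs", "restart:service"},
    "admin":    {"read:logs", "restart:service", "manage:users"},
}
USER_ROLES = {
    "alice": {"admin"},
    "bob":   {"auditor", "operator"},
}

def is_authorized(user: str, permission: str) -> bool:
    """Grant access if ANY of the user's roles carries the permission.
    Note there are no direct user-to-object entries to maintain:
    granting a role grants its whole permission set at once."""
    return any(permission in ROLE_PERMS.get(role, set())
               for role in USER_ROLES.get(user, set()))
```

Changing what an "operator" may do now means editing one role entry instead of touching every user who holds it, which is exactly the administrative win the text describes.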





Trust and Security Models


Trust and Security Models are frameworks that help us to understand and incorporate best practices when designing and implementing secure systems.

There are many Security Models in existence but here we will focus on five key models:

p<>{color:#000;}. Bell-LaPadula model – A model that protects the confidentiality of the information within a system.

Simple security rule – A subject cannot read data at a higher security level (no read up).

Star property (*-property) rule – A subject cannot write data to an object at a lower security level (no write down).

Strong star property rule – A subject that has read and write capabilities can only perform those functions at the same security level.

p<>{color:#000;}. Biba model – A model that protects the integrity of the information within a system.

Simple integrity axiom – A subject cannot read data at a lower integrity level (no read down).

Star integrity (*-integrity) axiom – A subject cannot modify an object at a higher integrity level (no write up).

p<>{color:#000;}. Clark-Wilson model – An integrity model implemented to protect the integrity of data and to ensure that properly formatted transactions take place.

Subjects can only access objects through authorized programs (access triple).

Separation of duties is enforced.

Auditing is required.


p<>{color:#000;}. Noninterference model – Commands and activities performed at one security level should not be seen by or affect subjects or objects at a different security level.

p<>{color:#000;}. Brewer and Nash model – A model that allows for dynamically changing access controls that protect against conflicts of interest.
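The Bell-LaPadula rules above lend themselves to a compact check. This sketch encodes only the simple security rule and the *-property, against an assumed four-level classification scheme:

```python
# Assumed clearance/classification levels, lowest to highest.
LEVELS = {"public": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def bell_lapadula_allows(subject_level: str, object_level: str, op: str) -> bool:
    """Confidentiality checks: simple security rule (no read up)
    and the *-property (no write down)."""
    s, o = LEVELS[subject_level], LEVELS[object_level]
    if op == "read":
        return s >= o      # may only read at or below own level
    if op == "write":
        return s <= o      # may only write at or above own level
    raise ValueError("op must be 'read' or 'write'")
```

Flipping the two comparisons yields the Biba integrity rules (no read down, no write up), which is why the two models are often taught as mirror images.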


The basic types of CA trust hierarchies include:

Rooted trust model. In a rooted trust model, a CA is either a root or a subordinate, and you can use offline root CAs for the highest level of security.

Network (or cross-certification) trust model. In a network trust model, every CA is both a root and a subordinate.

Hybrid trust model. Hybrid trust models combine elements of both the rooted and network trust models.




In this section we will cover the Kerberos Authentication protocol, why it’s used and how it’s implemented to secure client server computing environments.

p<>{color:#000;}. Foundation for Microsoft’s Active Directory Services.

p<>{color:#000;}. Unix systems leverage Kerberos for client server authentication.

p<>{color:#000;}. Typically intranet-based and not used on the broader internet. Fundamentally a client-server technology.

The KDC stores all of the secret keys for user machines and services in its database.

Secret keys are derived by hashing the user’s password together with a salt.
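As a simplified sketch of that derivation: modern Kerberos AES enctypes run PBKDF2 over the password with a salt built from the realm and principal name. The version below omits Kerberos's final key-derivation (DK) step, so treat it as a teaching aid rather than an interoperable implementation; the realm and principal values are hypothetical:

```python
import hashlib

def string_to_key(password: str, salt: str, iterations: int = 4096) -> bytes:
    """Illustrative Kerberos-style string-to-key: PBKDF2-HMAC-SHA1 over
    the password, salted (conventionally) with realm + principal name.
    Real Kerberos applies a further DK step, omitted here for brevity."""
    return hashlib.pbkdf2_hmac("sha1", password.encode(), salt.encode(),
                               iterations, dklen=16)   # 128-bit key

# Same password, different salts -> different keys per realm/principal.
key = string_to_key("hunter2", "EXAMPLE.COMalice")
```

The salt is what keeps two users with the same password from ending up with the same secret key in the KDC database.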


NTLMv2 authentication – “Implementers should be aware that NTLM does not support any recent cryptographic methods, such as AES or SHA-256. It uses cyclic redundancy check (CRC) or message digest algorithms (RFC1321) for integrity, and it uses RC4 for encryption. Deriving a key from a password is as specified in RFC1320 and FIPS46-2. Therefore, applications are generally advised not to use NTLM.”


Kerberos is a symmetric-key-based key distribution system that relies on two kinds of keys: long-term master keys (derived from passwords) and short-term session keys issued for each client-server exchange.


Single-Sign On


With the advent of cloud and mobile computing, user experience and security are top of mind for many organizations.


First of all, we need to define what we mean by SSO. In its purest form, SSO stands for Single Sign-On, meaning that one login provides you with access to multiple resources without signing in again. There are, however, some variations on the concept. For instance, Same Sign-On (or Consistent Sign-On) is where a user enters the same username and password to access multiple applications or resources but is challenged to authenticate again when switching between applications.


There are numerous types of SSO technologies, and the scope of the solution typically defines the technologies involved. As computing has evolved, so have the associated mechanisms for authentication. In the distributed computing/client-server era, SSO primarily consisted of technologies such as Kerberos; for applications that were not Kerberos-capable, password synchronization was employed.


A session is an interaction with a system or application that has a defined lifetime, consisting of a beginning, potential continuance and an end. SSO systems typically utilize mechanisms such as cookies and session IDs to track the uniqueness of a session.
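Session identifiers must be unguessable, since a predictable ID would let an attacker hijack an SSO session. A sketch using Python's CSPRNG; the in-memory session store is illustrative only, not how a production SSO product persists sessions:

```python
import secrets

def new_session_id() -> str:
    """Generate an unguessable session identifier from a CSPRNG
    (256 bits of randomness, URL-safe for use in a cookie value)."""
    return secrets.token_urlsafe(32)

SESSIONS = {}   # illustrative in-memory store: session ID -> session data

def begin_session(user: str) -> str:
    """Start a session at login and return its identifier."""
    sid = new_session_id()
    SESSIONS[sid] = {"user": user}
    return sid

def end_session(sid: str) -> None:
    """Invalidate the session server-side on logout or timeout."""
    SESSIONS.pop(sid, None)
```

The important design point is that logout destroys the server-side record, so a stolen cookie stops working the moment the session ends.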


Outside of passwords there are numerous authentication mechanisms some of which include biometrics, smart cards and SMS one-time passwords.


Put simply the Time-based One-time Password Algorithm (TOTP) is an algorithm that computes a one-time password from a shared secret key and the current time.
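That computation can be shown concretely. The sketch below implements RFC 4226 HOTP and layers RFC 6238 TOTP on top of it; the shared secret used in the usage note is the RFC test secret, not something to deploy:

```python
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, for_time=None, step: int = 30) -> str:
    """RFC 6238 time-based OTP: HOTP over the current 30-second window."""
    t = int((time.time() if for_time is None else for_time) // step)
    return hotp(secret, t)
```

Because both sides derive the code from the shared secret and the same 30-second time window, the server can verify the password without it ever crossing the network in reusable form.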


While SSO largely focuses on authentication and session management, it does play a role in the authorization process. Generally, authorization provided by SSO frameworks is what is referred to as ‘coarse-grained authorization’. This includes authorization, for instance, to a given set of URLs, but not necessarily authorization down at the application function level.

There are externalized fine-grained authorization technologies in existence, including XACML (the eXtensible Access Control Markup Language) and OAuth.


Identity Federation


The purpose of this lesson is to define Identity Federation, identify why it’s needed and how it works.


p<>{color:#000;}. Why? – Companies needed a mechanism for SSO across trust boundaries

p<>{color:#000;}. A trust boundary is considered to be a realm of managed identities

p<>{color:#000;}. Keeps password management centralized to the Identity Provider.


In federation terminology, the IdP is the Identity Provider. This is the party that manages the end user's identity and password information and is authoritative for the authentication process for that user.


The SP is the Service Provider. The Service Provider serves up the application that the end user wants to access.


SAML stands for Security Assertion Markup Language. SAML is the protocol standard used to send and receive user assertions, using digital signatures to secure them. SAML is an XML-based standard that uses HTTP over TLS as its transport mechanism.



Identity Federation allows corporations to enable SSO across traditional trust boundaries. The Identity Provider remains responsible for management of the end user’s password and the service provider does not have to take on this responsibility. The alternative would be for the user to have a separate account provisioned that would require a separate password and represent another step in the de-provisioning process once it’s time to revoke the user’s access. It improves user experience, manageability and security.


Identity Governance


Identity governance encompasses all aspects of the identity and role management lifecycle. In this section we will learn all about identity and role lifecycle management best practices.


Identity governance encompasses traditional identity and access management, role management and entitlement management at the resource level. This includes activities such as initial user provisioning, role changes and de-provisioning of roles, resources and accounts. It also includes periodic review and attestation of access. Identity governance answers questions such as: 'Who had access to what? When did they have that access? Who granted them that access? Was that access approved? When was the last time their access was reviewed to identify whether they still needed it?'

These are all commonplace questions from auditors. The challenge is that all this information (if it exists at all) is potentially spread across multiple systems and needs to be correlated and verified. Many an organization has failed an audit because of a lack of identity governance capabilities. At best, much of an IT manager's and staff's time is consumed trying to answer these audit requests by way of painful fire drills that distract end users and IT staff from more productive activities.

Wouldn’t it be nice if there were a set of technologies that could help automate the process of periodic access review, provisioning, role lifecycle management and attestation reviews? Enter Identity Governance Software Suites. Large vendors such as Oracle and CA have full solution suites that work together to integrate and automate these processes.

So let’s look a bit deeper into Identity Lifecycle Management. In particular, here we will focus on account and resource provisioning, ongoing access changes and eventual account and resource de-provisioning.


So how is Privileged Identity Management different from standard user Identity Management? Well, the users we are talking about here have deep sets of sensitive access to systems and data across the organization. If these accounts are compromised in any way, it can truly spell catastrophe for a given organization.

Accounts we are talking about here include root accounts, domain administrators, database admins, application administrators and so on.

These solutions can employ account check-out and check-in systems that are temporal in nature. With this type of solution, if, let's say, root access is needed on a given server, the admin makes a request and the system issues a temporary password to the admin. The admin then logs into an intermediary admin system and accesses the target server from there.














Domain 6 – Cryptography




There are two types of encryption we will discuss in detail: symmetric and asymmetric. Fundamentally, symmetric key encryption uses the same key to encrypt and decrypt the data, while asymmetric encryption uses different keys to encrypt and decrypt information.


p<>{color:#000;}. Cipher – Algorithm used for encryption or decryption

p<>{color:#000;}. Block Cipher Definition/Description – Block ciphers encrypt relatively large, fixed-size blocks of data (e.g., 128 bits) and encode each block separately. The same key is used for every block, and this key is usually a symmetric key.


p<>{color:#000;}. Stream Cipher Definition/Description – Stream ciphers are also symmetric key based, but they differ in numerous ways from block ciphers. Stream ciphers operate on data one bit or byte at a time and typically execute at higher speeds with lower hardware requirements. Stream ciphers will typically use smaller keys.


p<>{color:#000;}. AES – (Advanced Encryption Standard) – Block cipher with a fixed 128-bit block size that supports 128, 192 and 256-bit key lengths.


p<>{color:#000;}. Blowfish – Symmetric key block cipher. No effective cryptanalysis of it has been identified publicly to date. 64 bit block size with a variable key length from 32 to 448 bits.


Examples of Stream Ciphers include:

p<>{color:#000;}. RC4 – (Rivest Cipher 4) Historically one of the most used ciphers worldwide. This cipher was a component of SSL and TLS for HTTP security and for years secured much of the transmission of private data across the web, though it is now considered broken and has been prohibited in TLS.


p<>{color:#000;}. HC-256 – A fast software stream cipher that uses a 256-bit key and a 256-bit initialization vector.


Examples of protocols and programs that use Block and Stream Ciphers:


p<>{color:#000;}. SSH – Block cipher encryption – Blowfish, 3DES, AES-256


p<>{color:#000;}. HTTP over TLS – historically used the stream cipher RC4 for symmetric key encryption.




Asymmetric encryption is also known as public key cryptography and is typically used to exchange symmetric keys. In practice, public key cryptography is often used only for key exchange and authentication.


PKI – Public Key Infrastructure leverages chains of trust using digital certificates. These digital certificates use public and private key pairs to establish chains of trust to core Certificate Authorities. These certificate authorities have their root certificates embedded into commonly used browsers such as Firefox, Chrome, Safari and IE.

Digital Certificates include digital signatures and public key information. Message authentication can take place by having the provider of the data (server) hash the message and sign it with a private key.


Advanced Cryptography


p<>{color:#000;}. Known-plaintext attacks are where the cryptanalyst has a block of plaintext and its matching ciphertext. The purpose of a known-plaintext attack is to determine the cryptographic key (and potentially the algorithm), which could then be used to decrypt subsequent messages.


p<>{color:#000;}. A Chosen-plaintext attack is where the cryptanalyst can obtain the ciphertexts for arbitrary plaintexts of their choosing. The purpose of this type of attack is to reduce the security of the encryption scheme.


p<>{color:#000;}. A Chosen-ciphertext attack is when the cryptanalyst gathers information by choosing a ciphertext and obtaining its decryption under an unknown key. The cryptanalyst can enter one or more known ciphertexts into the system and obtain the resulting plaintexts.


p<>{color:#000;}. Substitution Ciphers, as the name suggests, are where the cryptographer substitutes letters or numbers with other letters or numbers. The simplest and earliest example of a Substitution Cipher is the 'Caesar Cipher'.



p<>{color:#000;}. One-Time Pads are also known as 'Vernam Ciphers'. Plaintext is combined with a truly random key that is as long as the message and used only once; this is the only existing mathematically unbreakable type of encryption.


p<>{color:#000;}. PRNG-generated sequences are not truly random, because each sequence is completely determined by a relatively small set of initial values referred to as the PRNG seed.
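Two of the concepts above, the Caesar Cipher and the one-time pad, are simple enough to sketch in a few lines of Python (illustration only, not production crypto):

```python
import secrets

def caesar(text, shift):
    """Caesar Cipher: shift each letter by `shift` positions; keep other chars."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

def otp_encrypt(plaintext):
    """One-time pad: XOR with a truly random key as long as the message."""
    key = secrets.token_bytes(len(plaintext))
    return key, bytes(p ^ k for p, k in zip(plaintext, key))

print(caesar("Attack at dawn", 3))        # -> "Dwwdfn dw gdzq"
key, ct = otp_encrypt(b"TOP SECRET")
print(bytes(c ^ k for c, k in zip(ct, key)))  # XOR again with the key decrypts
```

Shifting back by the same amount (`caesar(ciphertext, -3)`) recovers the plaintext, which is exactly why a 25-way brute force breaks the Caesar Cipher instantly, while the one-time pad stays unbreakable only as long as the key is random, as long as the message, and never reused.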






p<>{color:#000;}. AES can operate in one of several Block Cipher modes which include:

p<>{color:#000;}. Electronic Codebook Mode (ECB Mode)

p<>{color:#000;}. Cipher-Block Chaining Mode (CBC Mode)

p<>{color:#000;}. Cipher-Feedback Mode (CFB Mode)

p<>{color:#000;}. Output Feedback Mode (OFB Mode)

p<>{color:#000;}. Counter Mode (CTR Mode)
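Why do these modes matter? The sketch below uses a hypothetical toy "block cipher" (a plain XOR with the key, purely for illustration) to show the key difference between ECB and CBC: ECB encrypts identical plaintext blocks to identical ciphertext blocks, leaking patterns, while CBC chains each block to the previous one:

```python
import secrets

BLOCK = 8  # toy 8-byte block size; real AES blocks are 16 bytes

def toy_block_encrypt(block, key):
    """Stand-in for a real block cipher (XOR with the key) -- illustration only."""
    return bytes(b ^ k for b, k in zip(block, key))

def ecb_encrypt(data, key):
    # Each block is encrypted independently under the same key.
    return b"".join(toy_block_encrypt(data[i:i + BLOCK], key)
                    for i in range(0, len(data), BLOCK))

def cbc_encrypt(data, key, iv):
    # Each plaintext block is XORed with the previous ciphertext block first.
    prev, out = iv, b""
    for i in range(0, len(data), BLOCK):
        mixed = bytes(b ^ p for b, p in zip(data[i:i + BLOCK], prev))
        prev = toy_block_encrypt(mixed, key)
        out += prev
    return out

key = secrets.token_bytes(BLOCK)
iv = secrets.token_bytes(BLOCK)
data = b"SAMEDATA" * 2                  # two identical plaintext blocks

ecb = ecb_encrypt(data, key)
cbc = cbc_encrypt(data, key, iv)
print(ecb[:BLOCK] == ecb[BLOCK:])       # True  -- ECB leaks the repetition
print(cbc[:BLOCK] == cbc[BLOCK:])       # False -- CBC hides it
```

This pattern-leaking property is why ECB mode is generally avoided in practice in favor of chained or counter-based modes.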



p<>{color:#000;}. Birthday attacks exploit the statistical anomaly behind the 'birthday problem' in probability theory. This attack can be used to abuse communication between two or more parties.
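The birthday problem itself is easy to compute directly. This sketch multiplies out the probability that all samples are distinct, then takes the complement to get the collision probability; with only 23 people, the odds of a shared birthday already exceed 50%:

```python
def collision_probability(n, space):
    """P(at least one collision) among n samples drawn uniformly from `space` values."""
    p_unique = 1.0
    for i in range(n):
        p_unique *= (space - i) / space   # i-th sample must miss all earlier ones
    return 1.0 - p_unique

print(round(collision_probability(23, 365), 3))   # -> 0.507
```

The same mathematics is why an n-bit hash offers only about n/2 bits of collision resistance: collisions become likely after roughly 2^(n/2) samples, far sooner than intuition suggests.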


Message Authentication Codes


Message Authentication Codes and their role in securing network communications

Hashing Algorithms used and Resisting MAC attacks


A MAC, or 'message authentication code', is a small piece of information used to authenticate a message (not the sender of the message). It also verifies the integrity of a message, meaning it provides a mechanism to ensure that a message hasn't been altered in any way between the time it was hashed and the time the hash was verified. MACs are generated and verified using a shared secret symmetric key, while digital signatures, by comparison, are generated and verified using public/private key pairs.
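The standard construction here is HMAC, available directly in Python's standard library. The key and messages below are made up for illustration:

```python
import hashlib
import hmac

key = b"shared-secret-key"              # both parties hold this symmetric key
message = b"transfer $100 to alice"

# Sender computes the MAC and sends it alongside the message.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Receiver recomputes the MAC over what it received and compares in constant time.
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, expected))   # True -> message is authentic and intact

# A tampered message produces a different MAC, so verification fails.
tampered = hmac.new(key, b"transfer $9999 to mallory", hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, tampered))   # False
```

`hmac.compare_digest` is used instead of `==` to avoid timing side channels during verification.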


Hashing algorithms are one-way mathematical functions: we input a set of values (in our case, data) into a hashing algorithm, a unique fixed-length value is output, and that value cannot be used to derive the source.
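The one-way and avalanche properties are easy to see with `hashlib`: any input yields a fixed-length digest, and changing even one character changes the digest completely:

```python
import hashlib

digest = hashlib.sha256(b"hello world").hexdigest()
print(digest)   # 64 hex chars = 256-bit digest, same length for any input

# A one-character change in the input produces an entirely different digest.
print(hashlib.sha256(b"hello worle").hexdigest())
```

There is no inverse function that turns the digest back into `"hello world"`; attackers can only guess inputs and compare digests, which is exactly the cracking model described later in this domain.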


Hashes have many practical applications in computing:

p<>{color:#000;}. Hashes are used in digital signatures in combination with public key cryptography to facilitate technologies such as TLS and IPsec.

p<>{color:#000;}. Hashes are also used in many instances to store password data. The reasoning is that if the data field containing the password (in this case a hash of the password) is exposed, the password itself isn't directly compromised. Many directory services, such as LDAP-based directories and Active Directory, employ hashing for password fields.

p<>{color:#000;}. Hashes are also used for message authentication via HMAC which we will discuss in more detail shortly.



SHA-512 is a Secure Hash Algorithm of the SHA-2 family. It produces a 512-bit digest, was designed by the NSA, and is computed with 64-bit words.





Cryptographic Algorithms


Elliptic curve cryptography (ECC) is an approach to public-key cryptography based on the algebraic structure of elliptic curves over finite fields. ECC requires smaller keys compared to non-ECC cryptography (based on plain Galois fields) to provide equivalent security.


Elliptic curves are applicable for encryption, digital signatures, pseudo-random generators and other tasks. They are also used in several integer factorization algorithms that have applications in cryptography, such as Lenstra elliptic curve factorization.


Quantum cryptography is the science of exploiting quantum mechanical properties to perform cryptographic tasks. The best-known example of quantum cryptography is quantum key distribution, which offers an information-theoretically secure solution to the key exchange problem. Popular public-key encryption and signature schemes currently in use (e.g., RSA and ElGamal) could be broken by quantum adversaries. The advantage of quantum cryptography lies in the fact that it allows the completion of various cryptographic tasks that are proven or conjectured to be impossible using only classical (i.e., non-quantum) communication. For example, it is impossible to copy data encoded in a quantum state, and the very act of reading data encoded in a quantum state changes the state. This is used to detect eavesdropping in quantum key distribution.


p<>{color:#000;}. RIPEMD was based upon the design principles used in MD4, and is similar in performance to the more popular SHA-1.


p<>{color:#000;}. Blowfish is a symmetric-key block cipher


p<>{color:#000;}. Twofish is a symmetric key block cipher with a block size of 128 bits and key sizes up to 256 bits.


p<>{color:#000;}. Elliptic curve Diffie–Hellman (ECDH) is an anonymous key agreement protocol that allows two parties, each having an elliptic curve public–private key pair, to establish a shared secret over an insecure channel.


p<>{color:#000;}. PAP – The Password Authentication Protocol (PAP) authenticates using a simple username/password exchange; the password is transmitted in cleartext, so PAP is considered weak.



p<>{color:#000;}. Challenge-Handshake Authentication Protocol (CHAP) authenticates a user or network host to an authenticating entity via a challenge/response handshake, so the password itself is never sent over the wire.




A secure communication protocol is said to have forward secrecy if compromise of its long-term keys does not compromise past session keys.


p<>{color:#000;}. Salting – hashes are great for providing fingerprints of blocks of data such as passwords, but given enough time and resources they can be cracked. Salting adds a unique random value to each password before hashing, which randomizes the resulting hash and defeats precomputed tables.


p<>{color:#000;}. Dictionary cracks – the simplest way to crack a hash is to guess the password, hash each guess and check whether the guess's hash equals the hash being cracked. If the hashes are equal, the guess is the password. The two most common ways of guessing passwords are dictionary attacks and brute-force attacks.


p<>{color:#000;}. A brute-force attack attempts all combinations of characters up to a certain length. This is computationally expensive and generally the least efficient approach in terms of hashes cracked per unit of processor time, but it will always eventually find the password.


p<>{color:#000;}. Lookup tables are an effective method for cracking many hashes of the same type very quickly. Lookup tables contain pre-computations of the hashes of the passwords in a password dictionary and store them with their corresponding password.


p<>{color:#000;}. Reverse Lookup Tables – allow an attacker to apply a dictionary or brute-force attack to many hashes at the same time, without having to pre-compute a lookup table.


p<>{color:#000;}. Rainbow tables are a time and memory trade-off technique. They are similar to lookup tables, except that they sacrifice hash cracking speed to make the lookup tables smaller.
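Salting plus a deliberately slow hash defeats all of the precomputation attacks above. A minimal sketch using the standard library's PBKDF2 (function names and the iteration count are illustrative choices):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Hash a password with a unique random salt and 100,000 PBKDF2 iterations."""
    salt = salt or os.urandom(16)    # fresh salt per password defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)   # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

Because each user gets a different salt, two users with the same password store different digests, and an attacker must attack each hash individually rather than once per dictionary.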



Public Key Infrastructure


Public Key Cryptography uses Private and Public Key Pairs

Neither key can be derived from the other. Keep the private key secret. The public key can be shared publicly.


Certificate Authorities are trusted third parties

Applicants register with trusted third party through CSR (Certificate Signing Request)

Examples of an applicant might be the owner of a particular website domain such as example.com.


CSR – the applicant generates a public/private key pair and submits the public key to the root or 'signing' CA for certification.



p<>{color:#000;}. Public key encryption of the symmetric key exchange enables message confidentiality.

p<>{color:#000;}. Public key encryption does not perform well; it is resource-intensive and slow.

p<>{color:#000;}. Symmetric key encryption is fast. Public keys are generally used to encrypt the symmetric session keys, which then encrypt the bulk data.


Digital Signatures use a sender's private key to sign a message, typically by hashing the plaintext of the message and encrypting that hash with the private key.

The receiver verifies the digital signature by computing a message digest (hash) of the plaintext and comparing it with the signature after the signature is decrypted using the sender's public key.


RSA consists of public/private key pairs and is a backbone of Internet security, used in HTTP/TLS, SSH, OpenPGP, S/MIME and digital signatures. RSA relies on the computational difficulty of factoring large integers that are the product of two prime numbers.
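The sign/verify flow described above can be sketched with textbook RSA using deliberately tiny primes. This is a teaching toy only: real RSA keys are 2048 bits or larger and use padding schemes such as PSS:

```python
import hashlib

# Toy key generation with tiny primes (never use sizes like this in practice).
p, q = 61, 53
n = p * q                  # modulus (3233)
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent (modular inverse of e)

def sign(message):
    """Hash the message, then 'encrypt' the hash with the private key."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message, signature):
    """Decrypt the signature with the public key and compare to a fresh hash."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

sig = sign(b"pay bob 5 BTC")
print(verify(b"pay bob 5 BTC", sig))       # True: signature matches the message
print(verify(b"pay mallory 5 BTC", sig))   # altering the message breaks verification
```

Note that signing uses the private key and verification uses the public key, the reverse of the confidentiality use case, which is why anyone can verify but only the key holder can sign.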


Diffie-Hellman – Not a type of mayonnaise. Used for negotiated key exchange.

Diffie-Hellman does not verify the identity of the sender or receiver; it just allows the two parties to negotiate a trusted key exchange. The key is negotiated jointly by the sender and receiver on the fly.
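The whole exchange fits in a few lines of modular arithmetic. This sketch uses a small Mersenne prime for readability; real deployments use vetted 2048-bit groups or elliptic-curve variants (ECDH):

```python
import secrets

p = 2**127 - 1   # a prime modulus (illustrative; real DH uses vetted 2048-bit groups)
g = 3            # public generator

# Each party picks a private exponent and publishes only g^x mod p.
a = secrets.randbelow(p - 2) + 1   # Alice's private key
b = secrets.randbelow(p - 2) + 1   # Bob's private key
A = pow(g, a, p)                   # Alice's public value, sent to Bob
B = pow(g, b, p)                   # Bob's public value, sent to Alice

# Each side combines its own private key with the other's public value.
alice_secret = pow(B, a, p)        # (g^b)^a mod p
bob_secret = pow(A, b, p)          # (g^a)^b mod p
print(alice_secret == bob_secret)  # True: both derive the same shared key
```

An eavesdropper sees `p`, `g`, `A` and `B` but not `a` or `b`, and recovering them requires solving the discrete logarithm problem. Because neither side proves its identity, the exchange must be paired with authentication (e.g., certificates) to stop man-in-the-middle attacks.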


PKI Management

In cryptography, X.509 is an ITU-T standard for a public key infrastructure (PKI) and Privilege Management Infrastructure (PMI). X.509 specifies, amongst other things, standard formats for public key certificates, certificate revocation lists, attribute certificates, and a certification path validation algorithm.


Heartbleed is a security bug disclosed in April 2014 in the OpenSSL cryptography library, which is a widely used implementation of the Transport Layer Security (TLS) protocol.

















Twitter: @webofsecurity





Security+ Boot Camp Study Guide

There is a shortfall of over one million information security positions in the global marketplace today. Professionals are in demand and in short supply. The Security+ exam is your ticket to ride in the red-hot cyber security industry. Learn about penetration testing, vulnerability scanning and incident response. Security professionals hold many titles, including forensic technician, security analyst, security engineer, CISO (Chief Information Security Officer) and Red/Blue Team specialist. The average compensation for security analysts is close to six figures. Stopping the bad guys and getting paid well to do it makes for a satisfying career choice. Get certified and get paid!

  • ISBN: 9781370078547
  • Author: Chad Russell
  • Published: 2016-12-21 19:50:17
  • Words: 16788
Security+ Boot Camp Study Guide