CISSP Domain 2 Study Guide

CISSP Domain 2

CISSP Domain 2 covers Asset Security and makes up approximately 10% of the questions on the exam. This domain includes the following topics:

  • Data Security and Management Concepts
  • Identify and Categorize Assets and Information
  • Data Lifecycle
  • Privacy
  • Data Retention
  • Data States
  • Security Controls
  • Asset Handling Requirements
  • Data Remanence

What is an asset? 

An asset can be any of the following:

  • Private and Personal Data (such as PII)
  • Software
  • IT components 
  • Intellectual property
  • Brand
  • Reputation
  • Real estate/facilities

An asset inventory helps organizations identify, locate, and classify their assets. The components of an asset inventory might include the following:

  • Asset name
  • Asset location
  • Asset value
  • Asset owner
  • Asset classification
  • Annual cost of maintenance
  • Projected lifespan of the asset
  • Protection level required
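
Below is a minimal sketch of how an inventory record covering these components might be modeled, assuming a simple in-memory Python model; the field names and sample values are illustrative only.

```python
# A minimal sketch of an asset-inventory record; field names mirror the
# components listed above, and the sample values are purely illustrative.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str                  # Asset name
    location: str              # Asset location
    value: float               # Asset value (e.g., replacement cost)
    owner: str                 # Asset owner (the accountable party)
    classification: str        # Asset classification
    annual_maintenance: float  # Annual cost of maintenance
    lifespan_years: int        # Projected lifespan of the asset
    protection_level: str      # Protection level required

inventory = [
    Asset("hr-db-01", "DC-East", 250_000.0, "HR Director",
          "confidential", 12_000.0, 5, "high"),
]
```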

The asset classification process consists of five steps:

  1. Create an Asset Inventory
  2. Assign Ownership
  3. Classify (Based on Value)
  4. Protect (Based on Classification)
  5. Assess and Review

There is an asset protection process that is similar but consists of three simpler steps:

  1. Identify, Locate, and Value
  2. Classify (based on value)
  3. Protect (based on classification)

You can remember this as three letters: VCP = Value, Classify, Protect. Notice how each step relies on the prior step.
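
To make the dependency between steps concrete, here is a minimal sketch of the VCP flow in Python; the value thresholds and baseline controls are invented for illustration, not taken from any standard.

```python
# A minimal sketch of Value -> Classify -> Protect; each step consumes the
# previous step's output. Thresholds and baselines here are illustrative.
def classify(value: float) -> str:
    """Step 2: classification is derived from the value found in step 1."""
    if value >= 100_000:
        return "confidential"
    if value >= 10_000:
        return "private"
    return "public"

def protect(classification: str) -> list[str]:
    """Step 3: the protection baseline is derived from the classification."""
    baselines = {
        "confidential": ["encrypt at rest", "MFA", "audit logging"],
        "private": ["access control", "backups"],
        "public": ["integrity checks"],
    }
    return baselines[classification]

print(protect(classify(150_000)))  # ['encrypt at rest', 'MFA', 'audit logging']
```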

Step 1 is self-explanatory; step 2 is where ownership is determined. Ownership is an important concept, as the owner is in the best position to understand the value of the asset. The owner’s responsibility is to classify the assets they own. The owner is also ultimately accountable for the asset’s protection, so accountability becomes an important concept for the exam. The owner must therefore have adequate knowledge about the asset, including regulations, business expectations, and customer expectations, and must use consistent classification methods.

Labeling is the method of indicating the classification level of an asset. Examples:

  • Stamping a physical device or document with “top secret” 
  • Renaming a file with “sensitive” in the filename, or certain server types with “federal” in the nomenclature
  • Putting “confidential” in the subject line of an email 
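
As a small illustration of the second example (a label embedded in the filename), here is a sketch of a Python helper that renames a file to carry its classification label; the function name and naming convention are hypothetical, not a mandated standard.

```python
# A minimal sketch of filename-based labeling; the "label_filename"
# convention here is illustrative only.
from pathlib import Path

def label_file(path: Path, label: str) -> Path:
    """Rename a file so its classification label appears in the filename."""
    labeled = path.with_name(f"{label}_{path.name}")
    path.rename(labeled)
    return labeled

# Usage: label_file(Path("q3_forecast.xlsx"), "sensitive")
# -> the file is renamed to sensitive_q3_forecast.xlsx
```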

Step 3 is where a Baseline, or Minimum Security Requirements are established for each classification.

Types of data classifications

In the U.S., the two most widespread classification schemes are A) the government/military classification and B) the private-sector classification.

  • Top Secret — The highest level in this classification scheme. The unauthorized disclosure of such information can be expected to cause exceptionally grave damage to national security.
  • Secret — Very restricted information. The unauthorized disclosure of such data can be expected to cause serious damage to national security.
  • Confidential — A category that encompasses sensitive, private, proprietary, and highly valuable data. The unauthorized disclosure of such data can be expected to cause damage to national security.

These three levels of data are collectively known as ‘Classified’ data.

  • Unclassified — The lowest level in this classification scheme. This data is neither sensitive nor classified, and hence it is available to anyone through procedures identified in the Freedom of Information Act (FOIA).

The private sector classification scheme is the one on which the CISSP exam is focused.

  • Confidential — The highest level in this classification scheme, reserved for extremely sensitive internal data. “Confidential” data necessitates the utmost care, as it is intended for use by a limited group of people, such as a department or workgroup, with a legitimate need to know. An organization may suffer considerable damage if such data is divulged. Proprietary data, among other types, falls into this category.
  • Private — Data for internal use only whose significance is great and whose disclosure may lead to a significant negative impact on an organization. All data and information processed inside an organization is to be handled by employees only and should not fall into the hands of outsiders.
  • Sensitive — A classification label applied to data that requires more protection than public data. Negative consequences may ensue if such data is disclosed.
  • Public — The lowest level of classification; its disclosure will not cause serious negative consequences for the organization.
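
Since these levels form a strict ordering, one way to internalize the scheme is to model it as an ordered enum; the `requires_encryption` policy below is an invented example, not part of the scheme itself.

```python
# A minimal sketch of the private-sector scheme as an ordered enum, so
# levels can be compared when deciding handling requirements.
from enum import IntEnum

class Classification(IntEnum):
    PUBLIC = 0
    SENSITIVE = 1
    PRIVATE = 2
    CONFIDENTIAL = 3  # the highest level in this scheme

def requires_encryption(level: Classification) -> bool:
    # Assumed policy, purely for illustration: private and above get encrypted.
    return level >= Classification.PRIVATE

print(requires_encryption(Classification.SENSITIVE))  # False
```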

Types of sensitive data

Sensitive data is any information that isn’t public or unclassified; it can be confidential, proprietary, protected, or otherwise restricted data. One should also learn these types of sensitive data:

Personally Identifiable Information (PII)

As the name suggests, this is information that can identify an individual. According to a definition by the National Institute of Standards and Technology (NIST), PII is information about an individual maintained by an agency, including:

  1. any information that can be used to distinguish or trace an individual’s identity, such as name, date of birth, biometric records, or Social Security number; and
  2. any other information that is linked or linkable to an individual, such as medical, financial, employment, and educational information.

Organizations are obliged to protect PII, and there are many laws that impose requirements on companies to notify individuals whose data is compromised due to a data breach.

Protected Health Information (PHI)

PHI is any information about a health condition that can be linked to a specific person. It is a common misconception that only medical care providers, such as hospitals and doctors, are required to protect PHI. In fact, most employers collect PHI to provide or supplement healthcare policies. Thus, HIPAA compliance applies to the majority of organizations in the United States.

Proprietary information

Proprietary information is a very valuable company asset because it represents a product that is a mixture of hard work, internal dealings, and organizational know-how. This information is often confidential, and it can be within the following range of creations: software programs, source and object code, copyright materials, engineering drawings, designs, inventions (whether or not patent protected), algorithms, formulas, schemes, flowcharts, processes of manufacturing, marketing, trade secrets, pricing and financial data, etc.

If competitors manage to work their way to your proprietary information, the consequences may be grievous: you may lose your competitive edge. The defensive mechanisms related to copyright, patents, and trade secrets are, per se, insufficient to ensure the required level of protection for proprietary data. Unfortunately, many foreign entities resort to unfair practices, such as stealing proprietary data from their international business rivals. Beware also of disgruntled (former) employees.

Asset lifecycle concepts have changed a bit in the 2021 common body of knowledge (CBK). The general asset/data lifecycle still applies below:

  • Identify/classify – this is where the information is created or collected, and both value and ownership are determined here.
  • Secure – the information is now secured based on its value/classification, typically articulated as baselines.
  • Monitor – the value of the asset should be monitored for changes, as this will have an impact on protection levels that are applied.
  • Recover – as asset values change, you’ll need the ability to recover from those changes; this typically means backups, redundancy, and restoration activities.
  • Dispose – disposal can happen in two ways:
    • Archive – long-term storage; retention periods apply, and the owner determines them.
    • Defensible Destruction – eliminating and destroying the asset in a controlled, compliant, and legal manner. Entities should have policies for this.

Scoping involves removing baseline security controls that are not applicable, such as removing privacy controls where private data is nonexistent.

Tailoring involves modifying the baseline to make it more applicable, such as changing the application timeout requirement from 10 minutes of inactivity to five.

Supplementation involves adding platform-specific or environment-specific details to your controls, such as replacing the term “operating system” with “Windows”.
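
A minimal sketch of all three adjustments applied to a toy baseline follows; the control names and values are invented for illustration and do not come from any particular framework.

```python
# A toy baseline adjusted by scoping, tailoring, and supplementation.
# Control names and values are illustrative only.
baseline = {
    "privacy-controls": True,
    "session-timeout-minutes": 10,
    "os-hardening": "operating system",
}

# Scoping: remove a control that does not apply (no private data here).
del baseline["privacy-controls"]

# Tailoring: modify a control to fit the environment (10 minutes -> 5).
baseline["session-timeout-minutes"] = 5

# Supplementation: add platform-specific detail ("operating system" -> Windows).
baseline["os-hardening"] = "Windows"

print(baseline)  # {'session-timeout-minutes': 5, 'os-hardening': 'Windows'}
```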

Data Security Concepts

Data policy should be part of the overall risk management program.

Data governance oversees the development of common data definitions, standards, requirements, and processes.

Data quality reflects the integrity and reliability of data. Quality control and assurance can ensure good data quality. Improving data quality aims to reduce errors of commission (mistakes, or inaccurate transcription) and errors of omission (something that was left out).

Data documentation allows longevity and reuse of data. Through documentation, users can understand the content, context, and limits of data. It also facilitates the exchange of data and enables easier discovery and interoperability. Data documentation can take the form of metadata, readme files, or file contents (file names, header areas, etc.).

Data Lifecycle

It is important to understand the six stages that data goes through during its lifecycle:

  1. Create – refers to the creation or collection of the data. This might also be where we classify and value the data; reading between the lines, this could be the step where we assign security requirements without implementing them just yet.
  2. Store – where to put the data as it is created/collected. This could be where we apply the protection levels (note: applying protections is different from “assigning” them). ISC2 says that the storage step is often done at the same time as the creation step.
  3. Use – processing of the data; using it internally. It is typically unencrypted while “in process”.
  4. Share – sending the data outside to third parties; this includes selling, publishing, data exchange agreements, etc. The common body of knowledge talks about having a digital rights management solution in place to control the flow of data, and a data loss prevention solution in place to detect information leakage.
  5. Archive – long-term storage, when the data leaves active use. This is where the age of technology comes into play, along with EOL and EOS, which need to be considered in terms of the data’s availability. As always, protection levels at this phase depend on classification.
  6. Destroy – permanent destruction of the data. The method of disposal depends on the data’s classification.
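
As a memory aid, the six stages can be sketched as an ordered sequence; this assumes a simple forward-only flow, which is a simplification (real data may, for example, skip the Share stage).

```python
# A minimal sketch of the six stages as a forward-only sequence.
STAGES = ["create", "store", "use", "share", "archive", "destroy"]

def next_stage(current: str) -> str | None:
    """Return the stage that follows `current`, or None after destruction."""
    i = STAGES.index(current)
    return STAGES[i + 1] if i + 1 < len(STAGES) else None

print(next_stage("use"))      # share
print(next_stage("destroy"))  # None
```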

Data ownership

Information must move through its entire lifecycle successfully. The entities that make the lifecycle successful include the data owner, data custodian, system owner, security administrator, supervisor, and user. Each has a unique role in protecting the organization’s assets.

  • The data owner is a manager who ensures data protection and determines the classification level. It is the person responsible and accountable for a particular set of data as well as a stakeholder in the collection, quality and accessibility of information.
  • The system owner controls the working of the computer that stores the data. This involves the software and hardware configurations as well as supporting services such as cloud services. This professional is responsible for the operation and maintenance of systems, their updating and patching, and related procurement activities.
  • The data custodian is responsible for the protection of data through maintenance activities, backing up and archiving, preventing the loss or corruption and recovering data. 
  • The security administrator is responsible for ensuring the overall security of the entire infrastructure. These professionals perform tasks that lead to the discovery of vulnerabilities, monitor the network traffic and configure tools to protect the network (like firewalls and antivirus software). They also devise security policies, plans for business continuity and disaster recovery and train staff.
  • Supervisors are responsible for overseeing the activities of all the entities above and all support personnel. They ensure the team’s activities are conducted smoothly and that personnel are properly skilled for the tasks assigned.
  • Users must comply with rules, mandatory policies, standards, and procedures. For instance, a user should not share their account or other confidential information with colleagues. Users have access to data according to their roles and their need to know certain information.

FIPS 199 helps organizations categorize their information systems. The criteria used to classify data are below:

  • Usefulness
  • Timeliness
  • Value
  • Lifetime
  • Disclosure Damage Assessment
  • Modification Damage Assessment
  • Security Implications (of use on a broad scale)
  • Storage
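
FIPS 199 rates impact separately for confidentiality, integrity, and availability (low/moderate/high), and FIPS 200 applies a “high-water mark” that takes the highest of the three as the overall system category. Here is a minimal sketch of that idea; the function name is an invention for illustration.

```python
# A minimal sketch of FIPS 199 categorization with the FIPS 200
# high-water mark: the overall category is the highest CIA impact rating.
IMPACT = {"low": 1, "moderate": 2, "high": 3}

def overall_category(confidentiality: str, integrity: str, availability: str) -> str:
    ratings = [confidentiality, integrity, availability]
    return max(ratings, key=lambda r: IMPACT[r])

print(overall_category("low", "moderate", "high"))  # high
```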

Security Testing and Evaluation

FISMA requires every government agency to undergo Security Testing and Evaluation, a process that covers three categories of controls:

  1. Management Controls focus on business process administration and risk management.
  2. Operational Controls focus on the processes that keep the business running.
  3. Technical Controls focus on processes or configuration on systems.

Clearance

Clearance determines who has access to what. If a subject needs access to something they are not cleared for, a formal access approval process must be followed. Furthermore, the subject must have a need to know.

Government

  1. Top secret
  2. Secret
  3. Confidential
  4. Sensitive (SBU, limited distribution)
  5. Unclassified

Private

  1. Confidential
  2. Private
  3. Sensitive
  4. Public

Data Ownership

  • Data Owners – usually management or senior management. They approve access to data.
  • Data Processors – those who read and edit the data regularly. Must clearly understand their responsibility with the data.
  • Data Remanence – data that remains recoverable after deletion. Here’s how to make the data unrecoverable (see the overwrite sketch after this list):
    • Secure deletion by overwriting the data with 1s and 0s.
    • Degaussing – removes or reduces magnetic fields on disk drives.
    • Destroying the media, by shredding, smashing, and other means.
  • Collection Limitation – important security control that’s often overlooked. Don’t collect data you don’t need. Create a Privacy Policy that specifies what data is collected and how it’s used.
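
Here is the promised sketch of secure deletion by overwriting, assuming a regular file on a traditional magnetic disk; note that wear-leveled SSDs remap writes, so overwriting alone is not reliable there.

```python
# A minimal sketch of single-pass overwrite-then-delete. Assumes a regular
# file on a traditional disk; SSD wear leveling can defeat this approach.
import os
from pathlib import Path

def overwrite_and_delete(path: Path) -> None:
    size = path.stat().st_size
    with open(path, "r+b") as f:
        f.write(os.urandom(size))  # overwrite contents with random bytes
        f.flush()
        os.fsync(f.fileno())       # force the overwrite down to the device
    path.unlink()                  # only then remove the directory entry

# Usage: overwrite_and_delete(Path("old_customer_list.csv"))
```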

Security Controls

For identifying security controls to implement for data in your organization, you can refer to well-known frameworks: ISO 27001, ISO 27002, NIST SP 800-53, CSIS 20 Critical Security Controls, COBIT, COSO, FISMA, FedRAMP (For Cloud Service Providers), DoD Instruction 8510.01.

When implementing the above frameworks, there might be some controls that are not applicable to the context of your organization. This is why you should consider scoping, tailoring, and supplementation.

  • Scoping: Choose only the security controls that are applicable.
  • Tailoring: Modify the applicable controls to meet specific needs.
  • Supplementation: Add further security controls when they are needed.

It is important to document scoping and tailoring decisions, and their justification.

There are 3 types of controls:

  • Technical: Using computer capabilities and automation to implement safeguards.
  • Administrative: Policies, procedures, standards, guidelines, etc.
  • Physical: CCTV, intrusion detection, security guards, etc.

Controls can be deterrent, preventative, detective, corrective, compensating or recovery. Controls can also be common, system-specific or hybrid.

It is important to establish a security control baseline. Examples of standards and references that can help you with this include the Cisco Validated Design Program, the Microsoft Security Compliance Toolkit 1.0, and the CIS Benchmarks.

Asset Handling Requirements

Asset Handling should also cover access, transfer and storage of sensitive data.

Good practices in asset and data handling include:

  • Marking and Labeling
  • De-identification
  • Obfuscation
  • Data Tokenization
  • Anonymization
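
As an example of the tokenization practice above, here is a minimal sketch: the sensitive value is replaced by a random token, and the token-to-value mapping lives in a separate vault that must itself be protected. The `vault` structure and function names are hypothetical.

```python
# A minimal sketch of data tokenization; the in-memory "vault" stands in
# for a real, separately secured token vault.
import secrets

vault: dict[str, str] = {}  # token -> original value (must be protected)

def tokenize(sensitive_value: str) -> str:
    token = secrets.token_hex(16)   # random token, reveals nothing by itself
    vault[token] = sensitive_value
    return token

def detokenize(token: str) -> str:
    return vault[token]

t = tokenize("4111-1111-1111-1111")
print(t)              # safe to store or pass to lower-trust systems
print(detokenize(t))  # original value, recoverable only via the vault
```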

Data Remanence

Data remanence occurs when data destruction efforts prove insufficient and some remnants of the data are still recoverable.

There are 3 techniques for media sanitization:

  • Clearing: Media is formatted or overwritten once. This is the least effective method; data can still be recovered if you rely on this technique alone.
  • Purging: Overwriting multiple times, degaussing (only for magnetic media like HDDs or tapes), crypto-shredding (sketched below), etc. After these techniques, data recovery is considered infeasible.
  • Destruction: The most effective method, but the media is entirely destroyed. Examples include incineration and disk shredding. Note, however, that drilling a hole in the media is not a good way to destroy it; data can still be recovered.
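
Of the purging techniques above, crypto-shredding is easy to illustrate: if data is stored only in encrypted form, destroying the key sanitizes every copy of the ciphertext at once. A minimal sketch using the `cryptography` package (`pip install cryptography`):

```python
# A minimal sketch of crypto-shredding: encrypt the data, then destroy
# the key so the remaining ciphertext is permanently unreadable.
from cryptography.fernet import Fernet

key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"customer records")

# "Shredding" = discarding every copy of the key. Without it, the
# ciphertext cannot be decrypted, wherever it may still be stored.
del key
# Fernet(key).decrypt(ciphertext) is now impossible -- the key is gone.
```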

Guidelines and standards for media sanitization include:

  • NSA/CSS Policy Manual 9-12
  • NIST SP 800-88 “Guidelines for Media Sanitization”.

Data Destruction Methods

  • Erasing — performs a delete operation against a file.
  • Clearing — overwrites the media with random bits to prepare it for reuse. Cleared data cannot be recovered using traditional recovery tools.
  • Purging — an intense form of clearing. It repeats the clearing process multiple times and may combine it with another method, e.g., degaussing, to remove all data remnants.
  • Degaussing — creates a strong magnetic field that erases data from magnetic media such as tapes and hard disks. Not suitable for optical media (CDs, DVDs) or SSDs.
  • Destruction — the final stage in the lifecycle and the most secure method of sanitizing media. This includes incineration, crushing, shredding, disintegration, and dissolving using caustic or acidic chemicals.

Data Attacks

  • XSRF (cross-site request forgery) is an attack that rides a victim’s existing authenticated session on a legitimate site; think of it as abusing session trust, similar in effect to session hijacking. Mitigations include anti-CSRF tokens and re-validation through CAPTCHA or SMS before sensitive actions.
  • Side-Channel attacks target the system itself (hardware), as opposed to software. Information of worth includes timing information, power consumption, electromagnetic leaks, and even sound. Here are a few examples:
    • Cache attack – attacks based on attacker’s ability to monitor cache accesses made by the victim in a shared physical system as in virtualized environment or a type of cloud service.
    • Timing attack – attacks based on measuring how much time various tasks take to perform (see the sketch after this list).
    • Power-monitoring attack – attacks that make use of varying power consumption by the hardware during computation.
    • Electromagnetic attack – attacks based on leaked electromagnetic radiation, which can directly provide plaintext and other information. Such measurements can be used to infer cryptographic keys using techniques equivalent to those in power analysis or can be used in non-cryptographic attacks. For example, TEMPEST (Van Eck phreaking or radiation monitoring) attacks.
    • Acoustic cryptanalysis – attacks that exploit sound produced during a computation, rather like power analysis.
    • Differential fault analysis – in which secrets are discovered by introducing faults in a computation.
    • Data remanence – in which sensitive data are read after supposedly having been deleted. (Cold boot attack)
    • Software-initiated fault attacks – Currently a rare class of side-channels, Row hammer is an example in which off-limits memory can be changed by accessing adjacent memory too often (causing state retention loss).
    • Optical – in which secrets and sensitive data can be read by visual recording using a high resolution camera, or other devices that have such capabilities.
  • Meet In The Middle Attack is a generic space–time tradeoff cryptographic attack against encryption schemes that rely on performing multiple encryption operations in sequence. The MITM attack is the primary reason why Double DES is not used and why 168-bit Triple DES offers only about 112 bits of effective security against an attacker.
  • A skimmer is a device installed on an ATM or other terminal where a user slides a card. The skimmer reads the card’s magnetic stripe or captures the card number.
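
To make the timing attack from the side-channel list concrete, here is a minimal sketch contrasting a naive comparison, whose runtime leaks how many leading bytes of a secret were guessed correctly, with Python’s standard constant-time comparison.

```python
# A naive comparison exits on the first mismatch, so its runtime leaks
# information about the secret; hmac.compare_digest avoids that.
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:  # early exit: runtime depends on where the mismatch is
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    return hmac.compare_digest(a, b)  # runtime independent of mismatch position
```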

How to develop a retention policy?

There are three fundamental questions that every retention policy must answer: 

1. How to retain data: the data should be kept in a manner that makes it accessible whenever required. To ensure this accessibility, the organization should consider some issues:

  • The taxonomy is the scheme for data classification. This classification involves various categories, including functional (human resources and product development), organizational (executive and union employee), or any combination of these.
  • Normalization develops tagging schemes that ensure the data is searchable. Non-normalized data is kept in various formats, such as audio, video, PDF files, and more.
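
A minimal sketch of normalization follows: records in different formats share one tag scheme so they remain searchable; the `func:`/`org:` tag prefixes mirror the taxonomy categories above and are otherwise invented.

```python
# A minimal sketch of a normalized tagging scheme across mixed formats.
records = [
    {"file": "offer_letter.pdf", "format": "pdf",   "tags": set()},
    {"file": "all_hands.mp4",    "format": "video", "tags": set()},
]

def tag(record: dict, functional: str, organizational: str) -> None:
    record["tags"].update({f"func:{functional}", f"org:{organizational}"})

for r in records:
    tag(r, "human-resources", "union-employee")

# Searches work across formats because every record shares the same scheme.
hr_records = [r for r in records if "func:human-resources" in r["tags"]]
```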

2. How long to retain data: the classic approaches to data retention longevity were the “keep everything” camp and the “keep nothing” camp. In modern times, these approaches are dysfunctional in many circumstances, particularly when an organization faces a lawsuit.

Unfortunately, there is no universal pact on data retention policies. Nevertheless, general rules of thumb for data retention longevity are described in Table 1, taken from the Comparitech CISSP Cheat Sheet series, which gives typical data retention durations.

3. What data to retain: the data related to business management, third-party dealings or partnership is valuable for any organization.

