Category Archives: Information Security

Cyber Security Project Investment Proposal – DIA Needipedia – Fight Cybercrime and Cyber Jihad With Sensors – Grab Your Copy Today!

Dear blog readers, I decided to share with everyone a currently pending project investment proposal regarding the upcoming launch of a proprietary Technical Collection analysis platform. The project proposal draft is available on request as part of DIA's Needipedia project proposal investment process, or eventually through the Smith Richardson Foundation. In case you're interested in working with me …

Australia and Chinese Taipei Join the APEC CBPR System

On November 23, 2018, both Australia and Chinese Taipei joined the APEC Cross-Border Privacy Rules (“CBPR”) system. The system is a regional multilateral cross-border transfer mechanism and an enforceable privacy code of conduct and certification developed for businesses by the 21 APEC member economies.

The Australian Attorney-General’s Department recently announced that APEC endorsed Australia’s application to participate and that the Department plans to work with both the Office of the Australian Information Commissioner and organizations to implement the CBPR system requirements in a way that ensures long-term benefits for Australian businesses and consumers.

In Chinese Taipei, the National Development Council announced that Chinese Taipei has joined the system. According to the announcement, Chinese Taipei’s participation will spur local enterprises to seek overseas business opportunities and help shape conditions conducive to cross-border digital trade.

Australia and Chinese Taipei become the seventh and eighth countries to participate in the system, joining the U.S., Mexico, Canada, Japan, South Korea and Singapore. Both nations’ decisions to join the system further highlight the growing international status of the CBPR system, which implements the nine high-level APEC Privacy Principles set forth in the APEC Privacy Framework. Several other APEC economies are actively considering joining.

Argentina DPA Issues Guidelines on Binding Corporate Rules

The Agency of Access to Public Information (Agencia de Acceso a la Información Pública) (“AAIP”) has approved a set of guidelines for binding corporate rules (“BCRs”), a mechanism that multinational companies may use for cross-border transfers of data to affiliates in countries that, under the AAIP’s rules, lack adequate data protection regimes.

As reported by IAPP, pursuant to Regulation No. 159/2018, published December 7, 2018, the guidelines require BCRs to bind all members of a corporate group, including employees, subcontractors and third-party beneficiaries. Members of the corporate group must be jointly liable to the data subject and the supervisory authority for any violation of the BCRs.

Other requirements include:

  • restrictions on the processing of special categories of personal data and on the creation of files containing personal data relating to criminal convictions and offenses;
  • protections such as providing for the right to object to the processing of personal data for the purpose of unsolicited direct marketing;
  • complaint procedures for data subjects that include the ability to institute a judicial or administrative complaint using their local venue; and
  • data protection training to personnel in charge of data processing activities.

BCRs also should contemplate the application of general data protection principles, especially the legal basis for processing, data quality, purpose limitation, transparency, security and confidentiality, data subjects’ rights, and restrictions on subsequent cross-border data transfers to non-adequate jurisdictions. Companies whose BCRs do not reflect the guidelines’ provisions must submit the relevant material to the AAIP for approval within 30 calendar days from the date of transfer. Approval is not required if BCRs that track the guidelines are used.

Lisa Sotto, Head of Hunton’s Privacy and Cybersecurity Practice, Kicks Off FTC Data Security Panel

In connection with its hearings on data security, the Federal Trade Commission hosted a December 12 panel discussion on “The U.S. Approach to Consumer Data Security.” Moderated by the FTC’s Deputy Director for Economic Analysis James Cooper, the panel featured private practitioners Lisa Sotto, from Hunton Andrews Kurth, and Janis Kestenbaum, academics Daniel Solove (GW Law School) and David Thaw (University of Pittsburgh School of Law), and privacy advocate Chris Calabrese (Center for Democracy and Technology). Lisa set the stage with an overview of the U.S. data security framework, highlighting the complex web of federal and state rules and influential industry standards that result in a patchwork of overlapping mandates. Panelists debated the effect of current law and enforcement on companies’ data security programs before turning to the “optimal” framework for a U.S. data security regime. Details discussed included establishing a risk-based approach with a baseline set of standards and clear process requirements. While there was not uniform agreement on the specifics, the panelists all felt strongly that federal legislation was warranted, with the FTC taking on the role of principal enforcer.

View an on-demand recording of the hearing. For more information on the data security hearings, visit the FTC’s website.

Professionally Evil Insights: Professionally Evil CISSP Certification: Breaking the Bootcamp Model

ISC2 describes the CISSP as a way to prove “you have what it takes to effectively design, implement and manage a best-in-class cybersecurity program”. It is one of the primary certifications used as a stepping stone in a cybersecurity career. Traditionally, students have two options for gaining this certification: self-study or a bootcamp. Both options have pros and cons, but neither is ideal.

Bootcamps are a popular way to cram for the certification test. Students spend five days in total immersion in the topics of the CBK. For many students this is an easy way to pass the exam because it focuses them on the CISSP study materials for the duration of the bootcamp. But there are a few negatives to this model. First is the significant cost: typical prices we see run between $3,500 and $5,000, with outliers approaching $7,000. The second issue is that it takes students away from their lives for the week. Finally, most people finish a bootcamp with the knowledge to pass the exam, but because the material is crammed in, they quickly forget most of it.

Self-study is the other common way to prepare for the CISSP exam. It allows a dedicated student to learn the material at their own pace and on their own schedule, and to decide how much to spend: costs vary from books to online videos and practice exams. The main problem with this method is that students often get distracted by life and work while trying to see it through.

But there is an answer that combines the benefits of both options. Secure Ideas has developed a mentorship program designed to provide the knowledge necessary to pass the certification while working through the common body of knowledge (CBK), all in a manner that encourages retention of that knowledge. And it is #affordabletraining!

The mentorship program is designed as a series of weekly mentor-led discussion and review sessions, along with various student support and communication channels, spanning a total of 9 weeks. Together these give the student a solid foundation, not only for passing the certification but also as a body of reference material for everyday work. The class covers the 8 domains of the ISC2 CBK:

  • Security and Risk Management
  • Asset Security
  • Security Architecture and Engineering
  • Communication and Network Security
  • Identity and Access Management (IAM)
  • Security Assessment and Testing
  • Security Operations
  • Software Development Security

The Professionally Evil CISSP Mentorship program uses multiple communication and knowledge sharing paths to build a comprehensive learning environment focused on both passing the CISSP certification and gaining a deep understanding of the CBK.

The program consists of the following parts:

  • Official study guide book
  • Weekly live session with instructor(s)
    • Live session will also be recorded
  • Private Slack team for students and instructors to communicate regularly
  • Practice exams
  • While we believe students will pass on their first try, we also include the option for students to take the program as many times as they want, any time we offer it.  🙂

You can sign up for the course at https://attendee.gototraining.com/r/2538511060126445313 for only $1000. Our early bird price is $800 and is good until January 31; just use the coupon code EARLYBIRD at checkout. Veterans, active duty military and first responders also get a significant discount. Email info@secureideas.com for more information.



Professionally Evil Insights

Infosecurity.US: The Tracking of America: Why Are You Letting It Happen?

Why are both Apple Inc. (NASDAQ: AAPL) and Google Inc. (NASDAQ: GOOG) still permitting clearly ill-conceived user tracking via applications marketed and sold on each company's customer-facing app stores? Surely your privacy and freedom mean more to you than the false and temporary convenience of finger-, voice- and script-actuated conveyances of information best retrieved in another manner.



Infosecurity.US

Why the CISO’s Voice Must be Heard Beyond the IT Department

In a recent company board strategy meeting, the CFO presented the financial forecast and outcome and made some interesting comments about fiscal risks and opportunities on the horizon. The COO …

The post Why the CISO’s Voice Must be Heard Beyond the IT Department appeared first on The Cyber Security Place.

#2018InReview Security Culture

Companies understand that organizational culture is an important differentiator that sets their company apart from the competition. However, joining the dots between culture and information security management has taken some …

The post #2018InReview Security Culture appeared first on The Cyber Security Place.

Infosecurity.US: Mastercard + Microsoft Questionable Claims In The Development Of Universal Identity Management Solutions

Image Credits: (CC BY-SA 2.0) by americanbulldogbully007

Soup To Nuts Identity Solutions From Two Of The Reasons Why Security Flaws Persist In Financial and Computational Systems?

Bad news: Microsoft Corporation (NASDAQ: MSFT) and Mastercard Inc. (NYSE: MA) have entered the Identity and Access Management space as dual developers of a so-called 'universal identity scheme'.

I, for one, will utilize my barely visible thumb whorls as proof of identity, rather than use these clowns-of-code-combinatorial-output. Code Complete at Microsoft or Mastercard? Puhleaze... The former can barely patch its own desktop and server code successfully month-to-month, and the latter suffers from declining security capabilities since the failed-for-purpose deployment and implementation of the so-called security chips in newly issued credit cards. Both companies suffer from a proverbial lack of focused leadership on their core businesses.



Infosecurity.US

FTC Seeks Public Comment on Identity Theft Rules

On December 4, 2018, the Federal Trade Commission published a notice in the Federal Register indicating that it is seeking public comment on whether any amendments should be made to the FTC’s Identity Theft Red Flags Rule (“Red Flags Rule”) and the duties of card issuers regarding changes of address (“Card Issuers Rule”) (collectively, the “Identity Theft Rules”). The request for comment forms part of the FTC’s systematic review of all current FTC regulations and guides. These periodic reviews seek input from stakeholders on the benefits and costs of specific FTC rules and guides along with information about their regulatory and economic impacts.

The Red Flags Rule requires certain financial entities to develop and implement a written identity theft detection program that can identify and respond to the “red flags” that signal identity theft. The Card Issuers Rule requires that issuers of debit or credit cards (e.g., state credit unions, general retail merchandise stores, colleges and universities, and telecom companies) implement policies and procedures to assess the validity of address change requests if, within a short timeframe after receiving the request, the issuer receives a subsequent request for an additional or replacement card for the same account.
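Purely as an illustration (not part of the FTC's rule text), here is a minimal Python sketch of the timing check the Card Issuers Rule describes: flag a card request that follows an address change on the same account within a short window. The 30-day window, function names and data shapes are our assumptions.

```python
from datetime import datetime, timedelta

# Assumed review window; the rule leaves "short timeframe" to issuer policy.
REVIEW_WINDOW = timedelta(days=30)

def needs_address_validation(address_changed_at: datetime,
                             card_requested_at: datetime) -> bool:
    """True if an additional/replacement card request follows an address
    change closely enough that the issuer should verify the change."""
    gap = card_requested_at - address_changed_at
    return timedelta(0) <= gap <= REVIEW_WINDOW

# Example: address changed Jan 2, replacement card requested Jan 10 -> flag.
if needs_address_validation(datetime(2019, 1, 2), datetime(2019, 1, 10)):
    print("Red flag: assess the validity of the address change first")
```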

The FTC is seeking comments on multiple issues, including:

  • Is there a continuing need for the specific provisions of the Identity Theft Rules?
  • What benefits have the Identity Theft Rules provided to consumers?
  • What modifications, if any, should be made to the Identity Theft Rules to reduce any costs imposed on consumers?
  • What modifications, if any, should be made to the Identity Theft Rules to increase their benefits to businesses, including small businesses?
  • What evidence is available concerning the degree of industry compliance with the Identity Theft Rules?
  • What modifications, if any, should be made to the Identity Theft Rules to account for changes in relevant technology or economic conditions?

The comment period is open until February 11, 2019, and instructions on how to make a submission to the FTC are included in the notice.

The Only Counter Strategy Against Data Loss: Reliable Backup Methodology

By Julia Sowells, Senior Information Security Specialist at Hacker Combat. At the turn of the century, 18 years ago, people embraced Web 2.0, a new dynamic web replacing the static …

The post The Only Counter Strategy Against Data Loss: Reliable Backup Methodology appeared first on The Cyber Security Place.

The Importance of “S” in “CISO”

A Chief Information Security Officer is the brigadier general of the security force of an organization. While the C-suite normally looks at the financial and overall management of an organization, …

The post The Importance of “S” in “CISO” appeared first on The Cyber Security Place.

Serbia Enacts New Data Protection Law

On November 9, 2018, Serbia’s National Assembly enacted a new data protection law. The Personal Data Protection Law, which becomes effective on August 21, 2019, is modeled after the EU General Data Protection Regulation (“GDPR”).

As reported by Karanovic & Partners, key features of the new Serbian law include:

  • Scope – the Personal Data Protection Law applies not only to data controllers and processors in Serbia but also those outside of Serbia who process the personal data of Serbian citizens.
  • Database registration – the Personal Data Protection Law eliminates the previous requirement for data controllers to register personal databases with the Serbian data protection authority (“DPA”), though they will be required to appoint a data protection officer (“DPO”) to communicate with the DPA on data protection issues.
  • Data subject rights – the new law expands the rights of data subjects to access their personal data, gives subjects the right of data portability, and imposes additional burdens on data controllers when a data subject requests the deletion of their personal data.
  • Consent – the Personal Data Protection Law introduces new forms of valid consent for data processing (including oral and electronic) and clarifies that the consent must be unambiguous and informed. The prior Serbian data protection law only recognized handwritten consents as valid.
  • Data security – the new law requires data controllers to implement and maintain safeguards designed to ensure the security of personal data.
  • Privacy by Design – the new law obligates data controllers to implement privacy by design when developing new products and services and to conduct data protection impact assessments for certain types of data processing.
  • Data transfers – the Personal Data Protection Law expands the ways in which personal data may be legally transferred from Serbia. Previously, data controllers were required to obtain the approval of the Serbian DPA for any transfers of personal data to non-EU countries. The new law permits personal data transfers based on standard contractual clauses and binding corporate rules approved by the Serbian DPA. Organizations can also transfer personal data to countries deemed to provide an adequate level of data protection by the EU or the Serbian DPA or when the data subject consents to the transfer.
  • Data breaches – like the GDPR, the new law requires data controllers to notify the Serbian DPA within 72 hours of a data breach and will require them to notify individuals if the data breach is likely to result in a high risk to the rights and freedoms of individuals. Data processors must also notify the relevant data controllers in the event of a data breach.

The new law also imposes penalties for noncompliance, but these are significantly lower than those contained in the GDPR. The maximum fines in the new Serbian law are only 17,000 Euros, while the maximum fines in the GDPR can reach up to 20 million Euros or 4% of an organization’s annual global turnover.

In 2012, Lisa Sotto, partner and chair of the Privacy and Cybersecurity practice at Hunton Andrews Kurth, advised the Serbian government on steps to enhance Serbia’s data protection framework.

Radware Blog: Evolving Cyberthreats: It’s Time to Enhance Your IT Security Mechanisms

For years, cybersecurity professionals across the globe have been highly alarmed by threats appearing in the form of malware, including Trojans, viruses, worms, and spear phishing attacks. And this year was no different. 2018 witnessed its fair share of attacks, including some new trends: credential theft emerged as a major concern, and although ransomware remains […]

The post Evolving Cyberthreats: It’s Time to Enhance Your IT Security Mechanisms appeared first on Radware Blog.



Radware Blog

Professionally Evil Insights: Spring Break without Breaking the Bank: Hands On Training

Over the last eight years, one of the main focuses of Secure Ideas has been education.  One responsibility we take very seriously is that of growing the skills within our clients and the public, with the objective of raising the bar in security.  This mindset and core passion of Secure Ideas is because we all believe that we stand on the shoulders of giants. As each of us has grown into the roles we currently hold, we were not only shaped and developed by our own experiences, but also by the knowledge shared by others.  This desire to learn and grow is one of the main things that make me proud to be a part of the security community.

However, there are a couple of significant problems with our industry:  First, information security needs are growing faster than skilled personnel are learning.  Second, the cost of training has increased outrageously over the past decade.

The first issue has been discussed for almost as long as I have been involved in information security.  Even Alan Paller of the SANS Institute has been speaking about the skills gap for over a decade!  The second issue is even worse as it makes it harder to fix the first.  Training costs for a single class often exceed $5000 without even factoring in travel and the time away from work. So how do we fix this?

At Secure Ideas, we have decided that it is our responsibility as active practitioners to help fix this lack of affordable training and help address the skills gap.  To that end, we are committed to the following for 2019:

  1. First, we want to announce our Professionally Evil Spring Break event. This 3-day event will host two classes: Professionally Evil Network Security and Professionally Evil Application Security. The first focuses on network penetration testing and the second on application security and assessments. Either class is only $750, discounted to an early bird price of $600 until January 18, 2019. Moreover, veterans, active duty military and first responders get either class for 50% off.
  2. Second, our Secure Ideas Training site has recorded classes starting at $25 each and vets get them for free!  And our webcasts will continue to be run as often as we can.
  3. Third, we will continue to support and release our open-source training products such as SamuraiWTF and the Professionally Evil Web Penetration Testing 101 course.

We hope that together we can all help increase the skills of our industry and provide affordable training for all.  Let us know if you have any questions or if you would like us to run a private training for your organization.



Professionally Evil Insights

Introducing Incident Handling & Response Professional (IHRP)

We are introducing the Incident Handling & Response Professional (IHRP) training course on December 11, 2018. Find out more and register for an exciting preview webinar.

No matter the strength of your company’s defense strategy, it is inevitable that security incidents will happen. Poor and/or delayed incident response has caused enormous damage and reputational harm to Yahoo, Uber and, most recently, Facebook, to name a few. For this reason, Incident Response (IR) has become a crucial component of any IT security department, and knowing how to respond to such events is an increasingly important skill.

Aspiring to switch to a career in Incident Response? Here’s how our new Incident Handling & Response Professional (IHRP) training course can help you learn the necessary skills and techniques for a successful career in this field.

Incident Handling & Response Professional (IHRP) 

The Incident Handling & Response Professional course (IHRP) is an online, self-paced training course that provides all the advanced knowledge and skills necessary to:

  • Professionally analyze, handle and respond to security incidents on heterogeneous networks and assets
  • Understand the mechanics of modern cyber attacks and how to detect them
  • Effectively use and fine-tune open source IDS, log management and SIEM solutions
  • Detect and even (proactively) hunt for intrusions by analyzing traffic, flows and endpoints, as well as utilizing analytics and tactical threat intelligence

This training is the cornerstone of our blue teaming course catalog or, as we called it internally, “The PTP of Blue Team”.
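To give a flavor of the log-analysis skills listed above (our own illustrative sketch, not material from the IHRP syllabus), the following flags possible SSH brute-force activity in a syslog-style auth log; the log path, line format and alert threshold are assumptions.

```python
import re
from collections import Counter

# Assumed sshd log line, e.g.:
#   "Failed password for root from 203.0.113.7 port 52413 ssh2"
FAILED = re.compile(r"Failed password for .+ from (\d{1,3}(?:\.\d{1,3}){3})")
THRESHOLD = 10  # failed attempts per source IP before alerting (tunable)

def brute_force_suspects(lines):
    """Count failed logins per source IP; return IPs over the threshold."""
    failures = Counter()
    for line in lines:
        match = FAILED.search(line)
        if match:
            failures[match.group(1)] += 1
    return {ip: n for ip, n in failures.items() if n >= THRESHOLD}

with open("/var/log/auth.log") as log:  # assumed log location
    for ip, count in brute_force_suspects(log).items():
        print(f"ALERT: {count} failed SSH logins from {ip}")
```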

Discover This Course & Get An Exclusive Offer

Take part in an exciting live demonstration and discover the complete syllabus of our latest course, Incident Handling & Response Professional (IHRP), on December 11. During this event, all the attendees will get their hands on an exclusive launch offer. Stay tuned! 😉

Be the first to know all about this modern blue teaming training course: join us on December 11.
> RESERVE YOUR SEAT


Online Shopping Safety Tips For The Holidays

The holidays are just around the corner and the rush to purchase gifts online is well under way. While retailers scramble to create eye-catching promotions, deep in the underground, the …

The post Online Shopping Safety Tips For The Holidays appeared first on The Cyber Security Place.

CIPL Responds to NTIA Request for Comment on Developing the Administration’s Approach to Consumer Privacy

The Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth LLP recently submitted formal comments to the U.S. Department of Commerce’s National Telecommunications and Information Administration (“NTIA”) in response to its request for public comments on developing the administration’s approach to consumer privacy.

In its comments, CIPL commends NTIA for initiating a renewed national debate on updating the U.S. privacy framework, and notes that its approach—starting with the intended outcomes and goals of any privacy regime—is well suited to lay the foundation for a legislative proposal in the future.

Responding to the questions raised in the request for comment, CIPL makes the following observations and recommendations with regard to NTIA’s intended core outcomes and the high level goals of any new U.S. privacy framework:

Privacy Outcomes

  • Transparency: CIPL agrees transparency should be a key outcome of any privacy framework and must be user-centric, contextual and tailored toward the specific audience and purpose. This can be achieved by implementing companywide privacy management and accountability frameworks.
  • Control: CIPL believes that control should be a component of a new privacy framework in contexts where it is appropriate, and should reference mechanisms that empower consumers beyond individual choice or consent. However, the framework’s general focus should be putting the onus on organizations to use data responsibly and accountably to protect consumers from harm regardless of their individual level of engagement.
  • Reasonable Minimization: CIPL supports the inclusion of reasonable minimization as an outcome of a new data protection framework, and further agrees with NTIA’s qualification that minimization should be reasonable and appropriate to the context and risk of privacy harm. These qualifiers are very important given the enormous potential of personal data for driving economic growth and societal benefits in the digital economy.
  • Security: CIPL fully agrees with the inclusion of security in the list of outcomes, and notes the importance of allowing organizations flexibility in determining security measures that are reasonable and appropriate to the context at hand. In addition, a security outcome should provide for the adoption of appropriate breach response measures (e.g., notification requirements) and should permit organizations to use personal data for the development and implementation of security tools and related legitimate purposes, such as incident prevention, detection and monitoring.
  • Access and Correction: While CIPL agrees that access, correction and deletion is an important outcome, such rights cannot be absolute and should not interfere with relevant obligations of an organization, other societal goals or legal rights of consumers and other third parties. Where exercising such rights would be inappropriate or impose unreasonable burdens on organizations, part of the solution lies in providing assurances to consumers that their personal information is protected by the full range of available accountability measures and will not be used for harmful purposes.
  • Risk Management: CIPL welcomes NTIA’s characterization of risk management as the “core” of its approach to privacy protection. Identifying harms and addressing them specifically has the advantage of enabling organizations to prioritize their compliance measures and focus resources on what is most important, thereby strengthening both consumer privacy and organizations’ ability to engage in legitimate and accountable uses of personal information. It also means that we do not need to establish set categories of so-called sensitive information or certain predetermined high-risk processing activities, as any actual sensitivity or high-risk character will be determined and addressed in each risk assessment process.
  • Accountability: CIPL strongly agrees with including accountability in the essential outcomes of a privacy framework. It is a key building block of modern data protection and is essential for the future of the digital society where laws alone cannot deliver timely, flexible and innovative solutions. CIPL recommends that NTIA clarify and elaborate upon this important concept in line with its globally accepted meaning, including in the APEC Privacy Framework and the GDPR, as well as other relevant international privacy regimes that incorporate this concept.
  • Complaint-handling and Redress: In addition to the above outcomes, CIPL recommends the additional outcome of complaint-handling and redress. Consumers should be able to expect that organizations are able to reliably, quickly and effectively respond to actionable complaints and provide redress where appropriate. As it is consumer-facing, it should be a separately stated outcome that consumers can expect from a privacy framework.

High-Level Goals for Federal Action

  • Harmonization: CIPL supports the effort to harmonize the U.S. privacy framework on the federal level, including through federal legislation that preempts inconsistent state privacy laws. CIPL recommends that NTIA clarify whether the proposed framework intends to cover employees, and suggests that a new framework should be focused on privacy in the consumer and commercial context and that the precise term “consumer” be defined to avoid legal uncertainty.
  • Legal Clarity and Flexibility to Innovate: Clarity and flexibility in a privacy framework can be achieved through an approach based on organizational accountability and risk assessment. With respect to risk, agreement around methodologies for privacy assessments, guidance on types of risk and the sharing of organizational best practices can also significantly contribute to legal clarity without undermining the flexibility to innovate.
  • Comprehensive Application: CIPL supports a comprehensive baseline privacy law that applies to all organizations, preempts inconsistent state laws, amends or replaces inconsistent federal privacy laws where appropriate, and otherwise works with or around well-functioning existing sectoral laws.
  • Risk and Outcome-based Approach: CIPL agrees with the goal of creating a risk and outcome based approach to privacy regulation. Employing such an approach places the burden of protecting consumers directly where it belongs – on businesses that use personal data, rather than on consumers, who in an increasing number of contexts should not and realistically cannot be tasked with understanding in detail and managing for themselves complex data uses or constantly making choices about them.
  • Interoperability: Maximizing interoperability between different legal and privacy regimes should be a top priority goal for the United States. Any new privacy framework for the U.S. should continue to enable the free, responsible and accountable flow of data across borders.
  • Incentivizing Privacy Research: CIPL fully agrees with the goal of having the U.S. government encourage and incentivize research into and development of products and services that improve privacy protections. However, this goal should be broadened and amplified along the lines of the argument for incentivizing organizational accountability generally. This enables a race to the top whereby organizations not only strive to comply with the bare minimum of what is legally required but are incentivized and rewarded for heightened levels of organizational accountability that benefit all stakeholders.
  • FTC Enforcement: CIPL agrees that the Federal Trade Commission should be the principal federal agency to enforce any new comprehensive U.S. privacy legislation and should be appropriately resourced as such. Exactly how a new privacy framework and the FTC as the principal federal agency should interact with other federal functional regulators and sectoral privacy laws should be carefully considered and worked out with input from all relevant stakeholders.
  • Scalability: CIPL agrees that enforcement should be proportionate to the scale and scope of the information an organization is handling and should be outcome-based. With increased responsibilities under a broader privacy law, the FTC will have to ensure that its current approach is adapted to the changes in the scope and nature of its responsibilities.
  • Enabling Effective Use of Personal Information: In addition to the above goals for federal action, CIPL suggests the additional goal of enabling broad and effective uses of personal information for the benefit of economic development and societal progress, as well as for the benefit of individuals, particularly the data subjects. Due to their supervisory position, modern data protection and privacy enforcement authorities have the responsibility, in addition to protecting consumer privacy, to safeguard and facilitate the beneficial potential of such information and, therefore, the full range of responsible and accountable data uses.

Following consideration of the comments it receives, CIPL recommends that NTIA takes a holistic and deliberate approach toward developing a comprehensive privacy law that accomplishes the items discussed in the request for comment. One possible next step could be to actually articulate the outcomes and goals in draft legislative language to provide a clearer basis for further discussion on the precise elements and articulation of each of them. CIPL recommends an iterative process between NTIA and other public and private sector stakeholders towards that goal.

Building a Security Awareness Program

At the second annual Infosecurity North America conference at the Jacob Javits Convention Center in New York, Tom Brennan, US chairman, CREST International, moderated a panel called Securing the Workforce: Building, Maintaining and Measuring …

The post Building a Security Awareness Program appeared first on The Cyber Security Place.

EU Commission Responds to NTIA Request for Comment on Developing the Administration’s Approach to Consumer Privacy

On November 9, 2018, the European Commission (“the Commission”) submitted comments to the U.S. Department of Commerce’s National Telecommunications and Information Administration (“NTIA”) in response to its request for public comments on developing the administration’s approach to consumer privacy.

In its comments, the Commission welcomes and agrees with many of the high-level goals identified by NTIA, including harmonization of the legal landscape, incentivizing privacy research, employing a risk-based approach and creating interoperability at a global level. The Commission also welcomes that the key characteristics of a modern and flexible privacy regime (i.e., an overarching law, a core set of data protection principles, enforceable individual rights and an independent supervisory authority with effective enforcement powers) are also at the core of NTIA’s proposed approach to consumer privacy. The Commission structured its specific suggestions around these key characteristics.

In particular, the Commission makes specific suggestions around:

  • Harmonization: The Commission notes that overcoming regulatory fragmentation associated with an approach based on sectoral law in favor of a more harmonized approach would create a level playing field, and provide necessary certainty for organizations while ensuring consistent protection for individuals.
  • Ensuring Trust: The Commission recommends that ensuring trust should guide the formulation of U.S. privacy policy, and notes that giving individuals more control over their data will increase trust in organizations and in turn result in a greater willingness to share data on the part of consumers.
  • Data Protection Principles: The Commission commends NTIA on the inclusion of certain core data protection principles such as reasonable minimization, security, transparency and accountability, but suggests the further explicit inclusion of other principles such as lawful data processing (i.e., the requirement to process data pursuant to a legal basis, such as consent), purpose specification, accuracy and specific protections for sensitive categories of data.
  • Breach Notification: The Commission suggests the specific inclusion of a breach notification requirement to enable individuals to protect themselves from and mitigate any potential harm that might result from a data breach. While there are already state breach notification laws in place, the Commission believes organizations and individuals could benefit from the harmonization of such rules.
  • Individual Rights: The Commission believes that any proposal for a privacy regime should go beyond the inclusion of only traditional individual rights, such as access and correction, and should include other rights regarding automated decision-making (e.g., the right to explanation or to request human intervention) and rights around redress (e.g., the right to lodge a complaint and have it addressed, and the right to effective judicial redress).
  • Oversight and Enforcement: The Commission notes that the effective implementation of privacy rules critically depends on having robust oversight and enforcement by an independent and well-resourced authority. In this regard, the Commission recommends strengthening the FTC’s enforcement authority, the introduction of mechanisms to ensure effective resolution of individual complaints and the introduction of deterrent sanctions.

The Commission notes in its response that while this consultation only covers a first step in a process that might lead to federal action, it stands ready to provide further comments on a more developed proposal in the future.

NTIA’s request for comments closed on November 9, 2018 and NTIA will post the comments it received online shortly.


Privacy Advocacy Organization Files GDPR Complaints Against Data Brokers

On November 8, 2018, Privacy International (“Privacy”), a non-profit organization “dedicated to defending the right to privacy around the world,” filed complaints under the GDPR against consumer marketing data brokers Acxiom and Oracle. In the complaint, Privacy specifically requests the Information Commissioner (1) conduct a “full investigation into the activities of Acxiom and Oracle,” including into whether the companies comply with the rights (i.e., right to access, right to information, etc.) and safeguards (i.e., data protection impact assessments, data protection by design, etc.) in the GDPR; and (2) “in light of the results of that investigation, [take] any necessary further [action]… that will protect individuals from wide-scale and systematic infringements of the GDPR.”

The complaint alleges that the companies’ processing of personal data neither comports with the consent and legitimate interest requirements of the GDPR, nor the GDPR’s principles of:

  • transparency (specifically relating to sources, recipients and profiling);
  • fairness (considering individuals’ reasonable expectations, the lack of a direct relationship, and the opaque nature of processing);
  • lawfulness (including whether either company’s reliance on consent or legitimate interest is justified);
  • purpose limitation;
  • data minimization; and
  • accuracy.

The complaint emphasizes that Acxiom and Oracle are illustrative of the “systematic” problems in the data broker and AdTech ecosystems, and that it is “imperative that the Information Commissioner not only investigate[] these specific companies, but also take action in respect of other relevant actors in these industries and their practices.”

In addition to the complaint against Acxiom and Oracle, Privacy submitted two separate joined complaints against credit reference data brokers Experian and Equifax, and AdTech data brokers Quantcast, Tapad and Criteo.

CNIL Publishes DPIA Guidelines and List of Processing Operations Subject To DPIA

On November 6, 2018, the French Data Protection Authority (the “CNIL”) published its own guidelines on data protection impact assessments (the “Guidelines”) and a list of processing operations that require a data protection impact assessment (“DPIA”). Read the guidelines and list of processing operations (in French).

CNIL’s Guidelines

The Guidelines aim to complement guidelines on DPIA adopted by the Article 29 Working Party on October 4, 2017, and endorsed by the European Data Protection Board (“EDPB”) on May 25, 2018. The CNIL crafted its own Guidelines to specify the following:

  • Scope of the obligation to carry out a DPIA. The Guidelines describe the three examples of processing operations requiring a DPIA provided by Article 35(3) of the EU General Data Protection Regulation (“GDPR”). The Guidelines also list nine criteria the Article 29 Working Party identified as useful in determining whether a processing operation requires a DPIA, if that processing does not correspond to one of the three examples provided by the GDPR. In the CNIL’s view, as a general rule a processing operation meeting at least two of the nine criteria requires a DPIA (a minimal sketch of this two-of-nine rule appears after this list). If the data controller considers that processing meeting two criteria is not likely to result in a high risk to the rights and freedoms of individuals, and therefore does not require a DPIA, the data controller should explain and document its decision for not carrying out a DPIA and include in that documentation the views of the data protection officer (“DPO”), if appointed. The Guidelines make clear that a DPIA should be carried out if the data controller is uncertain. The Guidelines also state that processing operations lawfully implemented prior to May 25, 2018 (e.g., processing operations registered with the CNIL, exempt from registration or recorded in the register held by the DPO under the previous regime) do not require a DPIA within a period of 3 years from May 25, 2018, unless there has been a substantial change in the processing since its implementation.
  • Conditions in which a DPIA is to be carried out. The Guidelines state that DPIAs should be reviewed regularly—at minimum, every three years—to ensure that the level of risk to individuals’ rights and freedoms remains acceptable. This corresponds to the three-year period mentioned in the draft guidelines on DPIAs adopted by the Article 29 Working Party on April 4, 2017.
  • Situations in which a DPIA must be provided to the CNIL. The Guidelines specify that data controllers may rely on the CNIL’s sectoral guidelines (“Referentials”) to determine whether the CNIL must be consulted. If the data processing complies with a Referential, the data controller may take the position that there is no high residual risk and no need to seek prior consultation for the processing from the CNIL. If the data processing does not fully comply with the Referential, the data controller should assess the level of residual risk and the need to consult the CNIL. The Guidelines note that the CNIL may request DPIAs in case of inspections.
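For illustration only, here is the minimal sketch of the two-of-nine decision rule referenced above; the criteria labels are abbreviated from the Article 29 Working Party guidelines, and the code is our sketch rather than anything published by the CNIL.

```python
# Abbreviated labels for the Article 29 Working Party's nine DPIA criteria.
WP29_CRITERIA = {
    "evaluation or scoring",
    "automated decisions with legal or similar effect",
    "systematic monitoring",
    "sensitive or highly personal data",
    "large-scale processing",
    "matching or combining datasets",
    "data concerning vulnerable subjects",
    "innovative use or new technology",
    "processing that blocks a right, service or contract",
}

def dpia_required(criteria_met: set) -> bool:
    """CNIL rule of thumb: two or more criteria met means a DPIA is
    required, unless the controller documents why the risk is not high."""
    return len(criteria_met & WP29_CRITERIA) >= 2

# Example: a large-scale geolocation app that systematically monitors users.
print(dpia_required({"large-scale processing", "systematic monitoring"}))  # True
```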

CNIL’s List of Processing Operations Requiring a DPIA

The CNIL previously submitted a draft list of processing operations requiring a DPIA to the EDPB for its opinion. The CNIL adopted its final list on October 11, 2018, based on that opinion. The final list includes 14 types of processing operations for which a DPIA is mandatory. The CNIL provided concrete examples for each type of processing operation, including:

  • processing operations for the purpose of systematically monitoring the employees’ activities, such as the implementation of data loss prevention tools, CCTV systems recording employees handling money, CCTV systems recording a warehouse stocking valuable items in which handlers are working, digital tachograph installed in road freight transport vehicles, etc.;
  • processing operations for the purpose of reporting professional concerns, such as the implementation of a whistleblowing hotline;
  • processing operations involving profiling of individuals that may lead to their exclusion from the benefit of a contract or to the contract's suspension or termination, such as processing to combat fraud involving (non-cash) means of payment;
  • profiling that involves data coming from external sources, such as a combination of data operated by data brokers and processing to customize online ads;
  • processing of location data on a large scale, such as a mobile app that collects users’ geolocation data, etc.

The CNIL’s list is non-exhaustive and may be regularly reviewed, depending on the CNIL’s assessment of the “high risks” posed by certain processing operations.

Next steps

The CNIL is expected to soon publish its list of processing operations for which a DPIA is not required.

Yahoo! Agrees to Settle Data Breach Class Actions with $50 Million Fund and Credit Monitoring

On October 23, 2018, the parties in the Yahoo! Inc. (“Yahoo!”) Customer Data Security Breach Litigation pending in the Northern District of California and the parties in the related litigation pending in California state court filed a motion seeking preliminary approval of a settlement related to breaches of the company’s data. These breaches were announced from September 2016 to October 2017 and collectively impacted approximately 3 billion user accounts worldwide. In June 2017, Yahoo! and Verizon Communications Inc. had completed an asset sale transaction, pursuant to which Yahoo! became Altaba Inc. (“Altaba”) and Yahoo!’s previously operating business became Oath Holdings Inc. (“Oath”). Altaba and Oath have each agreed to be responsible for 50 percent of the settlement.

Under the terms of the agreement, Yahoo!, through its successor in interest, Oath Holdings Inc., has agreed to enhance its business practices to improve the security of its users’ personal information stored on its databases. Yahoo! will also pay for a minimum of two years of credit monitoring services to protect settlement class members from future harm, as well as establish a $50 million settlement fund to provide an alternative cash payment for those who verify they already have credit monitoring or identity protection. The settlement fund will also cover demonstrated out-of-pocket losses, including loss of time, and payments to Yahoo! users who paid for advertisement-free or premium Yahoo! Mail services and those who paid for Aabaco Small Business services, which included business email services. The motion for approval is currently before the court, which has scheduled a hearing for November 29, 2018, on the matter.

Draft Bill Imposes Steep Penalties, Expands FTC’s Authority to Regulate Privacy

On November 1, 2018, Senator Ron Wyden (D-Ore.) released a draft bill, the Consumer Data Protection Act, that seeks to “empower consumers to control their personal information.” The draft bill imposes heavy penalties on organizations and their executives, and would require senior executives of companies with more than one billion dollars per year of revenue or data on more than 50 million consumers to file annual data reports with the Federal Trade Commission. The draft bill would subject senior company executives to imprisonment for up to 20 years or fines up to $5 million, or both, for certifying false statements on an annual data report. Additionally, like the EU General Data Protection Regulation, the draft bill proposes a maximum fine of 4% of total annual gross revenue for companies that are found to be in violation of Section 5 of the FTC Act.

The draft bill also proposes to grant the FTC authority to write and enforce privacy regulations, to establish minimum privacy and cybersecurity standards, and to create a national “Do Not Track” system that would allow consumers to prevent third-party companies from tracking internet users by sharing or selling data and targeting advertisements based on their personal information.

Senator Wyden stated, “My bill creates radical transparency for consumers, gives them new tools to control their information and backs it up with tough rules.”

Information security: How Hackers Leverage Stolen Data for Profit

Data theft is inarguably big business for hackers. This has been proven time and time again when big-name companies and their customers are involved in a data breach. As these instances appear to take place more often, and the number of stolen or compromised files continues to rise, it’s worth looking into exactly what hackers do with this information after they’ve put so much effort into stealing it.

While some data breaches involve low-hanging fruit – including default passwords and other sub-standard data protection measures – other attacks include increasingly sophisticated cybercriminal activity, backed by in-depth social engineering and research into potential targets. Thanks to these efforts, more than 2.6 billion records were stolen or compromised in 2017, a staggering 88 percent rise from the amount of data hackers made off with in 2016, according to Information Age.

But what takes place after a successful breach and data exfiltration? With all of this information in hand, where do hackers turn next to generate a profit?

Type of data dictates price, post-theft malicious activity

As Trend Micro research shows, the process that stolen data goes through after the initial breach depends largely upon the type of data and from what industry it was stolen.

Personally identifiable information (PII) can include a whole host of different elements and is stored by many brands to support customer accounts and personalization. Researchers discovered that once hackers bring this information to underground markets, it can be used to support identity fraud, the creation of counterfeit accounts, illicit money transfers, the launch of spam and phishing attacks, and even blackmail, extortion or hacktivism.

Let’s take a look at the ways in which other types of stolen data can be used once hackers gather it and bring it to underground marketplaces:

  • Financial data, including information tied to banking, billing and insurance activities, can be used for identity fraud, including fake tax returns and loan applications, to establish counterfeit payment cards, billing accounts or money transfers, and for blackmail or extortion. With the right details, hackers can even withdraw money directly from victims’ bank accounts.
  • Health care details, spanning hospital records, medical or insurance information and even data from medical wearables and other devices, can be sold or used to support fraudulent insurance claims, or for the fraudulent purchase of prescription drugs.
  • Payment card information, such as the card owner’s name, card number and expiration date can be used for fraudulent online purchases. As Trend Micro experts noted, when data of this kind is stolen and sold within underground hacker marketplaces, it can be even more dangerous to an individual’s identity than stolen financial data. The potential for negative impacts can be much greater with fraudulently used payment card information, particularly when that data is tied to a user’s credit card.
  • Account credentials, including usernames and passwords, can be leveraged by hackers for fraudulent insurance claims, to buy prescriptions, to launch spam or phishing attacks, and for extortion or hacktivism, depending upon the account that is hacked.
  • Education information, encompassing items like student transcripts, other school records and enrollment data, can be used for identity fraud and fake student loan applications, as well as for blackmail or extortion.

One theft leads to another

A main motivation of hackers is to make off with as much stolen information as possible. This thought process applies not only to data breaches of specific companies, but also to the data belonging to individual users.

“More than 2.6 billion records were stolen or compromised in 2017.”

Take stolen account credentials, for example. A hacker will often leverage a stolen username and password to support further malicious activity and data theft in the hopes of compromising even more personal information.

“Theft of user credentials might even be more dangerous than PII, as it essentially exposes the victim’s online accounts to potential malicious use,” Trend Micro researchers pointed out. “Email is often used to verify credentials and store information from other accounts, and a compromised email account can lead to further instances of fraud and identity theft.”

In such instances, a hacker can utilize stolen account credentials to fraudulently access an individual’s email. This may provide the cybercriminal with an email that includes a credit card invoice, giving them even more information for theft, and even the potential to steal, use or sell the victim’s credit card details for further fraud.

What’s more, as Trend Micro researchers noted, certain types of data are often interrelated, and the theft of one set of data often means the compromise of another, connected set. With health care files, for instance, a health care provider may store not only a patient’s medical history, but also their payment information as well. In this way, a breach of the provider could result not only in the exposure of medical details, but patient financial information as well.

What is data worth on underground marketplaces?

As Trend Micro’s interactive infographic shows, there are several different underground marketplaces existing all over the world, and the amount of profit hackers are able to generate depends on where they sell stolen information and the type of details their haul includes.

Experian data from 2018 shows how profits for certain types of data can quickly add up for hackers, including for assets like:

  • Online payment account credentials, worth up to $200
  • Credit or debit card information, worth up to $110
  • Diplomas, worth up to $400
  • Medical records, worth up to $1,000
  • Passports, worth up to $2,000

Hackers also engage in data bundling, where individual pieces of stolen information are linked and packaged together, and then sold in a premium bundle for a higher price. These more complete, fraudulent profiles can include an array of information, including a victim’s name, age, address, birth date, Social Security number, and other similar information.

Working to prevent data theft

As the profits hackers can generate from stolen data continue to rise, it’s imperative that businesses and individual users alike take the proper precautions to safeguard their sensitive information.

This includes replacing default security measures with more robust protections, including strong passwords and multi-factor authentication, where applicable. Organizations should also limit access to especially sensitive information and databases to only those authorized users who need it.
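As a concrete sketch of two of these protections (ours, not a recommendation of any specific product or Trend Micro implementation), the standard-library Python below shows salted password hashing and an RFC 6238-style time-based one-time password for multi-factor authentication; iteration counts and other parameters are illustrative.

```python
import hashlib, hmac, os, struct, time

def hash_password(password, salt=None):
    """Salted PBKDF2 hash; store the salt and digest, never the password."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Constant-time comparison against the stored digest."""
    return hmac.compare_digest(hash_password(password, salt)[1], digest)

def totp(secret, step=30, digits=6):
    """RFC 6238 time-based one-time password (SHA-1 variant)."""
    counter = struct.pack(">Q", int(time.time()) // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Example: register a user, then require password plus one-time code.
salt, stored = hash_password("correct horse battery staple")
secret = os.urandom(20)  # shared with the user's authenticator app
assert verify_password("correct horse battery staple", salt, stored)
print("Current one-time code:", totp(secret))
```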

User education can also be a considerable advantage in preventing information theft. Users who are aware of current threats and know not to click on suspicious links or open emails from unknown senders represent an additional layer of security against unauthorized access and cybercriminal activity.

To find out more about how to improve data prevention efforts within your organization, connect with the experts at Trend Micro today.


Joining Team Astalavista – Stay Tuned!

Dear blog readers, I wanted to let everyone know that I will shortly be joining Team Astalavista - The World's Most Popular Information Security Portal - acting as Managing Director, following a successful career as Managing Director through 2003-2006, during which I maintained a highly informative and educational security newsletter featuring exclusive content and security interviews (Security …

Pay-Per-Exploit Acquisition Vulnerability Programs – Pros and cons?

As ZERODIUM starts paying premium rewards to security researchers to acquire their previously unreported zero-day exploits affecting multiple operating systems, software and/or devices, a logical question emerges about the program's usefulness, its potential benefits, and potential vulnerabilities within the actual acquisition process: how would the program undermine the …

Historical OSINT – Massive Blackhat SEO Campaign Spotted in the Wild Serves Scareware

It's 2010 and I've recently stumbled upon a currently active and circulating malicious and fraudulent blackhat SEO campaign successfully enticing hundreds of thousands of users globally into interacting with a multitude of rogue and malicious software, also known as scareware. In this post I'll profile the campaign, discuss in depth the tactics, techniques and procedures of the cybercriminals behind it, and …

Historical OSINT – A Diversified Portfolio of Fake Security Software Spotted in the Wild

It's 2010 and I've recently stumbled upon yet another malicious and fraudulent domain portfolio serving a variety of fake security software, also known as scareware, potentially exposing hundreds of thousands of users, with the cybercriminals behind the campaign potentially earning fraudulent revenue, largely relying on the utilization of an affiliate-network …

Historical OSINT – A Diversified Portfolio of Fake Security Software

It's 2010 and I've recently stumbled upon a currently active and circulating malicious and fraudulent portfolio of fake security software, also known as scareware, potentially enticing hundreds of thousands of users into interacting with a multitude of malicious software, with the cybercriminals behind the campaign potentially earning fraudulent revenue in the process of monetizing access to malware-infected hosts

Historical OSINT – Massive Blackhat SEO Campaign Spotted in the Wild Drops Scareware

It's 2008 and I've recently stumbled upon a currently active malicious and fraudulent blackhat SEO campaign successfully enticing users into falling victim to fake security software, also known as scareware, including a variety of dropped fake codecs, largely relying on the acquisition of legitimate traffic through active blackhat SEO campaigns, in this particular case various North Korea news

Historical OSINT – Spamvertized Swine Flu Domains – Part Two

It's 2010 and I've recently come across a currently active and diverse portfolio of Swine Flu related domains further enticing users into interacting with rogue and malicious content. In this post I'll profile and expose a currently active malicious domain portfolio circulating in the wild, successfully involved in an ongoing variety of Swine Flu malicious spam campaigns, and will

Historical OSINT – Yet Another Massive Blackhat SEO Campaign Spotted in the Wild Drops Scareware

It's 2010 and I've recently come across a currently active malicious and fraudulent blackhat SEO campaign successfully enticing users into interacting with rogue and fraudulent scareware-serving campaigns. In this post I'll provide actionable intelligence on the infrastructure behind the campaign. Related malicious domains known to have participated in the campaign:

Historical OSINT – Yet Another Massive Blackhat SEO Campaign Spotted in the Wild

It's 2010 and I've recently stumbled upon yet another diverse portfolio of blackhat SEO domains, this time serving rogue security software, also known as scareware, to unsuspecting users, with the cybercriminals behind the campaign successfully earning fraudulent revenue in the process of monetizing access to malware-infected hosts, largely relying on the utilization of an affiliate-network based type

Historical OSINT – Profiling a Portfolio of Active 419-Themed Scams

It's 2010 and I've recently decided to provide actionable intelligence on a variety of 419-themed scams, in particular the actual malicious actors behind the campaigns, with the idea of empowering law enforcement and the community with the necessary data to track down and prosecute the malicious actors behind these campaigns. Related malicious and fraudulent emails known to have participated in the

Historical OSINT – Rogue Scareware Dropping Campaign Spotted in the Wild Courtesy of the Koobface Gang

It's 2010 and I've recently come across a diverse portfolio of fake security software, also known as scareware, courtesy of the Koobface gang, in what appears to be a direct connection between the gang's activities and the Russian Business Network. In this post I'll provide actionable intelligence on the infrastructure behind it and discuss in-depth the tactics, techniques and procedures of the

Historical OSINT – Massive Blackhat SEO Campaign Spotted in the Wild – Part Two

It's 2008 and I've recently come across a massive blackhat SEO campaign successfully enticing users into falling victim to a fraudulent and malicious scareware-serving campaign. In this post I'll provide actionable intelligence on the infrastructure behind it. Related malicious domains and redirectors known to have participated in the campaign: hxxp://msh-co.com hxxp://incubatedesign.com

Historical OSINT – Massive Blackhat SEO Campaign Spotted in the Wild

It's 2008 and I recently came across a pretty decent portfolio of rogue and fraudulent malicious scareware-serving domains successfully acquiring traffic through a variety of blackhat SEO techniques, in this particular case the airplane crash of the Polish president. Related malicious domains known to have participated in the campaign: hxxp://sarahscandies.com hxxp://armadasur.com hxxp://

Historical OSINT – Malware Domains Impersonating Google

It's 2008 and I've recently stumbled upon a currently active typosquatted portfolio of malware-serving domains successfully impersonating Google, further spreading malicious software to hundreds of thousands of unsuspecting users. In this post I'll provide actionable intelligence on the infrastructure behind the campaign. Related malicious domains known to have participated in the campaign:

Historical OSINT – Massive Scareware Dropping Campaign Spotted in the Wild

It's 2008 and I've recently spotted a currently circulating malicious and fraudulent scareware-serving domain portfolio, which I'll expose in this post with the idea of sharing actionable threat intelligence with the security community, further exposing and undermining the cybercrime ecosystem the way we know it, potentially empowering security researchers and third-party vendors with the

Historical OSINT – Latvian ISPs, Scareware, and the Koobface Gang Connection

It's 2010 and we've recently stumbled upon yet another malicious and fraudulent campaign, courtesy of the Koobface gang, actively serving fake security software, also known as scareware, to a variety of users, with the majority of malicious software conveniently parked within 79.135.152.101 - AS2588, LatnetServiss-AS LATNET ISP - successfully hosting a diverse portfolio of fake security software. In

Historical OSINT – Massive Blackhat SEO Campaign Courtesy of the Koobface Gang Spotted in the Wild

It's 2010 and I've recently stumbled upon yet another massive blackhat SEO campaign, courtesy of the Koobface gang, successfully exposing hundreds of thousands of users to a multitude of malicious software. In this post I'll provide actionable intelligence on the infrastructure behind it and discuss in-depth the tactics, techniques and procedures of the cybercriminals behind it. Sample

Historical OSINT – PhishTube Twitter Broadcast Impersonated Scareware Serving Twitter Accounts Circulating

It's 2010 and I've recently intercepted a currently circulating malicious and fraudulent malware-serving spam campaign successfully enticing hundreds of thousands of users globally into interacting with the rogue and malicious software found on the compromised hosts, in combination with a currently active Twitter malware-serving campaign successfully enticing users into interacting with the rogue

Historical OSINT – Chinese Government Sites Serving Malware

It's 2008 and I've stumbled upon yet another decent portfolio of compromised malware-serving Chinese government Web sites. In this post I'll discuss the campaign in-depth and provide actionable intelligence on the infrastructure behind it. Compromised Chinese government Web site: hxxp://nynews.gov.cn Sample malicious domains known to have participated in the campaign: hxxp://game1983.com/

Historical OSINT – Calling Zeus Home

Remember ZeuS? The infamous crimeware-in-the-middle exploitation kit? In this post I'll provide historical OSINT on various ZeuS-themed malicious and fraudulent campaigns intercepted throughout 2008, along with actionable intelligence on the infrastructure behind them. Related malicious domains known to have participated in the campaigns: hxxp://myxaxa.com/z/cfg.bin hxxp://dokymentu.info/

Historical OSINT – A Diverse Portfolio of Fake Security Software

In this post I'll profile a malicious and fraudulent scareware-serving campaign circulating circa 2008, successfully enticing users into interacting with rogue and fraudulent fake security software, with the cybercriminals behind the campaign successfully earning fraudulent revenue in the process of monetizing access to malware-infected hosts, largely relying on the utilization of an

Improve Security by Thinking Beyond the Security Realm

It used to be that dairy farmers relied on whatever was growing in the area to feed their cattle. They filled the trough with vegetation grown right on the farm. They probably relied heavily on whatever grasses grew naturally and perhaps added some high-value grains like barley and corn. Today, with better technology and knowledge, dairy farmers work with nutritionists to develop a personalized concentrate of carbohydrates, proteins, fats, minerals, and vitamins that gets added to the natural feed. The result is much healthier cattle and more predictable growth.

We’re going through a similar enlightenment in the security space. To get the best results, we need to fill the trough that our Machine Learning will eat from with high-value data feeds from our existing security products (whatever happens to be growing in the area) but also (and more precisely for this discussion) from beyond what we typically consider security products to be.

In this post to the Oracle Security blog, I make the case that "we shouldn’t limit our security data to what has traditionally been in-scope for security discussions" and how understanding Application Topology (and feeding that knowledge into the security trough) can help reduce risk and improve security.

Click to read the full article: Improve Security by Thinking Beyond the Security Realm

The Language and Nature of Fileless Attacks Over Time

The language of cybersecurity evolves in step with changes in attack and defense tactics. You can get a sense for such dynamics by examining the term fileless. It fascinates me not only because of its relevance to malware—which is one of my passions—but also because of its knack for agitating many security practitioners.

I traced the origins of “fileless” to 2001, when Eugene Kaspersky (of Kaspersky Lab) used it in reference to the Code Red worm’s ability to exist solely in memory. Two years later, Peter Szor defined the term in a patent for Symantec, explaining that such malware doesn’t reside in a file, but instead “appends itself to an active process in memory.”

Eugene was prophetic in predicting that fileless malware “will become one of the most widespread forms of malicious programs” due to antivirus’ ineffectiveness against such threats. Today, when I look at the ways in which malware bypasses detection, the evasion techniques often fall under the fileless umbrella, though the term has expanded beyond its original meaning.

Fileless was synonymous with in-memory until around 2014.

The adversary’s challenge with purely in-memory malware is that it disappears once the system restarts. In 2014, Kevin Gossett’s Symantec article explained how Poweliks malware overcame this limitation by using the legitimate Windows programs rundll32.exe and powershell.exe to maintain persistence, extracting and executing malicious scripts from the registry. Kevin described this threat as “fileless,” because it avoided placing code directly on the file system. Paul Rascagnères at G Data further explained that Poweliks infected systems by using a booby-trapped Microsoft Word document.

The Poweliks discussion, and similar malware that appeared afterwards, set the tone for the way fileless attacks are described today. Yes, fileless attacks strive to keep clearly malicious code solely or mostly in memory. They also tend to involve malicious documents and scripts. They often misuse utilities built into the operating system and abuse various capabilities of Windows, such as the registry, to maintain persistence.
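To make the registry-persistence idea tangible from the defender's side, here is a minimal, Windows-only sketch (my illustration, not something from the articles mentioned above) that enumerates the common Run keys and flags autorun values invoking script hosts; the keyword list is illustrative rather than exhaustive:

```python
import re
import winreg  # Windows-only standard library module

# Illustrative indicators of script-based persistence, not an exhaustive list
SUSPICIOUS = re.compile(r"rundll32|powershell|mshta|wscript|cscript", re.IGNORECASE)

RUN_KEYS = [
    (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

def scan_run_keys():
    """Yield (key path, value name, command) for autorun entries invoking script hosts."""
    for hive, path in RUN_KEYS:
        try:
            key = winreg.OpenKey(hive, path)
        except OSError:
            continue
        index = 0
        while True:
            try:
                name, value, _ = winreg.EnumValue(key, index)
            except OSError:  # no more values under this key
                break
            if SUSPICIOUS.search(str(value)):
                yield path, name, value
            index += 1
        winreg.CloseKey(key)

if __name__ == "__main__":
    for path, name, value in scan_run_keys():
        print(f"[!] {path}\\{name}: {value}")
```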

However, the growing ambiguity behind the modern use of the term fileless is making it increasingly difficult to understand what specific methods fileless malware uses for evasion. It’s time to disambiguate this word to hold fruitful conversations about our ability to defend against its underlying tactics.

Here’s my perspective on the methods that comprise modern fileless attacks:

  • Malicious Documents: They can act as flexible containers for other files. Documents can also carry exploits that execute malicious code, and they can launch malicious logic that begins the infection and initiates the next link in the infection chain. (See the macro-extraction sketch after this list.)
  • Malicious Scripts: They can interact with the OS without the restrictions that some applications, such as web browsers, might impose. Scripts are harder for anti-malware tools to detect and control than compiled executables. In addition, they offer an opportunity to split malicious logic across several processes.
  • Living Off the Land: Microsoft Windows includes numerous utilities that attackers can use to execute malicious code with the help of a trusted process. These tools allow adversaries to “trampoline” from one stage of the attack to another without relying on compiled malicious executables.
  • Malicious Code in Memory: Memory injection abuses features of Microsoft Windows to interact with the OS without exploiting vulnerabilities. Attackers can wrap their malware into scripts, documents or other executables, extracting the payload into memory at runtime.
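As referenced in the Malicious Documents item above, here is a sketch of checking a document for embedded VBA macros using the open-source oletools library; the file name is hypothetical, and real triage would go well beyond printing the macro code:

```python
# pip install oletools
from oletools.olevba import VBA_Parser

def list_macros(path: str) -> None:
    """Print any VBA macros embedded in an Office document."""
    parser = VBA_Parser(path)
    try:
        if not parser.detect_vba_macros():
            print("No VBA macros found")
            return
        for _, stream_path, vba_filename, vba_code in parser.extract_macros():
            print(f"--- {vba_filename} (stream: {stream_path}) ---")
            print(vba_code)
    finally:
        parser.close()

list_macros("suspicious.doc")  # hypothetical sample file
```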

While some attacks and malware families are fileless in all aspects of their operation, most modern malware that evades detection includes at least some fileless capabilities. Such techniques allow adversaries to operate in the periphery of anti-malware software. The success of such attack methods is the reason for the continued use of the term fileless in discussions among cybersecurity professionals.

Language evolves as people adjust the way they use words and the meaning they assign to them. This certainly happened to fileless, as the industry looked for ways to discuss evasive threats that avoided the file system and misused OS features. For a deeper dive into this topic, read the following three articles upon which I based this overview:

CVE Funding and Process

I had not seen this interesting letter (August 27, 2018) from the House Energy and Commerce Committee to DHS about the nature of funding and support for the CVE.

This is the sort of thoughtful work that we hope and expect government departments to do, and kudos to everyone involved in thinking about how CVE should be nurtured and maintained.

Convergence is the Key to Future-Proofing Security

I published a new article today on the Oracle Security blog that looks at the benefits of convergence in the security space as the IT landscape grows more disparate and distributed.

Security professionals have too many overlapping products under management, and it's challenging to get quick and complete answers across hybrid, distributed environments or to fully automate detection and response. There is too much confusion about where to get answers, not enough talent to cover the skills requirements, and significant hesitation to put the right solutions in place because there's already been so much investment.

Here are a couple of excerpts:
Here’s the good news: Security solutions are evolving toward cloud, toward built-in intelligence via Machine Learning, and toward unified, integrated-by-design platforms. This approach eliminates the issues of product overlap because each component is designed to leverage the others. It reduces the burden related to maintaining skills because fewer skills are needed and the system is more autonomous. And, it promotes immediate and automated response as opposed to indecision. While there may not be a single platform to replace all 50 or 100 of your disparate security products today, platforms are emerging that can address core security functions while simplifying ownership and providing open integration points to seamlessly share security intelligence across functions.
 ...
 Forward-looking security platforms will leverage hybrid cloud architecture to address hybrid cloud environments. They’re autonomous systems that operate without relying on human maintenance, patching, and monitoring. They leverage risk intelligence from across the numerous available sources. And then they rationalize that data and use Machine Learning to generate better security intelligence and feed that improved intelligence back to the decision points. And they leverage built-in integration points and orchestration functionality to automate response when appropriate.
Click to read the full article: Convergence is the Key to Future-Proofing Security

Social-Engineer Newsletter Vol 08 – Issue 108

 

Vol 08 Issue 108
September 2018

In This Issue

  • Information Security, How Well is it Being Used to Protect Our Children at School?
  • Social-Engineer News
  • Upcoming classes

As a member of the newsletter you have the option to OPT-IN for special offers.


Check out the schedule of upcoming training on Social-Engineer.com

3-4 October, 2018 Advanced Open Source Intelligence for Social Engineers – Louisville, KY (SOLD OUT)

If you want to ensure your spot on the list, register now – classes are filling up fast and early!


The SEVillage at Def Con 26 would not have been possible without its amazing Sponsors!

Thank you to our Sponsor for SEVillage at DerbyCon 8.0!


Do you like FREE Stuff?

How about the first chapter of ALL OF Chris Hadnagy’s Best Selling Books?

If you do, you can register to get the first chapter completely free; just go over to http://www.social-engineer.com to download it now!


To contribute your ideas or writing, send an email to contribute@social-engineer.org


If you want to listen to our past podcasts hit up our Podcasts Page and download the latest episodes.


Our good friends at CSI Tech just put their RAM ANALYSIS COURSE ONLINE – FINALLY.

The course is designed for Hi-Tech Crime Units and other digital investigators who want to leverage RAM to acquire evidence or intelligence which may be difficult or even impossible to acquire from disk. The course does not focus on the complex structures and technology behind how RAM works, but rather on how an investigator can extract what they need for an investigation quickly and simply.

Interested in this course? Enter the code SEORG and get an amazing 15% off!
http://www.csitech.co.uk/training/online-ram-analysis-for-investigators/

You can also pre-order CSI Tech CEO Nick Furneaux’s new book, Investigating Cryptocurrencies: Understanding, Extracting, and Analyzing Blockchain Evidence, now!




A Special Thanks to:

The EFF for supporting freedom of speech


Information Security, How Well is it Being Used to Protect Our Children at School?

August and September are ordinary months to some, but to others they are a time of mixed emotions. It’s the start of another school year. Some are sad to see their children off, while others celebrate that day. The start of the school year brings with it a lot of paperwork and sharing of sensitive information. How well is information security being used to protect our children’s information, and even the school staff’s, personally identifiable information (PII)? How well is it being used to protect against social engineering attacks?

Think about the information that the schools keep; when you registered your child, you may have had to give them copies of their birth certificate, social security number, your phone number, and other personal information. You may have had to give your own social security number, especially if you had to fill out an application for free and reduced-price meals, or you had to register to volunteer at the school. If your child is in a college or university, even more information has to be given, such as financial records, medical records, and high school transcripts. What is being done to keep that information secure?

When I read the following headlines, they make me a little concerned. How about you?

These are only a few of the many stories out there. According to the Breach Level Index by Gemalto, the education sector had 33.4 million records breached in 2017 across a total of 199 reported breaches, a 20% increase in reported incidents over 2016. Seeing the incidents plotted on the K-12 Cyber Incident Map by the K-12 Cybersecurity Resource Center drives home just how widespread they are.

Who is breaching school networks, and why are they doing it?

Who is trying to breach a school’s network? It’s not just students doing it to change grades or for fun; it’s also elite attackers and common cybercriminals. Thanks to the easy availability of hacking tools, and the sharing of malicious attack techniques on the dark web, they are able to install ransomware, encrypt drives, and demand payment to decrypt them. They are also able to exfiltrate PII and passwords to gain further access to networks and to steal and create identities. Identity thieves will use a child’s information to create a false identity with which they can take out credit cards and loans, ruining your child’s credit. When this happens, it can make it difficult to get a license, go to college, or get any loans.

How are they doing it?

Cybercriminals are opportunists who will take advantage of any vulnerability, especially in organizations that are less secure. Unfortunately for educational institutions, their security posture is usually poor, putting them at high risk. They battle staffing and budgetary constraints, have long treated cybersecurity as a low priority, and often view security as an inconvenience.

Another point of weakness is the ease of access to a school’s network. Schools usually have free Wi-Fi, large numbers of desktop and mobile devices, and weak passwords, all of which present potential points of entry into the network. In addition, students will browse the web from insecure networks and often pick up malware, which can then be inadvertently shared with others via email or uploads of coursework to the secure school network.

So, what do cybercriminals do? They use a variety of web- and email-based attacks at their disposal. One web-based attack involves malicious ads on sites where students commonly browse. These are often completely legitimate sites, such as Thesaurus.com. No click is required; just viewing the ad can initiate the malware download.

An example of an email-based (phishing) attack targeting education occurred at Northeastern University, where some Blackboard Learning users were targeted by an email that tried to influence the reader into clicking a link disguised to look legitimate, and tried to compel the action by imposing a time constraint.

With web- and email-based attacks, the cybercriminal can deliver ransomware and steal student records, all at great cost to the school system and to those whose information is compromised.

What can be done?

When it comes to our children we are willing to do anything, so what can we do to protect their information?

Here are some things that parents can do:

1. Make sure that the personal computer that is used to log into the school’s network is up-to-date;

2. Make sure that the computer has more than just antivirus installed; add anti-malware protection as well;

3. Be proactive and educate yourself and your children on security awareness;

  • Read the Social Engineer Framework;
  • Have your child create usernames that don’t contain personal information, such as birth year;
  • Look at using a private VPN when on an insecure network, such as at Starbucks. Trustworthy VPNs will usually have a fee for using them;
  • Teach children the importance of not giving out information;
  • Use a secure password manager and don’t share passwords (a password-generation sketch follows these tips);
  • Make sure teens don’t take a picture of their license and share it on social media; and
  • Don’t throw important documents in the trash, shred them.

4. Be watchful of your student’s browsing activity; and

5. Consider an identity theft protection service to help protect your child against identity theft.
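On the password-manager point above, the goal is passwords that are long, random, and unique per site, which is exactly what password managers generate for you. As a minimal sketch of the same idea, assuming nothing beyond Python's standard library:

```python
import secrets
import string

# Letters, digits, and a modest set of symbols; adjust per site requirements
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*-_"

def make_password(length: int = 16) -> str:
    """Generate a random password using a cryptographically secure RNG."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(make_password())
```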

Remember that just because you are asked to give out information doesn’t mean you have to. Ask, “Why is it necessary for them to have that information?”

Schools need to follow industry best practices in information security, and we, as parents, need to demand that it be done. Schools should also be forced to address the human element in security:

  • Staff, teachers, students, and parents need to be educated and used as a line of defense; and
  • Institute security awareness training which includes: performing simulated phishing exercises; recruiting on-campus security advocates; and holding onsite security education activities, lectures, and in-class training.

Following these suggestions will help to protect our children’s information at school.

Need Inspiration?

If you want some inspiration, look at what some schools are doing:

  • One example comes from a July 2017 article in The Educator, which noted that in San Diego, CA, “the local ESET office runs an annual cyber boot camp for about 50 middle and high school students.”
  • Another example comes from a June 2017 article in The Educator, which discusses how Macquarie University in Australia uses BlackBerry AtHoc as part of the University’s Emergency Management Plan. The system will assist the school in managing and mitigating social engineering incidents, for example by sending a message to staff and students recommending that they not open a certain email or click on a certain link.

To some, these suggestions may be easier said than done, but if they aren’t followed, the school nearest you may be the next cybersecurity incident we read about. Information security must be implemented to protect the sensitive information (PII) housed at schools, especially our children’s information.

Stay safe and secure.

Written By: Mike Hadnagy

Sources:

https://www.theeducatoronline.com/au/news/is-your-school-protected-against-cyber-threats/237855

https://www.theeducatoronline.com/au/technology/infrastructure-and-equipment/how-malware-could-be-threatening-your-school/246146

https://edtechmagazine.com/k12/article/2016/04/how-ever-worsening-malware-attacks-threaten-student-data

https://blogs.cisco.com/education/the-surprisingly-high-cost-of-malware-in-schools-and-how-to-stop-it

https://blog.barkly.com/school-district-malware-cyber-attacks

https://in.pcmag.com/asus-zenpad-s-80-z580ca/124559/news/facebook-serves-up-internet-101-lessons-for-kids

https://www.stuff.co.nz/business/105950814/schools-promised-better-protection-from-ransomware-as-taranaki-school-blackmailed

https://www.eset.com/int/about/why-eset/

As part of the newsletter group, you will be the first to receive special offers to services and products by Social-Engineer.Com.


 

 

The post Social-Engineer Newsletter Vol 08 – Issue 108 appeared first on Security Through Education.

Making Sense of Microsoft’s Endpoint Security Strategy

Microsoft is no longer content to simply delegate endpoint security on Windows to other software vendors. The company has released, fine-tuned or rebranded  multiple security technologies in a way that will have lasting effects on the industry and Windows users. What is Microsoft’s endpoint security strategy and how is it evolving?

Microsoft offers numerous endpoint security technologies, most of which include “Windows Defender” in their name. Some resemble built-in OS features (e.g., Windows Defender SmartScreen), others are free add-ons (e.g., Windows Defender Antivirus), while some are commercial enterprise products (e.g., the EDR component of Windows Defender Advanced Threat Protection). I created a table that explains the nature and dependencies of these capabilities in a single place. Microsoft is in the process of unifying these technologies under the Windows Defender Advanced Threat Protection branding umbrella—the name that originally referred solely to the company’s commercial incident detection and investigation product.

Microsoft’s approach to endpoint security appears to pursue the following objectives:

  • Motivate other vendors to innovate beyond the commodity security controls that Microsoft offers for its modern OS versions. Windows Defender Antivirus and Windows Defender Firewall with Advanced Security (WFAS) on Windows 10 are examples of such tech. Microsoft has been expanding these essential capabilities to be on par with similar features of commercial products. This not only gives Microsoft control over the security posture of its OS, but also forces other vendors to tackle the more advanced problems on the basis of specialized expertise or other strategic abilities.
  • Expand the revenue stream from enterprise customers. To centrally manage Microsoft’s endpoint security layers, organizations will likely need to purchase System Center Configuration Manager (SCCM) or Microsoft Intune. Obtaining some of Microsoft’s security technologies, such as the EDR component of Windows Defender Advanced Threat Protection, requires upgrading to the high-end Windows Enterprise E5 license. By bundling such commercial offerings with other products, rather than making them available in a standalone manner, the company motivates customers to shift all aspects of their IT management to Microsoft.

In pursuing these objectives, Microsoft developed the building blocks that are starting to resemble features of commercial Endpoint Protection Platform (EPP) products. The resulting solution is far from perfect, at least at the moment:

  • Centrally managing and overseeing these components is difficult for companies that haven’t fully embraced Microsoft for all their IT needs or that lack expertise in technologies such as Group Policy.
  • Making sense of the security capabilities, interdependencies and licensing requirements is challenging, frustrating and time-consuming.
  • Most of the endpoint security capabilities worth considering are only available for the latest versions of Windows 10 or Windows Server 2016. Some have hardware dependencies that make them incompatible with older hardware.
  • Several capabilities have dependencies that are incompatible with other products. For instance, security features that rely on Hyper-V prevent the use of the VMware hypervisor on the endpoint.
  • Some technologies are still too immature or impractical for real-world deployments. For example, using my Windows 10 system became unbearable a few days after I enabled the Controlled folder access feature.
  • The layers fit together in an awkward manner at times. For instance, Microsoft provides two app whitelisting technologies—Windows Defender Application Control (WDAC) and AppLocker—that overlap in some functionality.

While infringing on the territory traditionally dominated by third parties on the endpoint, Microsoft leaves room for security vendors to provide value and work together with Microsoft’s security technologies.

Some of Microsoft’s endpoint security technologies still feel disjointed. They’re becoming less so, as the company fine-tunes its approach to security and matures its capabilities. Microsoft is steadily guiding enterprises towards embracing Microsoft as the de facto provider of IT products. Though not all enterprises will embrace an all-Microsoft vision for IT, many will. Endpoint security vendors will need to crystallize their role in the resulting ecosystem, expanding and clarifying their unique value proposition. (Coincidentally, that’s what I’m doing at Minerva Labs, where I run product management.)

Retired Malware Samples: Everything Old is New Again

I’m always on the quest for real-world malware samples that help educate professionals how to analyze malicious software. As techniques and technologies change, I introduce new specimens and retire old ones from the reverse-engineering course I teach at SANS Institute.  Here are some of the legacy samples that were once present in FOR610 materials. Though these malicious programs might not appear relevant anymore, aspects of their functionality are present even in modern malware.

A Backdoor with a Backdoor

To teach fundamental aspects of code-based and behavioral malware analysis, the FOR610 course examined Slackbot at one point. It was an IRC-based backdoor, which its author “slim” distributed as a compiled Windows executable without source code.

Dated April 18, 2000, Slackbot came with a builder that allowed its user to customize the name of the IRC server and channel it would use for Command and Control (C2). Slackbot documentation explained how the remote attacker could interact with the infected system over their designated channel and included this taunting note:

“don’t bother me about this, if you can’t figure out how to use it, you probably shouldn’t be using a computer. have fun. –slim”

Those who reverse-engineered this sample discovered that it had undocumented functionality. In addition to connecting to the user-specified C2 server, the specimen also reached out to a hardcoded server, irc.slim.org.au, that “slim” controlled. The #penix channel gave “slim” the ability to take over all the botnets that his or her “customers” were building for themselves.

It turned out this backdoor had a backdoor! Not surprisingly, backdoors continue to be present in today’s “hacking” tools. For example, I came across a DarkComet RAT builder that was surreptitiously bundled with a DarkComet backdoor of its own.
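For readers who never used IRC, the C2 mechanism behind bots like Slackbot was remarkably simple: a plain TCP socket speaking the textual IRC protocol. The sketch below is a harmless illustration of those mechanics (connect, join a channel, answer keep-alives); the server and channel names are hypothetical, and this is not a reconstruction of Slackbot itself:

```python
import socket

SERVER, PORT = "irc.example.net", 6667  # hypothetical server
NICK, CHANNEL = "demo-client", "#demo"  # hypothetical nick and channel

with socket.create_connection((SERVER, PORT)) as sock:
    def send(line: str) -> None:
        sock.sendall((line + "\r\n").encode())

    send(f"NICK {NICK}")
    send(f"USER {NICK} 0 * :{NICK}")
    send(f"JOIN {CHANNEL}")
    while True:
        data = sock.recv(4096).decode(errors="replace")
        if not data:
            break
        if data.startswith("PING"):  # answer server keep-alives
            send("PONG " + data.split(" ", 1)[1].strip())
        print(data, end="")          # bot commands would arrive as PRIVMSG lines
```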

You Are an Idiot

The FOR610 course used an example of a simple malevolent web page to introduce the techniques for examining potentially-malicious websites. The page, captured below, was a nuisance that insulted its visitors with the following message:

When the visitor attempted to navigate away from the offending site, its JavaScript popped up new instances of the page, making it very difficult to leave. Moreover, each instance of the page played the following jingle on the victim’s speakers. “You are an idiot,” the song exclaimed. “Ahahahahaha-hahahaha!” The cacophony of multiple windows blasting this jingle was overwhelming.

 

A while later I came across a network worm that played this sound file on victims’ computers, though I cannot find that sample anymore. While writing this post, I was surprised to discover a version of this page, sans the multi-window JavaScript trap, residing on www.youareanidiot.org. Maybe it’s true what they say: a good joke never gets old.

Clipboard Manipulation

When Flash reigned supreme among banner ad technologies, the FOR610 course covered several examples of malicious Flash programs. One of the Flash programs we analyzed was a malicious version of the ad pictured below:

At one point, visitors to legitimate websites, such as MSNBC, were reporting that their clipboards appeared “hijacked” when the browser displayed this ad. The advertisement, implemented as a Flash program, was using the ActionScript setClipboard function to replace victims’ clipboard contents with a malicious URL.

The attacker must have expected the victims to blindly paste the URL into messages without looking at what they were sharing. I remembered this sample when reading about a more recent example of malware that replaced Bitcoin addresses stored in the clipboard with the attacker’s own Bitcoin address for payments.
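A defender can watch for exactly this behavior. Below is a minimal sketch (my illustration, not something from the original incident) that polls the clipboard and warns when a Bitcoin-looking address is silently swapped; it relies on the third-party pyperclip library, and the address regex is deliberately simplified:

```python
# pip install pyperclip
import re
import time

import pyperclip

# Simplified pattern for legacy Bitcoin addresses -- illustrative only
BTC_ADDRESS = re.compile(r"\b[13][a-km-zA-HJ-NP-Z1-9]{25,34}\b")

def watch_clipboard(poll_seconds: float = 0.5) -> None:
    """Warn when the Bitcoin-style address on the clipboard changes."""
    last_seen = None
    while True:
        match = BTC_ADDRESS.search(pyperclip.paste() or "")
        if match:
            address = match.group(0)
            # Note: copying a different legitimate address also triggers this
            if last_seen and address != last_seen:
                print(f"[!] Clipboard address changed: {last_seen} -> {address}")
            last_seen = address
        time.sleep(poll_seconds)

if __name__ == "__main__":
    watch_clipboard()
```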

As malware evolves, so do our analysis approaches, and so do the exercises we use in the FOR610 malware analysis course.  It’s fun to reflect upon the samples that at some point were present in the materials. After all, I’ve been covering this topic at SANS Institute since 2001. It’s also interesting to notice that, despite the evolution of the threat landscape, many of the same objectives and tricks persist in today’s malware world.

Scammers Use Breached Personal Details to Persuade Victims

Scammers use a variety of social engineering tactics when persuading victims to follow the desired course of action. One example of this approach involves including in the fraudulent message personal details about the recipient to “prove” that the victim is in the miscreant’s grip. In reality, the sender probably obtained the data from one of the many breaches that provide swindlers with an almost unlimited supply of personal information.

Personalized Porn Extortion Scam

Consider the case of an extortion scam in which the sender claims to have evidence of the victim’s pornography-viewing habits. The scammer demands payment in exchange for suppressing the “compromising evidence.” A variation of this technique was documented by Stu Sjouwerman at KnowBe4 in 2017. In a modern twist, the scammer includes personal details about the recipient—beyond merely the person’s name—such as the password the victim used:

“****** is one of your password and now I will directly come to the point. You do not know anything about me but I know alot about you and you must be thinking why are you getting this e mail, correct?

I actually setup malware on porn video clips (adult porn) & guess what, you visited same adult website to experience fun (you get my drift). And when you got busy enjoying those videos, your web browser started out operating as a RDP (Remote Desktop Protocol) that has a backdoor which provided me with accessibility to your screen and your web camera controls.”

The email includes a demand for payment via cryptocurrency such as Bitcoin to ensure that “Your naughty secret remains your secret.” The sender calls this “privacy fees.” Variations on this scheme are documented in the Blackmail Email Scam thread on Reddit.

The inclusion of the password that the victim used at some point in the past lends credibility to the sender’s claim that the scammer knows a lot about the recipient. In reality, the miscreant likely obtained the password from one of many data dumps that include email addresses, passwords, and other personal information stolen from breached websites.

Data Breach Lawsuit Scam

In another scenario, the scammer uses the knowledge of the victim’s phone number to “prove” possession of sensitive data. The sender poses as an entity that’s preparing to sue the company that allegedly leaked the data:

“Your data is compromised. We are preparing a lawsuit against the company that allowed a big data leak. If you want to join and find out what data was lost, please contact us via this email. If all our clients win a case, we plan to get a large amount of compensation and all the data and photos that were stolen from the company. We have all information to win. For example, we write to your email and include part your number ****** from a large leak.”

The miscreant’s likely objective is to solicit additional personal information from the victim under the guise of preparing the lawsuit, possibly requesting the social security number, bank account details, etc. The sender might have obtained the victim’s name, email address and phone number from a breached data dump, and is phishing for other, more lucrative data.

What to Do?

If you receive a message that solicits payment or confidential data under the guise of knowing some of your personal information, be skeptical. This is probably a mass-mailed scam and your best approach is usually to ignore the message. In addition, keep an eye on the breaches that might have compromised your data using the free and trusted service Have I Been Pwned by Troy Hunt, change your passwords when this site tells you they’ve been breached, and don’t reuse passwords across websites or apps.
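The Pwned Passwords service mentioned above also exposes a k-anonymity API: you send only the first five characters of a password's SHA-1 hash and match the rest locally, so the password itself never leaves your machine. A minimal sketch of that check, using only Python's standard library:

```python
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many times a password appears in the Pwned Passwords corpus."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "pwned-check-example"},  # courtesy identifier
    )
    with urllib.request.urlopen(req) as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

if __name__ == "__main__":
    n = pwned_count("password123")
    print(f"Seen {n:,} times in breaches" if n else "Not found")
```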

Sometimes an extortion note is real and warrants a closer look and potentially law enforcement involvement. Only you know your situation and can decide on the best course of action. Fortunately, every example that I’ve had a chance to examine turned out to be a social engineering trick that recipients were best off ignoring.

To better understand the persuasion tactics employed by online scammers, take a look at my earlier articles on this topic:

 

Cyber is Cyber is Cyber

If you’re in the business of safeguarding data and the systems that process it, what do you call your profession? Are you in cybersecurity? Information security? Computer security, perhaps? The words we use, and the way in which the meaning we assign to them evolves, reflects the reality behind our language. If we examine the factors that influence our desire to use one security title over the other, we’ll better understand the nature of the industry and its driving forces.

Until recently, I had no doubts about describing my calling as an information security professional. Yet the term cybersecurity is growing in popularity. This might be because the industry continues to embrace the lexicon used in government and military circles, where cyber reigns supreme. It might also be due to the familiarity with the word cyber among non-experts.

When I asked on Twitter about people’s opinions on these terms, I received several responses, including the following:

  • Danny Akacki was surprised to discover, after some research, that the origin of cyber goes deeper than the marketing buzzword that many industry professionals believe it to be.
  • Paul Melson and Loren Dealy Mahler viewed cybersecurity as a subset of information security. Loren suggested that cyber focuses on technology, while Paul considered cyber as a set of practices related to interfacing with adversaries.
  • Maggie O’Reilly mentioned Gartner’s model that, in contrast, used cybersecurity as the overarching discipline that encompasses information security and other components.
  • Rik Ferguson also advocated for cybersecurity over information security, viewing cyber as a term that encompasses multiple components: people, systems, as well as information.
  • Jessica Barker explained that “people outside of our industry relate more to cyber,” proposing that if we want them to engage with us, “we would benefit from embracing the term.”

In line with Danny’s initial negative reaction to the word cyber, I’ve perceived cybersecurity as a term associated with heavy-handed marketing practices. Also, like Paul, Loren, Maggie and Rik, I have a sense that cybersecurity and information security are interrelated and somehow overlap. Jessica’s point regarding laypersons relating to cyber piqued my interest and, ultimately, changed my opinion of this term.

There is a way to dig into cybersecurity and information security to define them as distinct terms. For instance, NIST defines cybersecurity as:

“Prevention of damage to, protection of, and restoration of computers, electronic communications systems, electronic communications services, wire communication, and electronic communication, including information contained therein, to ensure its availability, integrity, authentication, confidentiality, and nonrepudiation.”

Compare that description to NIST’s definition of information security:

“The protection of information and information systems from unauthorized access, use, disclosure, disruption, modification, or destruction in order to provide confidentiality, integrity, and availability.”

From NIST’s perspective, cybersecurity is about safeguarding electronic communications, while information security is about protecting information in all forms. This implies that, at least according to NIST, information security is a subset of cybersecurity. While this nuance might be important in some contexts, such as regulations, the distinction probably won’t remain relevant for long, because of the points Jessica Barker raised.

Jessica’s insightful post on the topic highlights the need for security professionals to use language that our non-specialist stakeholders and people at large understand. She outlines a brief history that lends credence to the word cyber. She also explains that while most practitioners seem to prefer information security, this term is the least understood by the public, among whom cybersecurity is much more popular:

“The media have embraced cyber. The board has embraced cyber. The public have embraced cyber. Far from being meaningless, it resonates far more effectively than ‘information’ or ‘data’. So, for me, the use of cyber comes down to one question: what is our goal? If our goal is to engage with and educate as broad a range of people as possible, using ‘cyber’ will help us do that. A bridge has been built, and I suggest we use it.”

Technology and the role it plays in our lives continues to change. Our language evolves with it. I’m convinced that the distinction between cybersecurity and information security will soon become purely academic and ultimately irrelevant even among industry insiders. If the world has embraced cyber, security professionals will end up doing so as well. While I’m unlikely to wean myself off information security right away, I’m starting to gradually transition toward cybersecurity.

Communicating About Cybersecurity in Plain English

When cybersecurity professionals communicate with regular, non-technical people about IT and security, they often use language that virtually guarantees that the message will be ignored or misunderstood. This is often a problem for information security and privacy policies, which are written by subject-matter experts for people who lack the expertise. If you’re creating security documents, take extra care to avoid jargon, wordiness and other issues that plague technical texts.

To strengthen your ability to communicate geeky concepts in plain English, consider the following exercise: Take a boring paragraph from a security assessment report or an information security policy and translate it into a sentence that’s no longer than 15 words without using industry terminology. I’m not suggesting that the resulting statement should replace the original text; instead, I suspect this exercise will train you to write more plainly and succinctly.

For example, I extracted and slightly modified a few paragraphs from the Princeton University Information Security Policy, just so that I could experiment with some public document written in legalese. I then attempted to relay the idea behind each paragraph in the form of a 3-line haiku (5-7-5 syllables per line):

This Policy applies to all Company employees, contractors and other entities acting on behalf of Company. This policy also applies to other individuals and entities granted use of Company information, including, but not limited to, contractors, temporary employees, and volunteers.

If you can read this,
you must follow the rules that
are explained below.

When disclosing Confidential information, the proposed recipient must agree (i) to take appropriate measures to safeguard the confidentiality of the information; (ii) not to disclose the information to any other party for any purpose absent the Company’s prior written consent.

Don’t share without a
contract any information
that’s confidential.

All entities granted use of Company Information are expected to: (i) understand the information classification levels defined in the Information Security Policy; (ii) access information only as needed to meet legitimate business needs.

Know your duties for
safeguarding company info.
Use it properly.

By challenging yourself to shorten a complex concept into a single sentence, you motivate yourself to determine the most important aspect of the text, so you can better communicate it to others. This approach might be especially useful for fine-tuning executive summaries, which often warrant careful attention and wordsmithing. This is just one of the ways in which you can improve your writing skills with deliberate practice.

Security Product Management at Large Companies vs. Startups

Is it better to perform product management of information security solutions at a large company or at a startup? Picking the setting that’s right for you isn’t as simple as craving the exuberant energy of a young firm or coveting the resources and brand of an organization that’s been around for a while. Each environment has its challenges and advantages for product managers. The type of innovation, nature of collaboration, sales dynamics, and cultural nuances are among the factors to consider when deciding which setting is best for you.

The perspective below is based on my product management experiences in the field of information security, though I suspect it’s applicable to product managers in other hi-tech environments.

Product Management at a Large Firm

In the world of information security, industry incumbents are usually large organizations. This is in part because growing in a way that satisfies investors generally requires financial might, brand and customer access that are hard for small cybersecurity companies to achieve. Moreover, customers who are not early adopters often find it easier to focus their purchasing on a single provider of unified infosec solutions. These dynamics set the context for the product manager’s role at large firms.

Access to Customers

Though the specifics differ across organizations, product management often involves defining capabilities and driving adoption. The product manager’s most significant advantage at a large company is probably access to customers. This is due to the size of the firm’s sales and marketing organization, as well as to the large number of companies that have already purchased some of the company’s products.

Such access helps with understanding requirements for new products, improving existing technologies, and finding new customers. For example, you could bring your product to a new geography by using the sales force present in that area without having to hire a dedicated team. Also, it’s easier to upsell a complementary solution than build a new customer relationship from scratch.

Access to Expertise

Another benefit of a large organization is access to funds and expertise that’s sometimes hard to obtain in a young, small company. Instead of hiring a full-time specialist for a particular task, you might be able to draw upon the skills and experience of someone who supports multiple products and teams. In addition, assuming your efforts receive the necessary funding, you might find it easier to pursue product objectives and enter new markets in a way that could be hard for a startup to accomplish. This isn’t always easy, because budgetary planning in large companies can be more onerous than venture capital fundraising.

Organizational Structure

Working in any capacity at an established firm requires that you understand and follow the often-changing bureaucratic processes inherent to any large entity. Depending on the organization’s structure, product managers in such environments might lack the direct control over the teams vital to the success of their product. Therefore, the product manager needs to excel at forming cross-functional relationships and influencing indirectly. (Coincidentally, this is also a key skill-set for many Chief Information Security Officers.)

Sometimes even understanding all of your own objectives and success criteria in such environments can be challenging. It can be even harder to stay abreast of the responsibilities of others in the corporate structure. On the other hand, one of the upsides of a large organization is the room to grow one’s responsibilities vertically and horizontally without switching organizations. This is often impractical in small companies.

What It’s Like at a Large Firm

In a nutshell, these are the characteristics inherent to product management roles at large companies:

  • An established sales organization, which provides access to customers
  • Potentially-conflicting priorities and incentives with groups and individuals within the organization
  • Rigid organizational structure and bureaucracy
  • Potentially-easier access to funding for sophisticated projects and complex products
  • Possibly-easier access to the needed expertise
  • Well-defined career development roadmap

I loved working as a security product manager at a large company. I was able to oversee a range of in-house software products and managed services that focused on data security. One of my solutions involved custom-developed hardware with integrated home-grown and third-party software, serviced by a team of help desk and in-the-field technicians. A fun challenge!

I also appreciated the chance to develop expertise in the industries that my employer serviced, so I could position infosec benefits in the context relevant to those customers. I enjoyed staying abreast of the social dynamics and politics of a siloed, matrixed organization. After a while I decided to leave because I was starting to feel a bit too comfortable. I also developed an appetite for risk and began craving the energy inherent to startups.

Product Management in a Startup

One of the most liberating, yet scary, aspects of product management at a startup is that you’re starting the product from a clean slate. While product managers at established companies often need to account for legacy requirements and internal dependencies, a young firm is generally free of such entanglements, at least at the onset of its journey.

What markets are we targeting? How will we reach customers? What comprises the minimum viable product? Though product managers ask such questions in all types of companies, startups are less likely to survive erroneous answers in the long term. Fortunately, short-term experiments are easier to perform to validate ideas before making strategic commitments.

Experimenting With Capabilities

Working in a small, nimble company allows the product manager to quickly experiment with ideas, get them implemented, introduce them into the field, and gather feedback. In the world of infosec, rapidly iterating through defensive capabilities of the product is useful for multiple reasons, including the ability to assess—based on real-world feedback—whether the approach works against threats.

Have an idea that is so crazy it just might work? In a startup, you’re more likely to have a chance to try some aspect of your approach, so you can rapidly determine whether it’s worth pursuing further. Moreover, given the mindshare that the industry’s incumbents have with customers, fast iterations help the startup understand which of its product capabilities customers will truly value.

Fluid Responsibilities

In all companies, almost every individual has a certain role for which they’ve been hired. Yet the specific responsibilities assigned to that role in a young firm often benefit from the person’s interpretation, and are based on the person’s strengths and the company’s needs at a given moment. A security product manager working at a startup might need to assist with pre-sales activities, take part in marketing projects, perform threat research and potentially develop proof-of-concept code, depending on what expertise the person possesses and what the company requires.

People in a small company are less likely to have an “it’s not my job” attitude than those in highly-structured, large organizations. A startup generally has fewer silos, making it easier to engage in activities that interest the person even if they are outside their direct responsibilities. This can be stressful and draining at times. On the other hand, it makes it difficult to get bored, and it also gives the product manager an opportunity to acquire skills in areas tangential to product management. (For additional details, see my article What’s It Like to Join a Startup’s Executive Team?)

Customer Reach

The product manager’s access to customers and prospects at a startup tends to be more immediate and direct than at a large corporation. This is in part because of the many hats the product manager needs to wear, sometimes acting as a sales engineer and at times helping with support duties. These tasks give the person the opportunity to hear unfiltered feedback from current and potential users of the product.

However, a young company simply lacks a sales force at the scale needed to reach many customers until the firm builds up steam. (See Access to Customers above.) This means that the product manager might need to help identify prospects, which can be outside the comfort zone of individuals who haven’t participated in sales efforts in this capacity.

What It’s Like at a Startup

Here are the key aspects of performing product management at a startup:

  • Ability and need to iterate faster to get feedback
  • Willingness and need to take higher risks
  • Lower bureaucratic burden and red tape
  • Much harder to reach customers
  • Often fewer resources to deliver on the roadmap
  • Fluid designation of responsibilities

I’m presently responsible for product management at Minerva Labs, a young endpoint security company. I’m loving the make-or-break feeling of the startup. For the first time, I’m overseeing the direction of a core product that’s built in-house, rather than managing a solution built upon third-party technology. It’s gratifying to be involved in the creation of new technology in such a direct way.

There are lots of challenges, of course, but every day feels like an adventure as we fight for a seat at the big kids’ table, grow the customer base and break new ground with innovative anti-malware approaches. It’s a risky environment with high highs and low lows, but it feels like the right place for me right now.

Which Setting is Best for You?

Numerous differences between startups and large companies affect the experience of working in these firms. The distinction is especially pronounced for product managers, who oversee the creation of the solutions these companies sell. You need to understand these differences before deciding which environment is best for you, but that’s just a start. Next, consider what is best for you given where you are in life and in your professional development. Sometimes the opportunities that an established firm offers a product manager will be just right; at other times, you will thrive in a startup. Work in the environment that appeals to you, but also know when (or whether) it’s time to make a change.

Information Security and the Zero-Sum Game

A zero-sum game is a mathematical representation of a situation in which each participant’s gain or loss is exactly balanced by the losses or gains of the other participant. In Information Security, a zero-sum game usually refers to the trade-off between being secure and having privacy. However, there is another zero-sum game often played with Information […]

Everything that is happening now has happened before

While looking through old notebooks, I found this piece that I wrote in 2014 for a book that never got published. Reading it through, it surprised me how much we are still facing the same challenges today as we did four years ago. Security awareness and security training are no different… So, you have just … Read More

Ground Control to Major Thom

I recently finished a book called “Into the Black” by Roland White, charting the birth of the space shuttle from the beginnings of the space race through to its untimely retirement. It is a fascinating account of why “space is hard” and exemplifies the need for compromise and balance of risks in even the harshest … Read More

New World, New Rules: Securing the Future State

I published an article today on the Oracle Cloud Security blog that takes a look at how approaches to information security must adapt to address the needs of the future state (of IT). For some organizations, it's really the current state. But, I like the term future state because it's inclusive of more than just cloud or hybrid cloud. It's the universe of Information Technology the way it will be in 5-10 years. It includes the changes in user behavior, infrastructure, IT buying, regulations, business evolution, consumerization, and many other factors that are all evolving simultaneously.

As we move toward that new world, our approach to security must adapt. Humans chasing down anomalies by searching through logs is an approach that will not scale and will not suffice. I included a reference in the article to a book called Afterlife. In it, the protagonist, FBI Agent Will Brody, says, "If you never change tactics, you lose the moment the enemy changes theirs." It's a fitting quote. Not only must we adapt to survive, we need to deploy IT on a platform that's designed for constant change, for massive scale, for deep analytics, and for autonomous security. New World, New Rules.

Here are a few excerpts:
Our environment is transforming rapidly. The assets we're protecting today look very different than they did just a few years ago. In addition to owned data centers, our workloads are being spread across multiple cloud platforms and services. Users are more mobile than ever. And we don’t have control over the networks, devices, or applications where our data is being accessed. It’s a vastly distributed environment where there’s no single, connected, and controlled network. Line-of-Business managers purchase compute power and SaaS applications with minimal initial investment and no oversight. And end-users access company data via consumer-oriented services from their personal devices. It's grown increasingly difficult to tell where company data resides, who is using it, and ultimately where new risks are emerging. This transformation is on-going and the threats we’re facing are morphing and evolving to take advantage of the inherent lack of visibility.
Here's the good news: The technologies that have exacerbated the problem can also be used to address it. On-premises SIEM solutions based on appliance technology may not have the reach required to address today's IT landscape. But, an integrated SIEM+UEBA designed from the ground up to run as a cloud service and to address the massively distributed hybrid cloud environment can leverage technologies like machine learning and threat intelligence to provide the visibility and intelligence that is so urgently needed.
Machine Learning (ML) mitigates the complexity of understanding what's actually happening and of sifting through massive amounts of activity that may otherwise appear to humans as normal. Modern attacks leverage distributed compute power and ML-based intelligence. So, countering those attacks requires a security solution with equal amounts of intelligence and compute power. As Larry Ellison recently said, "It can't be our people versus their computers. We're going to lose that war. It's got to be our computers versus their computers."
Click to read the full article: New World, New Rules: Securing the Future State.

Practical Tips for Creating and Managing New Information Technology Products

This cheat sheet offers advice for product managers of new IT solutions at startups and enterprises. To print it, use the one-page PDF version; you can also edit the Word version to customize it for your own needs.

Responsibilities of a Product Manager

  • Determine what to build, not how to build it.
  • Envision the future of the product domain.
  • Align product roadmap to business strategy.
  • Define specifications for solution capabilities.
  • Prioritize feature requirements, defect correction, technical debt work and other development efforts.
  • Help drive product adoption by communicating with customers, partners, peers and internal colleagues.
  • Participate in the handling of issue escalations.
  • Sometimes take on revenue or P&L responsibilities.

Defining Product Capabilities

  • Understand gaps in the existing products within the domain and how customers address them today.
  • Understand your firm’s strengths and weaknesses.
  • Research the strengths and weaknesses of your current and potential competitors.
  • Define the smallest set of requirements for the initial (or next) release (minimum viable product).
  • When defining product requirements, balance long-term strategic needs with short-term tactical ones.
  • Understand your solution’s key benefits and unique value proposition.

Strategic Market Segmentation

  • Market segmentation often accounts for geography, customer size or industry verticals.
  • Devise a way of grouping customers based on the similarities and differences of their needs.
  • Also account for the similarities in your capabilities, such as channel reach or support abilities.
  • Determine which market segments you’re targeting.
  • Understand similarities and differences between the segments in terms of needs and business dynamics.
  • Consider how you’ll reach prospective customers in each market segment.

Engagement with the Sales Team

  • Understand the nature and size of the sales force aligned with your product.
  • Explore the applicability and nature of a reseller channel or OEM partnerships for product growth.
  • Understand sales incentives pertaining to your product and, if applicable, attempt to adjust them.
  • Look for misalignments, such as recurring SaaS product pricing vs. traditional quarterly sales goals.
  • Assess what other products are “competing” for the sales team’s attention, if applicable.
  • Determine the nature of support you can offer the sales team to train or otherwise support their efforts.
  • Gather sales’ negative and positive feedback regarding the product.
  • Understand which market segments and use-cases have gained the most traction in the product’s sales.

The Pricing Model

  • Understand the value that customers in various segments place on your product.
  • Determine your initial costs (software, hardware, personnel, etc.) related to deploying the product.
  • Compute your ongoing costs related to maintaining the product and supporting its users.
  • Decide whether you will charge customers recurring or one-time (plus maintenance) fees for the product.
  • Understand the nature of customers’ budgets, including any CapEx vs. OpEx preferences.
  • Define the approach to offering volume pricing discounts, if applicable.
  • Define the model for compensating the sales team, including resellers, if applicable.
  • Establish the pricing schedule, setting the price based on perceived value.
  • Account for the minimum desired profit margin.

Product Delivery and Operations

  • Understand the intricacies of deploying the solution.
  • Determine the effort required to operate, maintain and support the product on an ongoing basis.
  • Determine the technical steps, personnel, tools, support requirements and the associated costs.
  • Document the expectations and channels of communication between you and the customer.
  • Establish the necessary vendor relationships for product delivery, if applicable.
  • Clarify which party in the relationship has which responsibilities for monitoring, upgrades, etc.
  • Allocate the necessary support, R&D, QA, security and other staff to maintain and evolve the product.
  • Obtain the appropriate audits and certifications.

Product Management at Startups

  • Ability and need to iterate faster to get feedback
  • Willingness and need to take higher risks
  • Lower bureaucratic burden and red tape
  • Much harder to reach customers
  • Often fewer resources to deliver on the roadmap
  • Fluid designation of responsibilities

Product Management at Large Firms

  • An established sales organization, which provides access to customers
  • Potentially-conflicting priorities and incentives among groups and individuals within the organization
  • Rigid organizational structure and bureaucracy
  • Potentially-easier access to funding for sophisticated projects and complex products
  • Possibly-easier access to the needed expertise
  • Well-defined career development roadmap

Post-Scriptum

Authored by Lenny Zeltser, who’s been responsible for product management of information security solutions at companies large and small. This cheat sheet, version 1.0, is released under the Creative Commons v3 “Attribution” License.

Hybrid Analysis Grows Up – Acquired by CrowdStrike

CrowdStrike acquired Payload Security, the company behind the automated malware analysis sandbox technology Hybrid Analysis, in November 2017. Jan Miller founded Payload Security approximately 3 years earlier. The interview I conducted with Jan in early 2015 captured his mindset at the onset of the journey that led to this milestone. I briefly spoke with Jan again, a few days after the acquisition. He reflected upon his progress over the three years of leading Payload Security so far and his plans for Hybrid Analysis as part of CrowdStrike.

Jan, why did you and your team decide to join CrowdStrike?

Developing a malware analysis product requires a constant stream of improvements to the technology, not only to keep up with malware authors’ attempts to evade automated analysis, but also to innovate and embrace the community. The team has accomplished a lot thus far, but joining CrowdStrike gives us access to far more resources and lets us grow the team to rapidly improve Hybrid Analysis in the competitive space we live in. We will be able to bring more people onto the team and also enhance and grow the infrastructure and integrations behind the free Hybrid Analysis community platform.

What role did the free version of your product, available at hybrid-analysis.com, play in the company’s evolution?

A lot of people in the community have been using the free version of Hybrid Analysis to analyze their own malware samples, share them with friends or look up existing analysis reports and extract intelligence. Today, the site has approximately 44,000 active users and around 1 million sessions per month. One of the reasons the site took off is the simplicity and quality of the reports, which focus on what matters and enable effective incident response.

The success of Hybrid Analysis was, to a large extent, due to engagement from the community. The samples we received allowed us to constantly field-test the system against the latest malware, stay on top of the game and embrace feedback from security professionals. This allowed us to keep improving, rapidly and successfully, in a competitive space.

What will happen to the free version of Hybrid Analysis? I saw on Twitter that your team pinky-promised to continue making it available for free to the community, but I was hoping you could comment further on this.

I’m personally committed to ensuring that the community platform will stay not only free, but grow even more useful and offer new capabilities shortly. Hybrid Analysis deserves to be the place for professionals to get a reasoned opinion about any binary they’ve encountered. We plan to open up the API, add more integrations and other free capabilities in the near future.

What stands out in your mind as you reflect upon your Hybrid Analysis journey so far? What’s motivating you to move forward?

Starting out without any noteworthy funding, co-founders or advisors, in a saturated, fast-paced, money-filled high-tech market, it seemed impossible on paper to succeed. But the reality is: if you are offering a product or service that solves a real-world problem considerably better than the market leaders, you always have a chance. My hope is that people considering becoming entrepreneurs will be encouraged to pursue their ideas. Be prepared to work 80 hours a week, build the right technology, embrace feedback from the community and lean on amazing team members and insightful advisors, and you can make it happen.

In fact, it’s because of the value Hybrid Analysis has been adding to the community that I was able to attract the highly talented individuals that are currently on the team. It has always been important for me to make a difference, to contribute something and have a true impact on people’s lives. It all boils down to bringing more light than darkness into the world, as cheesy as that might sound.

Hyperbole in Breach Reporting

While reading the news this morning about yet another successful data breach, I couldn't help but wonder if the hyperbole used in reporting about data breaches is stifling our ability to educate key stakeholders on what they really need to know.

Today's example is about a firm that many rely on for security strategy, planning, and execution. The article I read stated that they were "targeted by a sophisticated hack" but later explains that the attacker compromised a privileged account that provided unrestricted "access to all areas". And, according to sources, the account only required a basic password with no two-step or multi-factor authentication. That doesn't sound too sophisticated, does it? Maybe they brute-forced it, or maybe they just guessed the password (or found it written down in an office?)

It reminded me of an attack on a security vendor back in 2011. As I recall, there was a lot of talk of the sophistication and complexity of the attack. It was called an Advanced Persistent Threat (and maybe some aspects of it were advanced). But, when the facts came out, an employee had simply opened an email attachment that introduced malware into the environment - again, not overly sophisticated in terms of what we imagine a hack to be.

The quantity, availability, and effectiveness of attack techniques are enough to make anyone uncomfortable with their security posture. I previously wrote about a German company who, in a breach response, wrote that it is "virtually impossible to provide viable protection against organized, highly professional hacking attacks." CISOs are being told that they should expect to be breached. The only questions are about when and how to respond. It makes you feel like there's no hope; like there's no point in trying.

However, if you look at the two examples above that were described as highly sophisticated, they may have been avoided with simple techniques such as employee education, malware detection, and multi-factor authentication. I don't mean to over-simplify. I'm not saying it's all easy or that these companies are at fault or negligent. I'm just calling for less hyperbole in the reporting. Call out the techniques that help companies avoid similar attacks. Don't describe an attack as overly sophisticated if it's not. It makes people feel even more helpless when, perhaps, there are some simple steps that can be taken to reduce the attack surface.

I'd also advocate for more transparency from those who are attacked. Companies shouldn't feel like they have to make things sound more complicated or sophisticated than they are. There's now a growing history of reputable companies (including in the security industry) who have been breached. If you're breached, you're in good company. Let's talk in simple terms about the attacks that happen in the real world. An "open kimono" approach will be more effective at educating others in prevention. And again, less hyperbole - we don't need to overplay to emotion here. Everyone is scared enough. We know the harsh reality of what we (as security professionals) are facing. So, let's strive to better understand the real attack surface and how to prioritize our efforts to reduce the likelihood of a breach.

Encryption would NOT have saved Equifax

I read a few articles this week suggesting that the big question for Equifax is whether or not their data was encrypted. The State of Massachusetts, speaking about the lawsuit it filed, said that Equifax "didn't put in safeguards like encryption that would have protected the data." Unfortunately, encryption, as it's most often used in these scenarios, would not have actually prevented the exposure of this data. This breach will have an enormous impact, so we should be careful to get the facts right and provide as much education as possible to lawmakers and really to anyone else affected.

We know that the attack took advantage of a flaw in Apache Struts (one that should have been patched). Struts is a framework for building applications. It lives at the application tier. The data, obviously, resides at the data tier. Once the application was compromised, it really didn't matter whether the data was encrypted, because the application is allowed to access (and therefore to decrypt) the data.

I won't get into all the various encryption techniques that are possible but there are two common types of data encryption for these types of applications. There's encryption of data in motion so that nobody can eavesdrop on the conversation as data moves between tiers or travels to the end users. And there's encryption of data at rest that protects data as it's stored on disk so that nobody can pick up the physical disk (or the data file, depending on how the encryption is applied) and access the data. Once the application is authenticated against the database and runs a query against the data, it is able to access, view, and act upon the data even if the data was encrypted while at rest.

Note that there is a commonly-applied technique that applies at-rest encryption at the application tier. I don't want to confuse the conversation with too much detail, but it usually involves inserting some code into the application to encrypt/decrypt. I suspect that if the application is compromised then app-tier encryption would have been equally unhelpful.

The bottom line here is that information security requires a broad, layered defense strategy. There are numerous types of attacks. A strong security program addresses as many potential attack vectors as possible within reason. (My use of "within reason" is a whole other conversation. Security strategies should evaluate risk in terms of likelihood of an attack and the damage that could be caused.) I already wrote about a layered approach to data protection within the database tier. But that same approach of layering security applies to application security (and information security in general). You have to govern the access controls, ensure strong enough authentication, understand user context, identify anomalous behavior, encrypt data, and, of course, patch your software and maintain your infrastructure. This isn't a scientific analysis. I'm just saying that encryption isn't a panacea and probably wouldn't have helped at all in this case.

Equifax says that their "security organization was aware of this vulnerability at that time, and took efforts to identify and to patch any vulnerable systems in the company's IT infrastructure." Clearly, humans need to rely on technology to help identify what systems exist in the environment, what software is installed, which versions, etc. I have no idea what tools Equifax might have used to scan their environment. Maybe the tool failed to find this install. But their use of "at that time" bothers me too. We can't rely on point-in-time assessments. We need continuous evaluations on a never ending cycle. We need better intelligence around our IT infrastructures. And as more workloads move to cloud, we need a unified approach to IT configuration compliance that works across company data centers and multi-cloud environments.

100% protection may be impossible. The best we can do is weigh the risks and apply as much security as possible to mitigate those risks. We should also all be moving to a continuous compliance model where we are actively assessing and reassessing security in real time. And again... layer, layer, layer.

Tips for Reverse-Engineering Malicious Code

This cheat sheet outlines tips for reversing malicious Windows executables via static and dynamic code analysis with the help of a debugger and a disassembler. To print it, use the one-page PDF version; you can also edit the Word version to customize it for your own needs.

Overview of the Code Analysis Process

  1. Examine static properties of the Windows executable for initial assessment and triage.
  2. Identify strings and API calls that highlight the program’s suspicious or malicious capabilities (see the triage sketch after this list).
  3. Perform automated and manual behavioral analysis to gather additional details.
  4. If relevant, supplement your understanding by using memory forensics techniques.
  5. Use a disassembler for static analysis to examine code that references risky strings and API calls.
  6. Use a debugger for dynamic analysis to examine how risky strings and API calls are used.
  7. If appropriate, unpack the code and its artifacts.
  8. As your understanding of the code increases, add comments and labels; rename functions and variables.
  9. Progress to examine the code that references or depends upon the code you’ve already analyzed.
  10. Repeat steps 5-9 above as necessary (the order may vary) until analysis objectives are met.
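
To make the first two steps concrete, here is a minimal Python sketch of a static-properties triage pass. It assumes the pefile library is available (pip install pefile) and that sample.exe is a placeholder for the specimen; treat it as a starting point, not a complete triage tool.

    # First-pass triage: hashes, compile timestamp, section entropy.
    import hashlib
    import pefile

    path = "sample.exe"  # placeholder for the specimen under analysis

    data = open(path, "rb").read()
    print("SHA-256:", hashlib.sha256(data).hexdigest())

    pe = pefile.PE(path)
    print("Compile timestamp:", hex(pe.FILE_HEADER.TimeDateStamp))

    # High-entropy sections often indicate packed or encrypted code.
    for section in pe.sections:
        name = section.Name.rstrip(b"\x00").decode(errors="replace")
        print(name, "entropy:", round(section.get_entropy(), 2))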

Common 32-Bit Registers and Uses

EAX Addition, multiplication, function results
ECX Counter; used by LOOP and others
EBP Baseline/frame pointer for referencing function arguments (EBP+value) and local variables (EBP-value)
ESP Points to the current “top” of the stack; changes via PUSH, POP, and others
EIP Instruction pointer; points to the next instruction; shellcode gets it via call/pop
EFLAGS Contains flags that store outcomes of computations (e.g., Zero and Carry flags)
FS F segment register; FS[0] points to SEH chain, FS[0x30] points to the PEB.

Common x86 Assembly Instructions

mov EAX,0xB8 Put the value 0xB8 in EAX.
push EAX Put EAX contents on the stack.
pop EAX Remove contents from top of the stack and put them in EAX.
lea EAX,[EBP-4] Put the address of variable EBP-4 in EAX.
call EAX Call the function whose address resides in the EAX register.
add esp,8 Increase ESP by 8 to shrink the stack by two 4-byte arguments.
sub esp,0x54 Shift ESP by 0x54 to make room on the stack for local variable(s).
xor EAX,EAX Set EAX contents to zero.
test EAX,EAX Check whether EAX contains zero, set the appropriate EFLAGS bits.
cmp EAX,0xB8 Compare EAX to 0xB8, set the appropriate EFLAGS bits.
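
If you want to experiment with instructions like these outside of a debugger, a disassembly library helps. The sketch below uses the Capstone engine (pip install capstone) to decode a few hand-assembled opcode bytes; the byte string is illustrative, not taken from any particular specimen.

    # Decode raw 32-bit x86 opcodes with Capstone.
    from capstone import Cs, CS_ARCH_X86, CS_MODE_32

    # Hand-assembled: mov eax,0xB8 / push eax / pop ecx / xor eax,eax
    code = b"\xb8\xb8\x00\x00\x00\x50\x59\x31\xc0"

    md = Cs(CS_ARCH_X86, CS_MODE_32)
    for insn in md.disasm(code, 0x401000):  # 0x401000 is an arbitrary base
        print("0x%x: %s %s" % (insn.address, insn.mnemonic, insn.op_str))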

Understanding 64-Bit Registers

  • EAX→RAX, ECX→RCX, EBX→RBX, ESP→RSP, EIP→RIP
  • Additional 64-bit registers are R8-R15.
  • RSP is often used to access stack arguments and local variables, instead of EBP.
  • R8 is 64 bits wide; its low 32 bits are addressable as R8D, its low 16 bits as R8W, and its low 8 bits as R8B.

Passing Parameters to Functions

arg0 [EBP+8] on 32-bit, RCX on 64-bit
arg1 [EBP+0xC] on 32-bit, RDX on 64-bit
arg2 [EBP+0x10] on 32-bit, R8 on 64-bit
arg3 [EBP+0x14] on 32-bit, R9 on 64-bit

Decoding Conditional Jumps

JA / JG Jump if above (unsigned) / jump if greater (signed).
JB / JL Jump if below (unsigned) / jump if less (signed).
JE / JZ Jump if equal; same as jump if zero.
JNE / JNZ Jump if not equal; same as jump if not zero.
JGE / JNL Jump if greater or equal; same as jump if not less.

Some Risky Windows API Calls

  • Code injection: CreateRemoteThread, OpenProcess, VirtualAllocEx, WriteProcessMemory, EnumProcesses
  • Dynamic DLL loading: LoadLibrary, GetProcAddress
  • Memory scraping: CreateToolhelp32Snapshot, OpenProcess, ReadProcessMemory, EnumProcesses
  • Data stealing: GetClipboardData, GetWindowText
  • Keylogging: GetAsyncKeyState, SetWindowsHookEx
  • Embedded resources: FindResource, LockResource
  • Unpacking/self-injection: VirtualAlloc, VirtualProtect
  • Query artifacts: CreateMutex, CreateFile, FindWindow, GetModuleHandle, RegOpenKeyEx
  • Execute a program: WinExec, ShellExecute, CreateProcess
  • Web interactions: InternetOpen, HttpOpenRequest, HttpSendRequest, InternetReadFile
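
One quick way to spot such APIs during triage is to scan the specimen’s import table. Below is a minimal sketch using the pefile library; the watch list samples the categories above (only the ANSI “A” variants are shown, so real tooling should also match “W” variants and imports by ordinal), and sample.exe is a placeholder.

    # Flag risky imports in a PE's import table (pip install pefile).
    import pefile

    RISKY = {
        b"CreateRemoteThread", b"WriteProcessMemory", b"VirtualAllocEx",
        b"ReadProcessMemory", b"GetProcAddress", b"SetWindowsHookExA",
        b"GetAsyncKeyState", b"VirtualProtect", b"CreateProcessA",
        b"InternetOpenA", b"HttpSendRequestA", b"WinExec",
    }

    pe = pefile.PE("sample.exe")  # placeholder path
    for entry in getattr(pe, "DIRECTORY_ENTRY_IMPORT", []):
        for imp in entry.imports:
            if imp.name and imp.name in RISKY:
                print("risky import:", entry.dll.decode(), imp.name.decode())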

Additional Code Analysis Tips

  • Be patient but persistent; focus on small, manageable code areas and expand from there.
  • Use dynamic code analysis (debugging) for code that’s too difficult to understand statically.
  • Look at jumps and calls to assess how the specimen flows from “interesting” code block to the other.
  • If code analysis is taking too long, consider whether behavioral or memory analysis will achieve the goals.
  • When looking for API calls, know the official API names and the associated native APIs (Nt, Zw, Rtl).

Post-Scriptum

Authored by Lenny Zeltser with feedback from Anuj Soni. Malicious code analysis and related topics are covered in the SANS Institute course FOR610: Reverse-Engineering Malware, which they’ve co-authored. This cheat sheet, version 1.0, is released under the Creative Commons v3 “Attribution” License.

Layered Database Security in the age of Data Breaches

We live in a time of daily breach notifications. One recently affected organization in Germany put out a statement which said: "The incident is not attributable to security deficiencies." and "Human error can also be ruled out." They went on to say that it is "virtually impossible to provide viable protection against organized, highly professional hacking attacks." It's a tough climate we find ourselves in. It just feels too hard or impossible at times. And there's some truth to that. There are way too many potential attack vectors for comfort.

Many breaches occur in ways that make it difficult to pinpoint exactly what might have prevented it. Or, the companies involved hide details about what actually happened or how. In some cases, they lie. They might claim there was some Advanced Persistent Threat on the network when in reality, it was a simple phishing attack where credentials were simply handed over.

In one recent case, a third party vendor apparently uploaded a database file to an unsecured Amazon AWS server. A media outlet covering the story called out that it was not hacking because the data was made so easily available. Numerous checkpoints come to mind that each could have prevented or lessened the damage in this scenario. I’d like to paint a picture of the numerous layers of defense that should be in place to help prevent this type of exposure.

Layer 1: Removing Production Data
The data should have been long removed from the database.
Assuming this is a non-production database (and I sure hope it is), it should have been fully masked before it was even saved as a file. Masking data means completely removing the original sensitive data and replacing it with fake data that looks and acts real. This enables safe use of the database for app development, QA, and testing. Data can be masked as it’s exported from the production database (most secure) or in a secure staging environment after the initial export. Had this step been done, the database could safely be placed on an insecure AWS server with limited security concerns because there’s no real data. An attacker could perhaps use the DB schema or other details to better formulate an attack on the production data, so I’m not recommending posting masked databases publicly, but the risk of data loss is severely limited once the data is masked.

Layer 2: Secure Cloud Server Configuration
The researcher should never have been able to get to the file.
A security researcher poking around the web should never have been able to access this database file. Proper server configuration and access controls should prevent unauthorized access to any files (including databases). In addition to documenting proper security configuration, certain Cloud Access Security Brokers (CASBs) can be used to continuously monitor AWS instances to ensure that server configurations match the corporate guidelines. Any instances of configuration drift can be auto-remediated with these solutions to ensure that humans don’t accidentally misconfigure servers or miss security settings in the course of daily administration.
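
Continuous configuration checking doesn’t require a full CASB to get started. As a minimal sketch of the idea, the Python script below uses boto3 to flag S3 buckets whose ACLs grant access to all users; a real deployment would check far more settings (bucket policies, public-access blocks, instance configurations) and auto-remediate rather than just print.

    # Flag S3 buckets whose ACL grants access to everyone (pip install boto3).
    import boto3

    ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        acl = s3.get_bucket_acl(Bucket=bucket["Name"])
        for grant in acl["Grants"]:
            if grant["Grantee"].get("URI") == ALL_USERS:
                print("PUBLIC:", bucket["Name"], grant["Permission"])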

Layer 3: Apply Database Encryption
Even with access to the database file, the researcher should not have been able to access the data.
At-rest data encryption that is built into the database protects sensitive data against this type of scenario. Even if someone has the database file, if it were encrypted, the file would essentially be useless. An attacker would have to implement an advanced crypto attack which would take enormous resources and time to conduct and is, for all intents and purposes, impractical. Encryption is a no-brainer. Some organizations use disk-layer encryption, which is OK in the event of lost or stolen disk. However, if a database file is moved to an unencrypted volume, it is no longer protected. In-database encryption improves security because the security stays with the file regardless of where it’s moved or exported. The data remains encrypted and inaccessible without the proper encryption keys regardless of where the database file is moved.

Layer 4: Apply Database Administrative Controls
Even with administrative permissions to the database, the researcher should not have been able to access the sensitive data.
I’m not aware of similar capabilities outside of Oracle database, but Oracle Database Vault would have also prevented this breach by implementing access controls within the database. Database Vault effectively segregates roles (enforces Separation of Duties) so that even an attacker with DBA permissions and access to the database file and encryption keys cannot run queries against the sensitive application data within the database because their role does not allow it. This role-based access, enforced within the database, is an extremely effective control to avoid accidental access that may occur throughout the course of daily database administration.

Layer 5: Protect Data Within the Database
Even with full authorization to application data, highly sensitive fields should be protected within the database.
Assuming all of the other layers break down and you have full access to the unencrypted database file and credentials that are authorized to access the sensitive application data, certain highly sensitive fields should be protected via application-tier encryption. Social Security Numbers and Passwords, for example, shouldn’t be stored in plain text. By applying protection for these fields at the app layer, even fully authorized users wouldn’t have access. We all know that passwords should be hashed so that the password field is only useful to the individual user who enters their correct password. But other fields, like SSN, can be encrypted at the app layer to protect against accidental exposure (human error), intentional insider attack, or exposed credentials (perhaps via phishing attack).
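
Here’s a minimal Python sketch of both ideas, using hashlib’s scrypt for one-way password hashing and the cryptography package’s Fernet construction for reversible field encryption. Key management, the hard part, is waved away here: in practice the key lives in a vault or HSM, never alongside the data.

    # App-tier protection: hash passwords, encrypt SSNs before storage.
    import hashlib, os
    from cryptography.fernet import Fernet  # pip install cryptography

    # Passwords: salted one-way hash; verify by recomputing, never decrypt.
    salt = os.urandom(16)
    pw_hash = hashlib.scrypt(b"correct horse", salt=salt, n=2**14, r=8, p=1)
    # Store salt + pw_hash in the password column, never the password itself.

    # SSN: reversible encryption with a key held outside the database.
    key = Fernet.generate_key()
    f = Fernet(key)
    token = f.encrypt(b"078-05-1120")       # store this, not the raw SSN
    assert f.decrypt(token) == b"078-05-1120"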

Maybe the vendor didn’t follow the proper protocols instituted by the organization. Maybe they made a human error; we all make mistakes. But, that’s why a layered approach to database security is critical on any database instances where sensitive production data resides. Security protocols shouldn’t require humans to make the right decisions. They should apply security best practices by default and without option.

Assuming this was a non-production database, any sensitive data should have been fully masked/replaced before it was even made available. And, if it was a production DB, database encryption and access control protections that stay with the database during export or if the database file is moved away from an encrypted volume should have been applied. The data should have been protected before the vendor's analyst ever got his/her hands on it. Oracle Database Vault would have prevented even a DBA-type user from being able to access the sensitive user data that was exposed here. These are not new technologies; they’ve been around for many years with plentiful documentation and industry awareness.

Unfortunately, a few of the early comments I read on this particular event were declarations or warnings about how this proves that cloud is less secure than on-premises deployments. I don’t agree. Many cloud services are configured with security by default and offer far more protection than company-owned data centers. Companies should seek cloud services that enable security by default and that offer layered security controls; more security than their own data centers. It’s more than selecting the right Cloud Service Provider. You also need to choose the right service; one that matches the specific needs (including security needs) of your current project. The top CSPs offer multiple IaaS and/or PaaS options that may meet the basic project requirements. While cloud computing grew popular because it’s easy and low cost, ease-of-use and cost are not always the most important factors when choosing the right cloud service. When sensitive data is involved, security needs to be weighed heavily when making service decisions.

I'll leave you with this. Today's computing landscape is extremely complex and constantly changing. But security controls are evolving to address what has been called the extended enterprise (which includes cloud computing and user mobility among other characteristics). Don't leave security in the hands of humans. And apply security in layers to cover as many potential attack vectors as possible. Enable security by default and apply automated checks to ensure that security configuration guidelines are being followed.

Note: Some of the content above is based on my understanding of Oracle security products (encryption, masking, CASB, etc.) Specific techniques or advantages mentioned may not apply to other vendors’ similar solutions.

Security is Not, and Should not be Treated as, a Special Flower

My normal Wednesday lunch yesterday was rudely interrupted by my adequate friend and reasonable security advocate Javvad calling me to ask my opinion on something. This in itself was surprising enough, but the fact that I immediately gave a strong and impassioned response told me this might be something I needed to explore further… The UK … Read More

A Glimpse at Petya Ransomware

Ransomware has become an increasingly serious threat. CryptoWall, TeslaCrypt and Locky are just some of the ransomware variants that have infected large numbers of victims. Petya is the newest strain and the most devious among them.

Petya doesn’t merely encrypt files: it renders the system completely unusable, leaving the victim no choice but to pay the ransom. It encrypts the filesystem’s Master File Table (MFT), which leaves the operating system unable to load. The MFT is an essential component of the NTFS file system; it contains a record for every file and directory on an NTFS logical volume, and each record holds the particulars the operating system needs to locate and load that file.

Like many other malware families, Petya is widely distributed via spear-phishing: a job-application email arrives with a Dropbox link, luring the victim with the claim that the link contains a self-extracting CV. In fact, it contains a self-extracting executable that later unleashes its malicious behavior.

Petya’s dropper

Petya’s infection behavior

Petya ransomware has two infection stages. The first stage is MBR infection and encryption-key generation, including the decryption code used in the ransom message. The second stage is MFT encryption.

First Stage of Encryption

First infection stage behavior

The MBR infection is performed through straightforward manipulation of \\.\PhysicalDrive0 with the help of the DeviceIoControl API. The dropper first retrieves the physical location of the root volume \\.\C: by sending the IOCTL_VOLUME_GET_VOLUME_DISK_EXTENTS control code to the device driver, and then queries the extended disk partition information of \\.\PhysicalDrive0 through another DeviceIoControl call.
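
To see what that first query returns on a healthy system, you can reproduce the call from Python via ctypes. This is a benign, read-only sketch of the same DeviceIoControl usage (Windows only, typically run as administrator; error handling omitted for brevity); the structure layout follows winioctl.h, and 0x560000 is the value of IOCTL_VOLUME_GET_VOLUME_DISK_EXTENTS.

    # Ask which physical disk hosts C: via IOCTL_VOLUME_GET_VOLUME_DISK_EXTENTS.
    import ctypes
    from ctypes import wintypes

    IOCTL_VOLUME_GET_VOLUME_DISK_EXTENTS = 0x560000
    OPEN_EXISTING = 3
    SHARE_READ_WRITE = 0x1 | 0x2

    class DISK_EXTENT(ctypes.Structure):
        _fields_ = [("DiskNumber", wintypes.DWORD),
                    ("StartingOffset", ctypes.c_longlong),
                    ("ExtentLength", ctypes.c_longlong)]

    class VOLUME_DISK_EXTENTS(ctypes.Structure):
        _fields_ = [("NumberOfDiskExtents", wintypes.DWORD),
                    ("Extents", DISK_EXTENT * 1)]

    k32 = ctypes.windll.kernel32
    k32.CreateFileW.restype = wintypes.HANDLE
    volume = k32.CreateFileW(r"\\.\C:", 0, SHARE_READ_WRITE, None,
                             OPEN_EXISTING, 0, None)
    extents = VOLUME_DISK_EXTENTS()
    returned = wintypes.DWORD(0)
    k32.DeviceIoControl(wintypes.HANDLE(volume),
                        IOCTL_VOLUME_GET_VOLUME_DISK_EXTENTS,
                        None, 0, ctypes.byref(extents), ctypes.sizeof(extents),
                        ctypes.byref(returned), None)
    print("C: resides on PhysicalDrive%d" % extents.Extents[0].DiskNumber)
    k32.CloseHandle(wintypes.HANDLE(volume))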

The dropper encrypts the original MBR by XORing each byte with 0x37 and saves it for later use. It also writes 34 disk sectors filled with the byte 0x37; right after these 34 sectors sits Petya’s MFT-infecting code. The original, encrypted MBR is located at sector 56.
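
The XOR encoding is trivial to replicate, and because XOR is symmetric, the same operation also decodes an encoded sector. A one-screen Python sketch, assuming mbr_dump.bin is a hypothetical 512-byte sector pulled from a disk image:

    # XOR a dumped 512-byte MBR sector with 0x37, as Petya's dropper does.
    data = open("mbr_dump.bin", "rb").read(512)
    encoded = bytes(b ^ 0x37 for b in data)   # run again to decode
    open("mbr_xored.bin", "wb").write(encoded)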

Infected disk view

Original encrypted MBR

After infecting the MBR, the dropper intentionally crashes the system by calling NtRaiseHardError. This triggers a BSOD, and the machine restarts and boots from the infected MBR.

Code snippet triggering BSOD

BSOD

Inspecting a dumped image of the disk, we discovered that it shows a fake CHKDSK screen, along with the ransom message and ASCII skull art.

Dumped disk image

Second Infection Stage

The stage-two infection code is 16-bit code that relies on BIOS interrupt calls.

Upon system boot, the machine loads Petya’s malicious code, located at sector 34, into memory. The code first determines whether the system is already infected by checking whether the first byte of that sector is 0x0. If the system is not yet infected, it displays the fake CHKDSK screen.

Fake CHKDSK

Seeing the screen in Figure 8 means that the MFT has already been encrypted using the Salsa20 algorithm.

Figure 8
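
For readers unfamiliar with Salsa20: it is a stream cipher, so the same key and nonce regenerate the same keystream, and decryption is the same operation as encryption. Here is a minimal Python sketch using PyCryptodome (pip install pycryptodome); it illustrates the algorithm in general, not Petya’s custom 16-bit implementation.

    # Salsa20 stream-cipher round trip.
    from Crypto.Cipher import Salsa20
    from Crypto.Random import get_random_bytes

    key = get_random_bytes(32)   # 256-bit key
    nonce = get_random_bytes(8)  # 64-bit nonce

    ciphertext = Salsa20.new(key=key, nonce=nonce).encrypt(b"MFT record bytes")
    # The same key and nonce regenerate the keystream for decryption.
    plaintext = Salsa20.new(key=key, nonce=nonce).decrypt(ciphertext)
    assert plaintext == b"MFT record bytes"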

The victim will see this screen upon boot.

Ransom message and instructions

Petya Ransomware Page

The webpage where the victim retrieves their personal decryption key is protected against bots and contains information about when the Petya ransomware project was launched, warnings about what not to do when recovering files, and an FAQ page. The page is surprisingly user-friendly and shows the number of days left before the ransom price doubles.

Ransom page captcha

Petya’s homepage

It also contains news feeds, including blog posts and news from AV companies warning about Petya.

News feeds

The site also provides a step-by-step process for paying the ransom, including instructions on how to purchase Bitcoin, and offers web-based support in case the victim encounters problems with the transaction. Petya’s ransom is also a lot cheaper compared to other ransomware.

Petya payment pages 1-4

In Step 4 of the payment procedure, the “Next” button is disabled until the attackers confirm that they have received the payment.

Petya’s support page

Below is a shot of ThreatTrack’s ThreatSecure Network dashboard catching Petya. Tools like ThreatSecure can detect and disrupt attacks in real time.

ThreatSecure Network catching Petya ransomware
The post A Glimpse at Petya Ransomware appeared first on ThreatTrack Security Labs Blog.

To Reform and Institutionalize Research for Public Safety (and Security)

On October 3rd, 2014 a petition appeared on the Petitions.WhiteHouse.gov website titled "Unlocking public access to research on software safety through DMCA and CFAA reform". I encourage you to go read the text of the petition yourself.

While I believe that on the whole the CFAA and, more urgently, the DMCA need dramatic reforms if not to be flat-out dumped, I'm just not sure I'm completely on board with where this idea is going. I've discussed my displeasure with the CFAA on a few of our recent podcasts if you follow our Down the Security Rabbithole Podcast series, and I would likely throw a party if the DMCA were repealed tomorrow - but unlocking "research" broadly is dangerous.

There is no doubt in my mind that security research is critical in exposing safety and security issues in matters that affect the greater public good. However, let's not confuse legitimate research with thinly veiled extortion or a license to hack at will. We can all remember the incident Apple had where a hacker purportedly had exposed a flaw in their online forums, then, to prove his point, exploited the vulnerability and extracted data of real users. All in the name of "research," right? I don't think so.

You see, what a recent conversation with Shawn Tuma taught me is that under the CFAA we have one of these "I'll know it when I see it" conditions, where a prosecuting attorney can choose to either go after someone or look the other way if they believe the person was acting in good faith and for the public good... or some such. This type of power makes me uncomfortable, as it gives that prosecuting attorney way too much room. Room for what, you ask? How about room to be swayed by a big corporation... I'm looking at you, AT&T.

Let me lay out a scenario for you. Say you are a security professional interested in home automation and alarm systems. You purchase one and begin to research the types of vulnerabilities such a system is open to - since you'll be installing it in your home and all. You uncover some major design flaws, and maybe even a way to remotely disable the home alarm feature on thousands of units across the country. You want to notify the company, get them to fix the issue, and maybe get a little by-line credit for it. Only the company slaps a DMCA lawsuit on you for reverse engineering their product, and you're in hot water. Clearly they have more money and attorneys than you do. Your choices are few - drop the research or face criminal prosecution. Odds are you're not even getting a choice.

In that scenario - it's clear that reforms are needed. Crystal clear, in fact.

The issue is we need to protect legitimate research from prosecutorial malfeasance while still allowing for laws to protect intellectual property and a company's security. So you see, the issue isn't as simple as opening up research, but much more subtle and deliberate.

How do we limit the law and protect legitimate research, while allowing for the protections companies still deserve? I think the answer lies in how we define a researcher. I propose that we require researchers to declare their research and its intent, and draft ethical guidelines which can be agreed upon (and enforced on both ends) between the researcher and the organization being researched. There must be rules of engagement, and rules for "responsible and coordinated disclosure". The laws must be tweaked so that a researcher with declared intent who follows the rules of engagement is shielded from prosecution by a company which is simply trying to skirt responsibility for safety, privacy and security. Furthermore, there must be provisions for matters that affect the greater good - which companies simply cannot opt out of.

Now, if you ask me if I believe this will happen any time soon, that's another matter entirely. Big companies will use their lobbying power to make sure this type of reform never happens, because it simply doesn't serve their self-interest. Having seen first-hand the inner workings of a large enterprise technology company - I know exactly how much profit is valued over security (or anything else, really). Profit now, and maybe no one will notice the big gaping holes later. That's just how it is in real life. But when public safety comes into play I think we will see a few major incidents where we have loss of life directly attributed to security flaws before we see any sort of reform. Of course when we do have serious incidents, they'll simply go after the hackers and shed any responsibility. That's just how these things work.

So in closing - I think there is a lot of work to be done here. First we need to more closely define and create formal understanding of security research. Once we're comfortable with that, we need to refine the CFAA and maybe get rid of the DMCA - to legitimize security research into the areas that affect public safety, privacy and security.

The Evolution of Mobile Security

Today, I posted a blog entry to the Oracle Identity Management blog titled Analyzing How MDM and MAM Stack Up Against Your Mobile Security Requirements. In the post, I walk through a quick history of mobile security starting with MDM, evolving into MAM, and providing a glimpse into the next generation of mobile security where access is managed and governed along with everything else in the enterprise. It should be no surprise that's where we're heading but as always I welcome your feedback if you disagree.

Here's a brief excerpt:
Mobile is the new black. Every major analyst group seems to have a different phrase for it but we all know that workforces are increasingly mobile and BYOD (Bring Your Own Device) is quickly spreading as the new standard. As the mobile access landscape changes and organizations continue to lose more and more control over how and where information is used, there is also a seismic shift taking place in the underlying mobile security models.
Mobile Device Management (MDM) was a great first response by an Information Security industry caught on its heels by the overwhelming speed of mobile device adoption. Emerging at a time when organizations were purchasing and distributing devices to employees, MDM provided a mechanism to manage those devices, ensure that rogue devices weren’t being introduced onto the network, and enforce security policies on those devices. But MDM was as intrusive to end-users as it was effective for enterprises.
Continue Reading

RSA Conference 2014

I'm at the RSA Conference this week. I considered the point of view that perhaps there's something to be said for abstaining this year but ultimately my decision to maintain course was based on two premises: (1) RSA didn't know the NSA had a backdoor when they made the arrangement and (2) The conference division doesn't have much to do with RSA's software group.

Anyway, my plan is to take notes and blog or tweet about what I see. Of course, I'll primarily be looking at Identity and Access technologies, which is only a subset of Information Security. And I'll be looking for two things: Innovation and Uniqueness. If your company has a claim on either of those in IAM solutions, please try to catch my attention.

IAM for the Third Platform

As more people are using the phrase "third platform", I'll assume it needs no introduction or explanation. The mobile workforce has been mobile for a few years now. And most organizations have moved critical services to cloud-based offerings. It's not a prediction, it's here.

The two big components of the third platform are mobile and cloud. I'll talk about both.

Mobile

A few months back, I posed the question "Is MAM Identity and Access Management's next big thing?" and since I did, it's become clear to me that the answer is a resounding YES!

Today, I came across a blog entry explaining why Android devices are a security nightmare for companies. The pain is easy to see. OS Updates and Security Patches are slow to arrive and user behavior is, well... questionable. So organizations should be concerned about how their data and applications are being accessed across this sea of devices and applications. As we know, locking down the data is not an option. In the extended enterprise, people need access to data from wherever they are on whatever device they're using. So, the challenge is to control the flow of information and restrict it to proper use.

So, here's a question: is MDM the right approach to controlling access for mobile users? Do you really want to stand up a new technology silo that manages end-user devices? Is that even practical? I think certain technologies live a short life because they quickly get passed over by something new and better (think electric typewriters). MDM is one of those. Although it's still fairly new and good at what it does, I would make the claim that MDM is antiquated technology. In a BYOD world, people don't want to turn control of their devices over to their employers. The age of enterprises controlling devices went out the window with Blackberry's market share.

Containerization is where it's at. With App Containerization, organizations create a secure virtual workspace on mobile devices that enables corporate-approved apps to access, use, edit, and share corporate data while protecting that data from escape to unapproved apps, personal email, OS malware, and other on-device leakage points. For enterprise use-case scenarios, this just makes more sense than MDM. And many of the top MDM vendors have validated the approach by announcing MAM offerings. Still, these solutions maintain a technology silo specific to remote access which doesn't make much sense to me.

As an alternate approach, let's build MAM capabilities directly into the existing Access Management platform. Access Management for the third platform must accommodate for mobile device use-cases. There's no reason to have to manage mobile device access differently than desktop access. It's the same applications, the same data, and the same business policies. User provisioning workflows should accommodate for provisioning mobile apps and data rights just like they've been extended to provision Privileged Account rights. You don't want or need separate silos.

Cloud

The same can be said for cloud-hosted apps. Cloud apps are simply part of the extended enterprise and should also be managed via the enterprise Access Management platform.

There's been a lot of buzz in the IAM industry about managing access (and providing SSO) to cloud services. A number of niche vendors have even popped up that provide that as their primary value proposition. But the core technologies behind these stand-alone solutions are nothing new. In most cases, it's basic federation. In some cases, it's ESSO-style form-fill. But there's no magic to delivering SSO to SaaS apps. In fact, it's typically easier than SSO to enterprise apps because SaaS infrastructures are newer and support newer standards and protocols (SAML, REST, etc.)

My Point

I guess if I had to boil this down, I'm really just trying to dispel the myths about mobile and cloud solutions. When you get past the marketing jargon, we're still talking about Access Management and Identity Governance. Some of the new technologies are pretty cool (containerization solves some interesting, complex problems related to BYOD). But in the end, I'd want to manage enterprise access in one place with one platform. One Identity, One Platform. I wouldn't stand up an IDaaS solution just to have SSO to cloud apps. And I wouldn't want to introduce an MDM vendor to control access from mobile devices.

The third platform simply extends the enterprise beyond the firewall. The concept isn't new and the technologies are mostly the same. As more and newer services adopt common protocols, it gets even easier to support increasingly complex use-cases. An API Gateway, for example, allows a mobile app to access legacy mainframe data over REST protocols. And modern Web Access Management (WAM) solutions perform device fingerprinting to increase assurance and reduce risk while delivering an SSO experience. Mobile Security SDKs enable organizations to build their own apps with native security that's integrated with the enterprise WAM solution (this is especially valuable for consumer-facing apps).

And all of this should be delivered on a single platform for Enterprise Access Management. That's third-platform IAM.

Virtual Directory as Database Security

I've written plenty of posts about the various use-cases for virtual directory technology over the years. But, I came across another today that I thought was pretty interesting.

Think about enterprise security from the viewpoint of the CISO. There are numerous layers of overlapping security technologies that work together to reduce risk to a point that's comfortable. Network security, endpoint security, identity management, encryption, DLP, SIEM, etc. But even when these solutions are implemented according to plan, I still see two common gaps that need to be taken more seriously.

One is control over unstructured data (file systems, SharePoint, etc.). The other is back-door access to application databases. There is a ton of sensitive information exposed through those two avenues that isn't protected by the likes of SIEM solutions or IAM suites. Even DLP solutions tend to focus on perimeter defense rather than who has access. STEALTHbits has solutions to fill the gaps for unstructured data and for Microsoft SQL Server, so I spend a fair amount of time talking to CISOs and their teams about these issues.

While reading through some IAM industry materials today, I found an interesting write-up on how Oracle is using its virtual directory technology to solve the problem for Oracle database customers. Oracle's IAM suite leverages Oracle Virtual Directory (OVD) as an integration point with an Oracle database feature called Enterprise User Security (EUS). EUS enables database access management through an enterprise LDAP directory (as opposed to managing a spaghetti mapping of users to database accounts and the associated permissions.)

By placing OVD in front of EUS, you get instant LDAP-style management (and IAM integration) without a long, complicated migration process. Pretty compelling use-case. If you can't control direct database permissions, your application-side access controls seem less important. Essentially, you've locked the front door but left the back window wide open. Something to think about.

Performing Clean Active Directory Migrations and Consolidations


Active Directory Migration Challenges

Over the past decade, Active Directory (AD) has grown out of control. It may be due to organizational mergers or disparate Active Directory domains that sprouted up over time, but many AD administrators are now looking at dozens of Active Directory forests and even hundreds of AD domains wondering how it happened and wishing it was easier to manage on a daily basis.

One of the top drivers for AD Migrations is enablement of new technologies such as unified communications or identity and access management. Without a shared and clearly articulated security model across Active Directory domains, it’s extremely difficult to leverage AD for authentication to new business applications or to establish the related business rules that may be based on AD attributes or security group memberships.

Domain consolidation is not a simple task. Whether you're moving from one platform to another, doing some AD security remodeling, or just consolidating domains for improved management and reduced cost, there are numerous steps, lots of unknowns and an overwhelming feeling that you might be missing something. Sound familiar?

One of the biggest fears in Active Directory migration projects is that business users will lose access to their critical resources during the migration. To reduce the likelihood of that occurring, many project leaders choose to enable a dirty migration: they enable historical SIDs (sIDHistory), which carry the old security identifiers — and with them the old group memberships and access rights — from the source domain into the new domain. Unfortunately, enabling historical SIDs perpetuates one of the main challenges that drove the migration project in the first place. The dirty-migration approach preserves the various security models that have been implemented over the years, keeping AD difficult to manage and making it nearly impossible to understand who has what rights across the environment.
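
Incidentally, the residue of a dirty migration is easy to spot: accounts still carrying sIDHistory values. Here's a quick audit sketch using Python's ldap3 library; the server, service account, and base DN are placeholders.

```python
# Find accounts still carrying sIDHistory from an old domain -- the
# residue of a "dirty" migration. Connection details are placeholders.
from ldap3 import Server, Connection

conn = Connection(Server("dc01.corp.example.com"),
                  user="CORP\\audit-svc", password="***", auto_bind=True)
conn.search("dc=corp,dc=example,dc=com",
            "(&(objectClass=user)(sIDHistory=*))",
            attributes=["sAMAccountName", "sIDHistory"])
for entry in conn.entries:
    print(entry.sAMAccountName.value, "carries",
          len(entry.sIDHistory.values), "historical SID(s)")
```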

Clean Active Directory Migrations

The alternative to a dirty migration is to disallow historical SIDs and thereby enable a clean migration, where rights are applied as needed within an easy-to-manage, well-articulated security model. Security groups are applied to resources according to an intentional model that is defined up front, and permissions are limited to a least-privilege model where only those who require rights actually get them.

Not all consolidation or migration projects are the same. The motivations differ, the technologies differ, and the Active Directory organizational structures and assets differ wildly. Most solutions on the market provide point-A-to-point-B migrations of Active Directory assets. This type of migration often makes the problem worse over time. There's nothing wrong with using an Active Directory tool to help you perform an AD forest or domain migration, but knowing which assets to move and how to structure or even restructure them in the target domain is critical.

Enabling a clean migration and transforming the Active Directory security model requires a few steps. It starts with assessment and cleanup of the source Active Directory environments. You should assess what objects are out there, how they're being used, and how they're currently organized. Are there dormant user accounts or unused computer objects? Are there groups with overlapping membership? Are there permissions that are unused or inappropriate? Are there toxic or high-risk conditions in the environment? This type of intelligence provides visibility into which objects you need to move, how they're structured, how the current domain compares to the target domain, and where differences exist in GPO policies, schema, and naming conventions. The dormant and unused objects, as well as any toxic or high-risk conditions, can be remediated so they aren't propagated to the target environment.
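
As one concrete example of that assessment, dormant user accounts can be flagged by querying the replicated lastLogonTimestamp attribute. A sketch with ldap3 follows; connection details are placeholders, and note the attribute's known staleness.

```python
# Sketch: flag dormant accounts via lastLogonTimestamp. Note: the
# attribute replicates lazily and can lag real logons by ~14 days,
# and accounts that never logged on won't have it at all.
from datetime import datetime, timedelta, timezone
from ldap3 import Server, Connection

def to_filetime(dt: datetime) -> int:
    # Windows FILETIME: 100-nanosecond intervals since 1601-01-01 UTC.
    epoch_1601 = datetime(1601, 1, 1, tzinfo=timezone.utc)
    return int((dt - epoch_1601).total_seconds() * 10_000_000)

cutoff = to_filetime(datetime.now(timezone.utc) - timedelta(days=90))
conn = Connection(Server("dc01.corp.example.com"),  # placeholder host
                  user="CORP\\audit-svc", password="***", auto_bind=True)
conn.search("dc=corp,dc=example,dc=com",
            f"(&(objectClass=user)(lastLogonTimestamp<={cutoff}))",
            attributes=["sAMAccountName", "lastLogonTimestamp"])
for entry in conn.entries:
    print("Dormant:", entry.sAMAccountName.value)
```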

Once the initial assessment and cleanup is complete, a gap analysis should be performed to understand where the current state differs from the intended model. Where possible, the transformation should be automated. Security groups can be created, for example, based on historical user activity so that group membership is determined by actual need. This activity-based, least-privilege approach is also a requirement under numerous regulations.
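
The activity-driven approach can be sketched very simply: aggregate historical access events per resource and propose membership only for users with recent, demonstrated need. The event format below is hypothetical.

```python
# Sketch: propose group membership from historical access activity so
# that membership reflects actual need. Event format is hypothetical.
from collections import defaultdict
from datetime import datetime, timedelta

def propose_memberships(events, window_days=90):
    """events: iterable of (user, resource, timestamp) tuples."""
    cutoff = datetime.now() - timedelta(days=window_days)
    proposed = defaultdict(set)  # resource -> users with recent access
    for user, resource, ts in events:
        if ts >= cutoff:
            proposed[resource].add(user)
    return proposed

events = [
    ("alice", r"\\fs01\finance", datetime.now() - timedelta(days=5)),
    ("bob",   r"\\fs01\finance", datetime.now() - timedelta(days=200)),
]
# Only alice has accessed the share recently, so only she is proposed.
print(propose_memberships(events))
```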

The next step is to perform a deep scan into the Active Directory forests and domains that will be consolidated and look at server-level permissions and infrastructure across Active Directory, File Systems, Security Policies, SharePoint, SQL Server, and more. This enables the creation of business rules that will transform existing effective permissions into the target model while adhering to new naming conventions and group utilization. Much of this transformation should be automated to avoid human error and reduce effort.

Maintaining a Clean Active Directory

Once the migration or consolidation project is complete and adherence to the intended security model has been enforced, it’s vital that a program is in place to maintain Active Directory in its current state. There are a few capabilities that can help achieve this goal.

First, a mandatory periodic audit should be enforced. Security group owners should confirm that groups are being used as intended. Resource owners should confirm that the right people have the right level of access to their resources. Business managers should confirm that their people have access to the right resources. These reviews should be automated and tracked to ensure they are completed thoroughly and on time.

Second, tools should be implemented that provide visibility into the environment, answering questions as they come up. When a security administrator needs to see how a user came to hold rights they perhaps shouldn't have, they'll need tools that provide answers in a timely fashion.

Third, a system-wide scan should be conducted regularly to identify any toxic or high-risk conditions that occur over time. For example, if a user account becomes dormant, notification should be sent out according to business rules. Or if a group is nested within itself perhaps ten layers deep, you want an automated solution to discover that condition and provide related reporting.
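
Self-nesting is really a cycle-detection problem over the group membership graph, so it automates well. Here's a minimal sketch with hard-coded nesting data; a real scan would pull the group relationships from AD.

```python
# Sketch: detect circular group nesting with a depth-first search over
# the nesting graph. Data is hard-coded for illustration. The same
# cycle may be reported from multiple starting points -- fine here.
def find_nesting_cycles(nested_groups):
    cycles = []
    def visit(group, path):
        if group in path:
            cycles.append(path[path.index(group):] + [group])
            return
        for child in nested_groups.get(group, []):
            visit(child, path + [group])
    for group in nested_groups:
        visit(group, [])
    return cycles

nesting = {  # group -> groups nested directly inside it
    "Domain Admins": ["Helpdesk"],
    "Helpdesk": ["Tier2"],
    "Tier2": ["Helpdesk"],  # Helpdesk -> Tier2 -> Helpdesk: a cycle
}
print(find_nesting_cycles(nesting))
```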

Finally, to ensure adherence to Active Directory security policies, a real-time monitoring solution should be put in place to enforce rules, prevent unwanted changes via event blocking, and to maintain an audit trail of critical administrative activity.

Complete visibility across the entire Active Directory infrastructure enables a clean AD domain consolidation while making life easier for administrators, improving security, and enabling adoption of new technologies

About the Author

Matt Flynn has been in the Identity & Access Management space for more than a decade. He’s currently a Product Manager at STEALTHbits Technologies where he focuses on Data & Access Governance solutions for many of the world’s largest, most prestigious organizations. Prior to STEALTHbits, Matt held numerous positions at NetVision, RSA, MaXware, and Unisys where he was involved in virtually every aspect of identity-related projects from hands-on technical to strategic planning. In 2011, SYS-CON Media added Matt to their list of the most powerful voices in Information Security.

Reduce Risk by Monitoring Active Directory

Active Directory (AD) plays a central role in securing networked resources. It typically serves as the front gate allowing access to the network environment only when presented with valid credentials. But Active Directory credentials also serve to grant access to numerous resources within the environment. For example, AD group memberships are commonly used to manage access to unstructured data resources such as file systems and SharePoint sites. And a growing number of enterprise applications leverage AD credentials to grant access to their resources as well.

Active Directory Event Monitoring Challenges

Monitoring and reporting on Active Directory accounts, security groups, access rights, administrative changes, and user behavior can feel like a monumental task. Event monitoring requires an understanding of which events are critical, where those events occur, what factors might indicate increased risk, and what technologies are available to capture those events.

Understanding which events to ignore is as important as knowing which are critical to capture. You don't need immediate alerts on every AD user or group change that takes place, but you do want visibility into critical, high-risk changes: Who is adding AD user accounts? ...adding a user to an administrative AD group? ...making Group Policy (GPO) changes?
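
On Windows, those questions map to specific Security-log event IDs, which makes a first-pass triage filter straightforward. The IDs below are standard Windows Security events; the event dictionaries are mock data for illustration.

```python
# Sketch: triage Windows Security-log events by ID, alerting only on
# high-risk changes. IDs are standard Windows Security events:
#   4720 = user account created
#   4728 / 4732 / 4756 = member added to a security-enabled
#          global / local / universal group
#   5136 = directory object modified (covers GPO edits when
#          directory-service change auditing is enabled)
HIGH_RISK = {
    4720: "User account created",
    4728: "Member added to global security group",
    4732: "Member added to local security group",
    4756: "Member added to universal security group",
    5136: "Directory object modified (e.g., GPO change)",
}

def triage(event):
    description = HIGH_RISK.get(event["id"])
    if description:
        print(f"ALERT: {description} by {event['actor']} on {event['target']}")
    # Everything else is collected for batch reporting, not alerted.

triage({"id": 4728, "actor": "CORP\\jdoe", "target": "Domain Admins"})
triage({"id": 4624, "actor": "CORP\\jdoe", "target": "WKS-042"})  # logon: ignored
```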

Active Directory administrators face a complex challenge that requires visibility into events as well as infrastructure to ensure proper system functionality. A complete AD monitoring solution doesn't stop at user and group changes. It also looks at Domain Controller status: which services are running, disk space issues, patch levels, and similar operational and infrastructure needs. There are numerous technical requirements to get that level of detail.

AD administrators require full access to the environment, which presents another set of challenges. How do you enable administrators to do their jobs while controlling certain high-risk activity, such as snooping on sensitive data or accidentally making GPO changes to important security policies? Monitoring Active Directory effectively includes either preventing unintended activities through change blocking or deterring them through visible monitoring and alerting.

Monitoring Active Directory Effectively

Effective audit and monitoring solutions for Active Directory address the numerous challenges discussed above by providing a flexible platform that covers typical scenarios out-of-the-box without customization but also allows extensibility to accommodate the unique requirements of the environment.

Data collection is the cornerstone of any Active Directory monitoring and audit solution. Collection must be automated, reliable, and non-intrusive on the target environment. Data that can be collected remotely without agents should be. But when requirements call for at-the-source monitoring (for example, when you want to see WHO made a change and from what machine, capture before-and-after values, or block certain activities), a real-time agent should be available to accommodate those needs. Data collection also needs to scale to the environment's size and performance requirements.

Once data has been collected, both batch and real-time per-event analysis are required to meet common requirements. For example, you may want an alert on changes to administrative groups but you don’t want alerts on all group changes. Or you may want a report that highlights all empty groups or groups with improper nesting conditions. This analysis should provide intelligence out-of-the-box based on industry expertise and commonly requested reporting. But it should also enable unique business questions to be answered. Every organization uses Active Directory in unique ways and custom reporting is an extremely common requirement.
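
The batch side of that analysis is often just a directory query. For instance, an empty-groups report reduces to an LDAP filter matching groups with no member attribute. A sketch with ldap3 follows; connection details are placeholders.

```python
# Sketch: batch report of empty AD groups (no "member" attribute).
# Connection details are placeholders. Note: users whose primary group
# is the group in question won't appear in "member", so treat results
# as candidates for review rather than certainties.
from ldap3 import Server, Connection

conn = Connection(Server("dc01.corp.example.com"),
                  user="CORP\\audit-svc", password="***", auto_bind=True)
conn.search("dc=corp,dc=example,dc=com",
            "(&(objectClass=group)(!(member=*)))",
            attributes=["sAMAccountName", "whenChanged"])
for entry in conn.entries:
    print("Empty group:", entry.sAMAccountName.value,
          "last changed:", entry.whenChanged.value)
```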

Finally, once data collection and analysis phases have been completed, AD monitoring solutions should provide a flexible reporting interface that provides access to the intelligence that has been cultivated. As with collection and analysis, the reporting functionality should include commonly requested reports with no customization but should also enable report customization and extensibility. Reporting should include web-accessible reports, search and filtering, access to the raw and post-analysis data, and email or other alerting.

An effective Active Directory monitoring solution provides deep insight on all things Active Directory. It should enable user, group and GPO change detection as well as reporting on anomalies and high-risk conditions. It should also provide deep analysis on users, groups, OUs, computer objects, and Active Directory infrastructure. Because the types of reports required by different teams (such as security and operations) may differ, it may be prudent to provide slightly different interfaces or report sets for the various intended audiences.

When real-time monitoring of Active Directory users, groups, OUs, and other changes (including activity blocking) is important, the solution should provide advanced filtering and response for nearly all Active Directory events, as well as an audit trail of changes and attempted changes with all relevant information.

Benefits of Active Directory Monitoring

The three most common business drivers for Active Directory monitoring are improved security, improved audit response, and simplified administration. Active Directory audit and monitoring solutions make life easier for administrators while improving security across the network environment. This is especially important as AD becomes increasingly integrated into enterprise applications.
Some common use-cases include:
  • Monitor Active Directory user accounts for create, modify and delete events. Capture the user account making the change along with the affected account information, changed attributes, time stamp, and more. This monitoring capability acts independently of the Security Event log and provides a non-repudiable audit record.
  • Monitor Active Directory group memberships and provide reports and/or alerts in real time when memberships change on important groups such as the Domain Admins group.
  • Report on failed attempts in addition to successful attempts. Filter on specific types of events and ignore others.
  • Report on Active Directory dormant accounts, empty groups, unused groups, large groups, and other high-risk conditions to empower administrators with actionable information.
  • Automate event response based on policy: send email alerts, trigger remediation processes, or record the event to a file or database (see the sketch after this list).
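
Here's a hedged sketch of what that last item — policy-driven event response — might look like: a small dispatcher that routes each event to the actions of the first matching policy. The policy format, host names, and helper functions are all hypothetical, not any particular product's API.

```python
# Sketch: policy-driven response to AD events. The first matching
# policy's actions run. Policy format, hosts, and helpers are
# hypothetical.
import json
import smtplib
from email.message import EmailMessage

def email_alert(event):
    msg = EmailMessage()
    msg["Subject"] = f"AD alert: {event['summary']}"
    msg["From"] = "ad-monitor@example.com"
    msg["To"] = "secops@example.com"
    msg.set_content(json.dumps(event, indent=2))
    with smtplib.SMTP("mail.example.com") as smtp:  # placeholder host
        smtp.send_message(msg)

def record(event, path="ad_events.log"):
    with open(path, "a") as log:
        log.write(json.dumps(event) + "\n")

POLICIES = [
    {"match": lambda e: e.get("group") == "Domain Admins",
     "actions": [email_alert, record]},
    {"match": lambda e: True, "actions": [record]},  # default: record only
]

def respond(event):
    for policy in POLICIES:
        if policy["match"](event):
            for action in policy["actions"]:
                action(event)
            break  # first matching policy wins

# A routine change matches only the default policy and is recorded.
respond({"summary": "Member added", "group": "Helpdesk"})
```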
Active Directory Monitoring and Reporting doesn't need to feel complicated or overwhelming. Solutions are available to simplify the process while providing increased security and reduced risk.


Unstructured Data into Identity & Access Governance

I've written before about the gap in identity and access management solutions related to unstructured data.

When I define unstructured data for people in the Identity Management space, I think the key distinguishing characteristic is that there is no entitlement store to which an IAM or IAG solution can connect to gather entitlement information.

On file systems, for example, entitlements are distributed across shares and folders, inherited through the file-tree structure, and applied through group memberships that may be many levels deep. There's no common security model to make sense of it all.
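
Making sense of it typically means flattening those layers: expanding nested groups until you know every user who effectively holds a permission on a resource. A minimal sketch with hard-coded data follows; a real solution would collect the ACLs and memberships from the file system and AD.

```python
# Sketch: compute effective access on a folder by expanding nested
# groups. Data is hard-coded for illustration.
def expand(principal, members, seen=None):
    """Return all users reachable from a user or group principal."""
    seen = seen if seen is not None else set()
    if principal in seen:
        return set()  # guard against circular nesting
    seen.add(principal)
    if principal not in members:  # a leaf node: an actual user
        return {principal}
    users = set()
    for child in members[principal]:
        users |= expand(child, members, seen)
    return users

members = {  # group -> direct members (users or other groups)
    "FinanceShare-RW": ["Finance-Team"],
    "Finance-Team": ["alice", "Contractors"],
    "Contractors": ["bob"],
}
folder_acl = ["FinanceShare-RW"]  # principals granted on \\fs01\finance
effective = set().union(*(expand(p, members) for p in folder_acl))
print(effective)  # {'alice', 'bob'} -- two levels of nesting flattened
```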

STEALTHbits has the best scanner in the industry (I've seen it go head-to-head in POCs) for gathering users, groups, and permissions across unstructured data environments. It also has the most flexible analysis capability, which (1) uncovers high-risk conditions (such as open file shares, unused permissions, admin snooping, and more), (2) identifies content owners, and (3) makes it very simple to consume entitlement information (by user, by group, or by resource).

It's a gap in the identity management landscape and it's beginning to show up on customer agendas. Let us know if we can help. Now, here's a pretty picture:

[Image: STEALTHbits adds unstructured data into IAM and IAG solutions.]

Active Directory Unification

[This is a partial re-post of an entry on the STEALTHbits blog. I think it's relevant here and open for discussion on the concepts surrounding clean migrations and AD unification.]

It’s no secret that over the past decade, Active Directory has grown out of control across many organizations. It’s partly due to organizational mergers or disparate Active Directory domains that sprouted up over time, but you may find yourself looking at dozens or even hundreds of Active Directory domains and realize that it’s time to consolidate. And it probably feels overwhelming. But despite the effort in front of you, there’s an easy way and a right way.

Domain consolidation is not a simple task. Whether you're moving from one platform to another, trying to implement a new security model, or just consolidating domains for improved management and reduced cost, there are numerous steps, lots of unknowns and an overwhelming feeling that you might be missing something. Sound familiar?

According to Gartner analyst Andrew Walls, “The allure of a single AD forest with a simple domain design is not fool’s gold. There are real benefits to be found in a consolidated AD environment. A shared AD infrastructure enables user mobility, common user provisioning processes, consolidated reporting, unified management of machines, etc.”

Walls goes on to discuss the politics, cost justification, and complexity of these projects, noting that “An AD consolidation has to unite and rationalize the ID formats, password policy objects, user groups, group policy objects, schema designs and application integration methods that have grown and spread through all of the existing AD environments. At times, this can feel like spring cleaning at the Augean stables. Of course, if you miss something, users will not be able to log in, or find their file shares, or access applications. No pressure.”

Walls offers advice on how to avoid some of the pain: “You fight proliferation of AD at every turn and realize that consolidation is not a one-time event. The optimal design for AD is a single domain within a single forest. Any deviation from this approach should be justified on the basis of operational requirements that a unified model cannot possibly support.”

What does this mean for you? Well, the most significant take-away from Walls’ advice is that it’s not a one-time event. AD unification is an ongoing effort. You don’t simply move objects from point A to point B and then pack it in for the day. The easy way fails to meet the core objectives of an improved security model, simplified management, reduced cost, and a common provisioning process (think integration with Identity Management solutions).

If you take everything from three source domains and simply move it all to a target domain, you haven’t achieved any of the objectives other than now having a single Active Directory. There’s a good chance that your security model will remain fragmented, management will become more difficult, and your user provisioning processes will require additional logic to accommodate the new mess. On a positive note, if this model is your intent, there are numerous solutions on the market that will help.

STEALTHbits, of course, embraces the right way. “Control through Visibility” is about improving your security posture and your ability to manage IT by increasing your visibility into the critical infrastructure.


If you'd like to learn more about the solution, you can start by reading the rest of this blog entry or contact STEALTHbits.

Data Protection ROI

I came across a couple of interesting articles today related to ROI around data protection. I recently wrote a whitepaper for STEALTHbits on the Cost Justification of Data Access Governance. It's often top of mind for security practitioners who know they need help but have trouble justifying the acquisition and implementation costs of related solutions. Here are today's links:

KuppingerCole -
The value of information – the reason for information security

Verizon Business Security -
Ask the Data: Do “hacktivists” do it differently?

Visit the STEALTHbits site for information on Access Governance related to unstructured data and to track down the paper on cost justification.