On November 23, 2018, both Australia and Chinese Taipei joined the APEC Cross-Border Privacy Rules (“CBPR”) system. The system is a regional multilateral cross-border transfer mechanism and an enforceable privacy code of conduct and certification developed for businesses by the 21 APEC member economies.
The Australian Attorney-General’s Department recently announced that APEC endorsed Australia’s application to participate and that the Department plans to work with both the Office of the Australian Information Commissioner and organizations to implement the CBPR system requirements in a way that ensures long-term benefits for Australian businesses and consumers.
In Chinese Taipei, the National Development Council announced that Chinese Taipei has joined the system. According to the announcement, Chinese Taipei’s participation will spur local enterprises to seek overseas business opportunities and help shape conditions conducive to cross-border digital trade.
Australia and Chinese Taipei become the seventh and eighth economies to participate in the system, joining the U.S., Mexico, Canada, Japan, South Korea and Singapore. Their decisions to join further highlight the growing international status of the CBPR system, which implements the nine high-level APEC Privacy Principles set forth in the APEC Privacy Framework. Several other APEC economies are actively considering joining.
Argentina’s Agency of Access to Public Information (Agencia de Acceso a la Información Pública) (“AAIP”) has approved a set of guidelines for binding corporate rules (“BCRs”), a mechanism that multinational companies may use for cross-border data transfers to affiliates in countries that lack adequate data protection regimes.
As reported by IAPP, pursuant to Regulation No. 159/2018, published December 7, 2018, the guidelines require BCRs to bind all members of a corporate group, including employees, subcontractors and third-party beneficiaries. Members of the corporate group must be jointly liable to the data subject and the supervisory authority for any violation of the BCRs.
Other requirements include:
- restrictions on the processing of special categories of personal data and on the creation of files containing personal data relating to criminal convictions and offenses;
- protections such as providing for the right to object to the processing of personal data for the purpose of unsolicited direct marketing;
- complaint procedures for data subjects that include the ability to institute a judicial or administrative complaint using their local venue; and
- data protection training to personnel in charge of data processing activities.
BCRs also should contemplate the application of general data protection principles, especially the legal basis for processing, data quality, purpose limitation, transparency, security and confidentiality, data subjects’ rights, and restrictions on onward cross-border transfers to non-adequate jurisdictions. Companies relying on BCRs that do not reflect the guidelines’ provisions must submit the relevant material to the AAIP for approval within 30 calendar days from the date of transfer. Approval is not required if BCRs that track the guidelines are used.
In connection with its hearings on data security, the Federal Trade Commission hosted a December 12 panel discussion on “The U.S. Approach to Consumer Data Security.” Moderated by the FTC’s Deputy Director for Economic Analysis James Cooper, the panel featured private practitioners Lisa Sotto, from Hunton Andrews Kurth, and Janis Kestenbaum, academics Daniel Solove (GW Law School) and David Thaw (University of Pittsburgh School of Law), and privacy advocate Chris Calabrese (Center for Democracy and Technology). Lisa set the stage with an overview of the U.S. data security framework, highlighting the complex web of federal and state rules and influential industry standards that results in a patchwork of overlapping mandates. Panelists debated the effect of current law and enforcement on companies’ data security programs before turning to the “optimal” framework for a U.S. data security regime. Among the details discussed was the establishment of a risk-based approach with a baseline set of standards and clear process requirements. While there was not uniform agreement on the specifics, the panelists all felt strongly that federal legislation was warranted, with the FTC taking on the role of principal enforcer.
ISC2 describes the CISSP as a way to prove “you have what it takes to effectively design, implement and manage a best-in-class cybersecurity program.” It is one of the primary certifications used as a stepping stone in a cybersecurity career. Traditionally, students have had two options for earning this certification: self-study or a bootcamp. Both options have pros and cons, but neither is ideal.
Bootcamps are a popular way to cram for the certification exam. Students spend five days in total immersion in the topics of the CBK. For many students this is an effective way to pass the exam because it focuses them on the CISSP study materials for the duration of the bootcamp. But this model has a few drawbacks. The first is the significant cost: typical prices run between $3,500 and $5,000, with outliers as high as almost $7,000. The second is that it takes students away from their lives for the week. Finally, most people finish a bootcamp with the knowledge to pass the exam, but because the material is crammed in, they quickly forget most of it.
Self-study is the other common way to prepare for the CISSP exam. It allows a dedicated student to learn the material at their own pace and on their own schedule, and to decide how much to spend; costs vary across books, online videos and practice exams. The main problem with this method is that students often get distracted by life and work along the way.
But there is an answer that combines the benefits of both options. Secure Ideas has developed a mentorship program designed to provide the knowledge necessary to pass the certification while working through the common body of knowledge (CBK), all in a manner that encourages retention. And it is #affordabletraining!
The mentorship program is a series of weekly, mentor-led discussion and review sessions, along with various student support and communication channels, spanning a total of nine weeks. Together these give the student a solid foundation that not only helps in passing the certification but also serves as a reference for everyday work. The class covers the 8 domains of the ISC2 CBK:
- Security and Risk Management
- Asset Security
- Security Architecture and Engineering
- Communication and Network Security
- Identity and Access Management (IAM)
- Security Assessment and Testing
- Security Operations
- Software Development Security
The Professionally Evil CISSP Mentorship program uses multiple communication and knowledge sharing paths to build a comprehensive learning environment focused on both passing the CISSP certification and gaining a deep understanding of the CBK.
The program consists of the following parts:
- Official study guide book
- Weekly live session with instructor(s)
- Live session will also be recorded
- Private Slack team for students and instructors to communicate regularly
- Practice exams
- While we believe students will pass on their first try, we also include the option for students to take the program as many times as they want, any time we offer it. 🙂
You can sign up for the course at https://attendee.gototraining.com/r/2538511060126445313 for only $1,000. Our early bird price is $800 and is good until January 31; just use the coupon code EARLYBIRD at checkout. Veterans, active duty military and first responders also get a significant discount. Email firstname.lastname@example.org for more information.
Professionally Evil Insights
Why are both Apple Inc. (NASDAQ: AAPL) and Google Inc. (NASDAQ: GOOG) still permitting clearly ill-conceived user tracking via applications marketed and sold on each company's customer-facing app stores? Surely your privacy and freedom mean more to you than the false-and-temporary convenience of finger-, voice- and script-actuated conveyances of information best retrieved in another manner.
In a recent company board strategy meeting, the CFO presented the financial forecast and outcome and made some interesting comments about fiscal risks and opportunities on the horizon. The COO…
The post Why the CISO’s Voice Must be Heard Beyond the IT Department appeared first on The Cyber Security Place.
Companies understand that organizational culture is an important differentiator to set their company apart from the competition. However, joining the dots between culture and information security management has taken some…
Soup To Nuts Identity Solutions From Two Of The Reasons Why Security Flaws Persist In Financial and Computational Systems?
I, for one, will utilize my barely visible thumb whorls as proof of identity, rather than use these clowns-of-code-combinatorial-output. Code Complete at Microsoft or Mastercard? Puhleaze... The former can barely patch its own desktop and server code successfully month-to-month, and the latter suffers from declining security capabilities since the failed-for-purpose deployment and implementation of the so-called security chips in newly issued credit cards. Both companies suffer from a proverbial lack of focused leadership on their core businesses.
CEOs, Boards of Directors and Trustees are now realising how fatal cybersecurity failures can really be. The IT industry has undoubtedly shone a bright light on the role of the Chief…
The post Getting cybersecurity to the top of the boardroom agenda appeared first on The Cyber Security Place.
On December 4, 2018, the Federal Trade Commission published a notice in the Federal Register indicating that it is seeking public comment on whether any amendments should be made to the FTC’s Identity Theft Red Flags Rule (“Red Flags Rule”) and the duties of card issuers regarding changes of address (“Card Issuers Rule”) (collectively, the “Identity Theft Rules”). The request for comment forms part of the FTC’s systematic review of all current FTC regulations and guides. These periodic reviews seek input from stakeholders on the benefits and costs of specific FTC rules and guides along with information about their regulatory and economic impacts.
The Red Flags Rule requires certain financial entities to develop and implement a written identity theft detection program that can identify and respond to the “red flags” that signal identity theft. The Card Issuers Rule requires that issuers of debit or credit cards (e.g., state credit unions, general retail merchandise stores, colleges and universities, and telecom companies) implement policies and procedures to assess the validity of address change requests if, within a short timeframe after receiving the request, the issuer receives a subsequent request for an additional or replacement card for the same account.
The FTC is seeking comments on multiple issues, including:
- Is there a continuing need for the specific provisions of the Identity Theft Rules?
- What benefits have the Identity Theft Rules provided to consumers?
- What modifications, if any, should be made to the Identity Theft Rules to reduce any costs imposed on consumers?
- What modifications, if any, should be made to the Identity Theft Rules to increase their benefits to businesses, including small businesses?
- What evidence is available concerning the degree of industry compliance with the Identity Theft Rules?
- What modifications, if any, should be made to the Identity Theft Rules to account for changes in relevant technology or economic conditions?
The comment period is open until February 11, 2019, and instructions on how to make a submission to the FTC are included in the notice.
By Julia Sowells, Senior Information Security Specialist at Hacker Combat. At the turn of the century 18 years ago, people embraced Web 2.0, a new dynamic web replacing the static…
The post The Only Counter Strategy Against Data Loss: Reliable Backup Methodology appeared first on The Cyber Security Place.
On November 9, 2018, Serbia’s National Assembly enacted a new data protection law. The Personal Data Protection Law, which becomes effective on August 21, 2019, is modeled after the EU General Data Protection Regulation (“GDPR”).
As reported by Karanovic & Partners, key features of the new Serbian law include:
- Scope – the Personal Data Protection Law applies not only to data controllers and processors in Serbia, but also to those outside of Serbia who process the personal data of Serbian citizens.
- Database registration – the Personal Data Protection Law eliminates the previous requirement for data controllers to register personal databases with the Serbian data protection authority (“DPA”), though they will be required to appoint a data protection officer (“DPO”) to communicate with the DPA on data protection issues.
- Data subject rights – the new law expands the rights of data subjects to access their personal data, gives subjects the right of data portability, and imposes additional burdens on data controllers when a data subject requests the deletion of their personal data.
- Consent – the Personal Data Protection Law introduces new forms of valid consent for data processing (including oral and electronic) and clarifies that the consent must be unambiguous and informed. The prior Serbian data protection law only recognized handwritten consents as valid.
- Data security – the new law requires data controllers to implement and maintain safeguards designed to ensure the security of personal data.
- Privacy by Design – the new law obligates data controllers to implement privacy by design when developing new products and services and to conduct data protection impact assessments for certain types of data processing.
- Data transfers – the Personal Data Protection Law expands the ways in which personal data may be legally transferred from Serbia. Previously, data controllers were required to obtain the approval of the Serbian DPA for any transfers of personal data to non-EU countries. The new law permits personal data transfers based on standard contractual clauses and binding corporate rules approved by the Serbian DPA. Organizations can also transfer personal data to countries deemed to provide an adequate level of data protection by the EU or the Serbian DPA or when the data subject consents to the transfer.
- Data breaches – like the GDPR, the new law requires data controllers to notify the Serbian DPA within 72 hours of a data breach and will require them to notify individuals if the data breach is likely to result in a high risk to the rights and freedoms of individuals. Data processors must also notify the relevant data controllers in the event of a data breach.
The new law also imposes penalties for noncompliance, but these are significantly lower than those contained in the GDPR. The maximum fines in the new Serbian law are only 17,000 Euros, while the maximum fines in the GDPR can reach up to 20 million Euros or 4% of an organization’s annual global turnover.
For years, cybersecurity professionals across the globe have been highly alarmed by threats appearing in the form of malware, including Trojans, viruses, worms, and spear phishing attacks. And this year was no different. 2018 witnessed its fair share of attacks, including some new trends: credential theft emerged as a major concern, and although ransomware remains […]
The post Evolving Cyberthreats: It’s Time to Enhance Your IT Security Mechanisms appeared first on Radware Blog.
Over the last eight years, one of the main focuses of Secure Ideas has been education. One responsibility we take very seriously is that of growing the skills within our clients and the public, with the objective of raising the bar in security. This mindset and core passion of Secure Ideas is because we all believe that we stand on the shoulders of giants. As each of us has grown into the roles we currently hold, we were not only shaped and developed by our own experiences, but also by the knowledge shared by others. This desire to learn and grow is one of the main things that make me proud to be a part of the security community.
However, there are a couple of significant problems with our industry: First, information security needs are growing faster than skilled personnel are learning. Second, the cost of training has increased outrageously over the past decade.
The first issue has been discussed for almost as long as I have been involved in information security. Even Alan Paller of the SANS Institute has been speaking about the skills gap for over a decade! The second issue is even worse, as it makes it harder to fix the first. Training costs for a single class often exceed $5,000 without even factoring in travel and the time away from work. So how do we fix this?
At Secure Ideas, we have decided that it is our responsibility as active practitioners to help fix this lack of affordable training and help address the skills gap. To that end, we are committed to the following for 2019:
- First, we want to announce our Professionally Evil Spring Break event. This 3-day event will host two classes: Professionally Evil Network Security and Professionally Evil Application Security. The first focuses on network penetration testing and the second on application security and assessments. Either class is only $750, discounted to an early bird price of $600 until January 18, 2019. Moreover, veterans, active duty military and first responders get either class for 50% off.
- Second, our Secure Ideas Training site has recorded classes starting at $25 each and vets get them for free! And our webcasts will continue to be run as often as we can.
- Third, we will continue to support and release our open-source training products such as SamuraiWTF and the Professionally Evil Web Penetration Testing 101 course.
We hope that together we can all help increase the skills of our industry and provide affordable training for all. Let us know if you have any questions or if you would like us to run a private training for your organization.
We are introducing the Incident Handling & Response Professional (IHRP) training course on December 11, 2018. Find out more and register for an exciting preview webinar.
No matter the strength of your company’s defense strategy, it is inevitable that security incidents will happen. Poor and/or delayed incident response has caused enormous damage and reputational harm to Yahoo, Uber and, most recently, Facebook, to name a few. For this reason, Incident Response (IR) has become a crucial component of any IT Security department, and knowing how to respond to such events is an increasingly important skill.
Aspiring to switch to a career in Incident Response? Here’s how our new Incident Handling & Response Professional (IHRP) training course can help you learn the necessary skills and techniques for a successful career in this field.
Incident Handling & Response Professional (IHRP)
The Incident Handling & Response Professional course (IHRP) is an online, self-paced training course that provides all the advanced knowledge and skills necessary to:
- Professionally analyze, handle and respond to security incidents, on heterogeneous networks and assets
- Understand the mechanics of modern cyber attacks and how to detect them
- Effectively use and fine-tune open source IDS, log management and SIEM solutions
- Detect and even (proactively) hunt for intrusions by analyzing traffic, flows and endpoints, as well as utilizing analytics and tactical threat intelligence
This training is the cornerstone of our blue team course catalog or, as we call it internally, “the PTP of Blue Team”.
Discover This Course & Get An Exclusive Offer
Take part in an exciting live demonstration and discover the complete syllabus of our latest course, Incident Handling & Response Professional (IHRP), on December 11. During this event, all the attendees will get their hands on an exclusive launch offer. Stay tuned!
Be the first to know all about this modern blue team training course: join us on December 11.
Connect with us on Social Media:
The Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth LLP recently submitted formal comments to the U.S. Department of Commerce’s National Telecommunications and Information Administration (“NTIA”) in response to its request for public comments on developing the administration’s approach to consumer privacy.
In its comments, CIPL commends NTIA for initiating a renewed national debate on updating the U.S. privacy framework, and notes that its approach—starting with the intended outcomes and goals of any privacy regime—is well suited to lay the foundation for a legislative proposal in the future.
Responding to the questions raised in the request for comment, CIPL makes the following observations and recommendations with regard to NTIA’s intended core outcomes and the high level goals of any new U.S. privacy framework:
- Transparency: CIPL agrees transparency should be a key outcome of any privacy framework and must be user-centric, contextual and tailored toward the specific audience and purpose. This can be achieved by implementing companywide privacy management and accountability frameworks.
- Control: CIPL believes that control should be a component of a new privacy framework in contexts where it is appropriate, and should reference mechanisms that empower consumers beyond individual choice or consent. However, the framework’s general focus should be putting the onus on organizations to use data responsibly and accountably to protect consumers from harm regardless of their individual level of engagement.
- Reasonable Minimization: CIPL supports the inclusion of reasonable minimization as an outcome of a new data protection framework, and further agrees with NTIA’s qualification that minimization should be reasonable and appropriate to the context and risk of privacy harm. These qualifiers are very important given the enormous potential of personal data for driving economic growth and societal benefits in the digital economy.
- Security: CIPL fully agrees with the inclusion of security in the list of outcomes, and notes the importance of allowing organizations flexibility in determining security measures that are reasonable and appropriate to the context at hand. In addition, a security outcome should provide for the adoption of appropriate breach response measures (e.g., notification requirements) and should permit organizations to use personal data for the development and implementation of security tools and related legitimate purposes, such as incident prevention, detection and monitoring.
- Access and Correction: While CIPL agrees that access, correction and deletion is an important outcome, such rights cannot be absolute and should not interfere with relevant obligations of an organization, other societal goals or legal rights of consumers and other third parties. Where exercising such rights would be inappropriate or impose unreasonable burdens on organizations, part of the solution lies in providing assurances to consumers that their personal information is protected by the full range of available accountability measures and will not be used for harmful purposes.
- Risk Management: CIPL welcomes NTIA’s characterization of risk management as the “core” of its approach to privacy protection. Identifying harms and addressing them specifically has the advantage of enabling organizations to prioritize their compliance measures and focus resources on what is most important, thereby strengthening both consumer privacy and organizations’ ability to engage in legitimate and accountable uses of personal information. It also means that we do not need to establish set categories of so-called sensitive information or certain predetermined high-risk processing activities, as any actual sensitivity or high-risk character will be determined and addressed in each risk assessment process.
- Accountability: CIPL strongly agrees with including accountability in the essential outcomes of a privacy framework. It is a key building block of modern data protection and is essential for the future of the digital society where laws alone cannot deliver timely, flexible and innovative solutions. CIPL recommends that NTIA clarify and elaborate upon this important concept in line with its globally accepted meaning, including in the APEC Privacy Framework and the GDPR, as well as other relevant international privacy regimes that incorporate this concept.
- Complaint-handling and Redress: In addition to the above outcomes, CIPL recommends the additional outcome of complaint-handling and redress. Consumers should be able to expect that organizations are able to reliably, quickly and effectively respond to actionable complaints and provide redress where appropriate. As it is consumer-facing, it should be a separately stated outcome that consumers can expect from a privacy framework.
High-Level Goals for Federal Action
- Harmonization: CIPL supports the effort to harmonize the U.S. privacy framework on the federal level, including through federal legislation that preempts inconsistent state privacy laws. CIPL recommends that NTIA clarify whether the proposed framework intends to cover employees, and suggests that a new framework should be focused on privacy in the consumer and commercial context and that the precise term “consumer” be defined to avoid legal uncertainty.
- Legal Clarity and Flexibility to Innovate: Clarity and flexibility in a privacy framework can be achieved through an approach based on organizational accountability and risk assessment. With respect to risk, agreement around methodologies for privacy assessments, guidance on types of risk and the sharing of organizational best practices can also significantly contribute to legal clarity without undermining the flexibility to innovate.
- Comprehensive Application: CIPL supports a comprehensive baseline privacy law that applies to all organizations, preempts inconsistent state laws, amends or replaces inconsistent federal privacy laws where appropriate, and otherwise works with or around well-functioning existing sectoral laws.
- Risk and Outcome-based Approach: CIPL agrees with the goal of creating a risk and outcome based approach to privacy regulation. Employing such an approach places the burden of protecting consumers directly where it belongs – on businesses that use personal data, rather than on consumers, who in an increasing number of contexts should not and realistically cannot be tasked with understanding in detail and managing for themselves complex data uses or constantly making choices about them.
- Interoperability: Maximizing interoperability between different legal and privacy regimes should be a top priority goal for the United States. Any new privacy framework for the U.S. should continue to enable the free, responsible and accountable flow of data across borders.
- Incentivizing Privacy Research: CIPL fully agrees with the goal of having the U.S. government encourage and incentivize research into and development of products and services that improve privacy protections. However, this goal should be broadened and amplified along the lines of the argument for incentivizing organizational accountability generally. This enables a race to the top whereby organizations not only strive to comply with the bare minimum of what is legally required but are incentivized and rewarded for heightened levels of organizational accountability that benefit all stakeholders.
- FTC Enforcement: CIPL agrees that the Federal Trade Commission should be the principal federal agency to enforce any new comprehensive U.S. privacy legislation and should be appropriately resourced as such. Exactly how a new privacy framework and the FTC as the principal federal agency should interact with other federal functional regulators and sectoral privacy laws should be carefully considered and worked out with input from all relevant stakeholders.
- Scalability: CIPL agrees that enforcement should be proportionate to the scale and scope of the information an organization is handling and should be outcome-based. With increased responsibilities under a broader privacy law, the FTC will have to ensure that its current approach is adapted to the changes in the scope and nature of its responsibilities.
- Enabling Effective Use of Personal Information: In addition to the above goals for federal action, CIPL suggests the additional goal of enabling broad and effective uses of personal information for the benefit of economic development and societal progress, as well as for the benefit of individuals, particularly the data subjects. Due to their supervisory position, modern data protection and privacy enforcement authorities have the responsibility, in addition to protecting consumer privacy, to safeguard and facilitate the beneficial potential of such information and, therefore, the full range of responsible and accountable data uses.
Following consideration of the comments it receives, CIPL recommends that NTIA take a holistic and deliberate approach toward developing a comprehensive privacy law that accomplishes the items discussed in the request for comment. One possible next step could be to articulate the outcomes and goals in draft legislative language to provide a clearer basis for further discussion on the precise elements and articulation of each. CIPL recommends an iterative process between NTIA and other public and private sector stakeholders toward that goal.
At the second annual Infosecurity North America conference at the Jacob Javits Convention Center in New York, Tom Brennan, US chairman, CREST International, moderated a panel called Securing the Workforce: Building, Maintaining and Measuring
On November 9, 2018, the European Commission (“the Commission”) submitted comments to the U.S. Department of Commerce’s National Telecommunications and Information Administration (“NTIA”) in response to its request for public comments on developing the administration’s approach to consumer privacy.
In its comments, the Commission welcomes and agrees with many of the high-level goals identified by NTIA, including harmonization of the legal landscape, incentivizing privacy research, employing a risk-based approach and creating interoperability at a global level. The Commission also welcomes that the key characteristics of a modern and flexible privacy regime (i.e., an overarching law, a core set of data protection principles, enforceable individual rights and an independent supervisory authority with effective enforcement powers) are also at the core of NTIA’s proposed approach to consumer privacy. The Commission structured its specific suggestions around these key characteristics.
In particular, the Commission makes specific suggestions around:
- Harmonization: The Commission notes that overcoming regulatory fragmentation associated with an approach based on sectoral law in favor of a more harmonized approach would create a level playing field, and provide necessary certainty for organizations while ensuring consistent protection for individuals.
- Data Protection Principles: The Commission commends NTIA on the inclusion of certain core data protection principles such as reasonable minimization, security, transparency and accountability, but suggests the further explicit inclusion of other principles such as lawful data processing (i.e., the requirement to process data pursuant to a legal basis, such as consent), purpose specification, accuracy and specific protections for sensitive categories of data.
- Breach Notification: The Commission suggests the specific inclusion of a breach notification requirement to enable individuals to protect themselves from and mitigate any potential harm that might result from a data breach. While there are already state breach notification laws in place, the Commission believes organizations and individuals could benefit from the harmonization of such rules.
- Individual Rights: The Commission believes that any proposal for a privacy regime should go beyond the inclusion of only traditional individual rights, such as access and correction, and should include other rights regarding automated decision-making (e.g., the right to explanation or to request human intervention) and rights around redress (e.g., the right to lodge a complaint and have it addressed, and the right to effective judicial redress).
- Oversight and Enforcement: The Commission notes that the effective implementation of privacy rules critically depends on having robust oversight and enforcement by an independent and well-resourced authority. In this regard, the Commission recommends strengthening the FTC’s enforcement authority, the introduction of mechanisms to ensure effective resolution of individual complaints and the introduction of deterrent sanctions.
The Commission notes in its response that while this consultation only covers a first step in a process that might lead to federal action, it stands ready to provide further comments on a more developed proposal in the future.
NTIA’s request for comments closed on November 9, 2018 and NTIA will post the comments it received online shortly.
On November 8, 2018, Privacy International (“Privacy”), a non-profit organization “dedicated to defending the right to privacy around the world,” filed complaints under the GDPR against consumer marketing data brokers Acxiom and Oracle. In the complaint, Privacy specifically requests the Information Commissioner (1) conduct a “full investigation into the activities of Acxiom and Oracle,” including into whether the companies comply with the rights (i.e., right to access, right to information, etc.) and safeguards (i.e., data protection impact assessments, data protection by design, etc.) in the GDPR; and (2) “in light of the results of that investigation, [take] any necessary further [action]… that will protect individuals from wide-scale and systematic infringements of the GDPR.”
The complaint alleges that the companies’ processing of personal data neither comports with the consent and legitimate interest requirements of the GDPR, nor the GDPR’s principles of:
- transparency (specifically relating to sources, recipients and profiling);
- fairness (considering individuals’ reasonable expectations, the lack of a direct relationship, and the opaque nature of processing);
- lawfulness (including whether either company’s reliance on consent or legitimate interest is justified);
- purpose limitation;
- data minimization.
The complaint emphasizes that Acxiom and Oracle are illustrative of the “systematic” problems in the data broker and AdTech ecosystems, and that it is “imperative that the Information Commissioner not only investigate these specific companies, but also take action in respect of other relevant actors in these industries and their practices.”
In addition to the complaint against Acxiom and Oracle, Privacy submitted two separate joined complaints against credit reference data brokers Experian and Equifax, and AdTech data brokers Quantcast, Tapad and Criteo.
On November 6, 2018, the French Data Protection Authority (the “CNIL”) published its own guidelines on data protection impact assessments (the “Guidelines”) and a list of processing operations that require a data protection impact assessment (“DPIA”). Read the guidelines and list of processing operations (in French).
The Guidelines aim to complement guidelines on DPIA adopted by the Article 29 Working Party on October 4, 2017, and endorsed by the European Data Protection Board (“EDPB”) on May 25, 2018. The CNIL crafted its own Guidelines to specify the following:
- Scope of the obligation to carry out a DPIA. The Guidelines describe the three examples of processing operations requiring a DPIA provided by Article 35(3) of the EU General Data Protection Regulation (“GDPR”). The Guidelines also list nine criteria the Article 29 Working Party identified as useful in determining whether a processing operation requires a DPIA, if that processing does not correspond to one of the three examples provided by the GDPR. In the CNIL’s view, as a general rule a processing operation meeting at least two of the nine criteria requires a DPIA. If the data controller considers that processing meeting two criteria is not likely to result in a high risk to the rights and freedoms of individuals, and therefore does not require a DPIA, the data controller should explain and document its decision for not carrying out a DPIA and include in that documentation the views of the data protection officer (“DPO”), if appointed. The Guidelines make clear that a DPIA should be carried out if the data controller is uncertain. The Guidelines also state that processing operations lawfully implemented prior to May 25, 2018 (e.g., processing operations registered with the CNIL, exempt from registration or recorded in the register held by the DPO under the previous regime) do not require a DPIA within a period of 3 years from May 25, 2018, unless there has been a substantial change in the processing since its implementation.
- Conditions in which a DPIA is to be carried out. The Guidelines state that DPIAs should be reviewed regularly—at minimum, every three years—to ensure that the level of risk to individuals’ rights and freedoms remains acceptable. This corresponds to the three-year period mentioned in the draft guidelines on DPIAs adopted by the Article 29 Working Party on April 4, 2017.
- Situations in which a DPIA must be provided to the CNIL. The Guidelines specify that data controllers may rely on the CNIL’s sectoral guidelines (“Referentials”) to determine whether the CNIL must be consulted. If the data processing complies with a Referential, the data controller may take the position that there is no high residual risk and no need to seek prior consultation for the processing from the CNIL. If the data processing does not fully comply with the Referential, the data controller should assess the level of residual risk and the need to consult the CNIL. The Guidelines note that the CNIL may request DPIAs in case of inspections.
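As a rough illustration, the CNIL's general rule (a processing operation meeting at least two of the nine WP29 criteria requires a DPIA) can be expressed as a simple screening helper. The criterion labels and the function below are illustrative shorthand for the WP29 criteria, not an official CNIL tool:

```python
# Illustrative sketch of the CNIL's "two of nine" DPIA screening rule.
# The nine criteria paraphrase the Article 29 Working Party's DPIA
# guidelines; the function and labels are hypothetical, not an official tool.

WP29_CRITERIA = {
    "evaluation or scoring",
    "automated decision-making with legal or similar effect",
    "systematic monitoring",
    "sensitive or highly personal data",
    "large-scale processing",
    "matching or combining datasets",
    "data concerning vulnerable subjects",
    "innovative use of technology",
    "processing that prevents exercise of a right or service",
}

def dpia_likely_required(criteria_met):
    """As a general rule, meeting at least two of the nine criteria
    indicates a DPIA is required, absent a documented justification."""
    unknown = set(criteria_met) - WP29_CRITERIA
    if unknown:
        raise ValueError("unknown criteria: %s" % unknown)
    return len(set(criteria_met)) >= 2

# Example: large-scale, systematic monitoring of employees
print(dpia_likely_required({"systematic monitoring", "large-scale processing"}))  # True
```

Note that per the Guidelines, a "False" result with two or more criteria still requires the controller to document why no DPIA was carried out.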
CNIL’s List of Processing Operations Requiring a DPIA
The CNIL previously submitted a draft list of processing operations requiring a DPIA to the EDPB for its opinion. The CNIL adopted its final list on October 11, 2018, based on that opinion. The final list includes 14 types of processing operations for which a DPIA is mandatory. The CNIL provided concrete examples for each type of processing operation, including:
- processing operations for the purpose of systematically monitoring employees’ activities, such as the implementation of data loss prevention tools, CCTV systems recording employees handling money, CCTV systems recording a warehouse stocking valuable items in which handlers are working, digital tachographs installed in road freight transport vehicles, etc.;
- processing operations for the purpose of reporting professional concerns, such as the implementation of a whistleblowing hotline;
- processing operations involving profiling of individuals that may lead to their exclusion from the benefit of a contract or to the suspension or termination of the contract, such as processing to combat fraud involving (non-cash) means of payment;
- profiling that involves data coming from external sources, such as a combination of data operated by data brokers and processing to customize online ads;
- processing of location data on a large scale, such as a mobile app that collects users’ geolocation data, etc.
The CNIL’s list is non-exhaustive and may be regularly reviewed, depending on the CNIL’s assessment of the “high risks” posed by certain processing operations.
The CNIL is expected to soon publish its list of processing operations for which a DPIA is not required.
On October 23, 2018, the parties in the Yahoo! Inc. (“Yahoo!”) Customer Data Security Breach Litigation pending in the Northern District of California and the parties in the related litigation pending in California state court filed a motion seeking preliminary approval of a settlement related to breaches of the company’s data. These breaches were announced from September 2016 to October 2017 and collectively impacted approximately 3 billion user accounts worldwide. In June 2017, Yahoo! and Verizon Communications Inc. had completed an asset sale transaction, pursuant to which Yahoo! became Altaba Inc. (“Altaba”) and Yahoo!’s previously operating business became Oath Holdings Inc. (“Oath”). Altaba and Oath have each agreed to be responsible for 50 percent of the settlement.
Under the terms of the agreement, Yahoo!, through its successor in interest, Oath Holdings Inc., has agreed to enhance its business practices to improve the security of its users’ personal information stored on its databases. Yahoo! will also pay for a minimum of two years of credit monitoring services to protect settlement class members from future harm, as well as establish a $50 million settlement fund to provide an alternative cash payment for those who verify they already have credit monitoring or identity protection. The settlement fund will also cover demonstrated out-of-pocket losses, including loss of time, and payments to Yahoo! users who paid for advertisement-free or premium Yahoo! Mail services and those who paid for Aabaco Small Business services, which included business email services. The motion for approval is currently before the court, which has scheduled a hearing for November 29, 2018, on the matter.
On November 1, 2018, Senator Ron Wyden (D-Ore.) released a draft bill, the Consumer Data Protection Act, that seeks to “empower consumers to control their personal information.” The draft bill imposes heavy penalties on organizations and their executives, and would require senior executives of companies with more than one billion dollars per year of revenue or data on more than 50 million consumers to file annual data reports with the Federal Trade Commission. The draft bill would subject senior company executives to imprisonment for up to 20 years or fines up to $5 million, or both, for certifying false statements on an annual data report. Additionally, like the EU General Data Protection Regulation, the draft bill proposes a maximum fine of 4% of total annual gross revenue for companies that are found to be in violation of Section 5 of the FTC Act.
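As a back-of-the-envelope illustration of the draft bill's reporting thresholds and its GDPR-style fine cap, consider the sketch below. The function names and structure are hypothetical, derived only from the summary above, not from the bill's text:

```python
# Hypothetical sketch of the thresholds described in the draft
# Consumer Data Protection Act: annual data reports for companies with
# more than $1B in yearly revenue or data on more than 50M consumers,
# and a proposed maximum fine of 4% of total annual gross revenue.
# Function names and exact cutoffs are illustrative.

def must_file_annual_report(annual_revenue_usd, consumers_with_data):
    """True if either reporting trigger in the draft bill is met."""
    return annual_revenue_usd > 1_000_000_000 or consumers_with_data > 50_000_000

def max_fine(annual_gross_revenue_usd):
    """Proposed GDPR-style cap: 4% of total annual gross revenue."""
    return 0.04 * annual_gross_revenue_usd

print(must_file_annual_report(2e9, 10_000_000))  # True (revenue trigger)
print(max_fine(2e9))                             # 80000000.0
```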
The draft bill also proposes to grant the FTC authority to write and enforce privacy regulations, to establish minimum privacy and cybersecurity standards, and to create a national “Do Not Track” system that would allow consumers to prevent third-party companies from tracking internet users by sharing or selling data and targeting advertisements based on their personal information.
Senator Wyden stated, “My bill creates radical transparency for consumers, gives them new tools to control their information and backs it up with tough rules.”
Data theft is inarguably big business for hackers. This has been proven time and time again when big-name companies and their customers are involved in a data breach. As these instances appear to take place more often, and the number of stolen or compromised files continues to rise, it’s worth looking into exactly what hackers do with this information after they’ve put so much effort into stealing it.
While some data breaches involve low-hanging fruit – including default passwords and other sub-standard data protection measures – other attacks include increasingly sophisticated cybercriminal activity, backed by in-depth social engineering and research into potential targets. Thanks to these efforts, more than 2.6 billion records were stolen or compromised in 2017, a staggering 88 percent rise from the amount of data hackers made off with in 2016, according to Information Age.
But what takes place after a successful breach and data exfiltration? With all of this information in hand, where do hackers turn next to generate a profit?
Type of data dictates price, post-theft malicious activity
As Trend Micro research shows, the process that stolen data goes through after the initial breach depends largely upon the type of data and from what industry it was stolen.
Personally identifiable information (PII) can include a whole host of different elements and is stored by many brands to support customer accounts and personalization. Researchers discovered that once hackers bring this information to underground markets, it can be used to support identity fraud, the creation of counterfeit accounts, illicit money transfers, the launch of spam and phishing attacks, and even blackmail, extortion or hacktivism.
Let’s take a look at the ways in which other types of stolen data can be used once hackers gather it and bring it to underground marketplaces:
One theft leads to another
A main motivation of hackers is to make off with as much stolen information as possible. This thought process applies not only to data breaches of specific companies, but also to the data belonging to individual users.
“More than 2.6 billion records were stolen or compromised in 2017.”
Take stolen account credentials, for example. A hacker will often leverage a stolen username and password to support further malicious activity and data theft in the hopes of compromising even more personal information.
“Theft of user credentials might even be more dangerous than PII, as it essentially exposes the victim’s online accounts to potential malicious use,” Trend Micro researchers pointed out. “Email is often used to verify credentials and store information from other accounts, and a compromised email account can lead to further instances of fraud and identity theft.”
In such instances, a hacker can utilize stolen account credentials to fraudulently access an individual’s email. This may provide the cybercriminal with an email containing a credit card invoice, giving them even more information and the potential to steal, use or sell the victim’s credit card details for further fraud.
What’s more, as Trend Micro researchers noted, certain types of data are often interrelated, and the theft of one set of data often means the compromise of another, connected set. With health care files, for instance, a health care provider may store not only a patient’s medical history, but also their payment information as well. In this way, a breach of the provider could result not only in the exposure of medical details, but patient financial information as well.
What is data worth on underground marketplaces?
As Trend Micro’s interactive infographic shows, there are several different underground marketplaces existing all over the world, and the amount of profit hackers are able to generate depends on where they sell stolen information and the type of details their haul includes.
Experian data from 2018 shows how profits for certain types of data can quickly add up for hackers, including for assets like:
Hackers also engage in data bundling, where individual pieces of stolen information are linked and packaged together, and then sold in a premium bundle for a higher price. These more complete, fraudulent profiles can include an array of information, including a victim’s name, age, address, birth date, Social Security number, and other similar information.
Working to prevent data theft
As the profits hackers can generate from stolen data continue to rise, it’s imperative that businesses and individual users alike take the proper precautions to safeguard their sensitive information.
This includes replacing default security measures with more robust protections, including strong passwords and multi-factor authentication, where applicable. Organizations should also limit access to especially sensitive information and databases to only those authorized users that need to utilize this data.
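As a minimal illustration of replacing default credentials with a stronger policy, the sketch below rejects a few well-known default passwords and enforces a length floor. The default list and the 12-character minimum are example values, not a standard:

```python
# Illustrative check that rejects known default or weak credentials.
# The default list and minimum length here are examples only; real
# deployments should also screen against breach corpora and require MFA.

KNOWN_DEFAULTS = {"admin", "password", "123456", "changeme", "root"}

def password_acceptable(password, min_length=12):
    """Reject default/common passwords and enforce a minimum length."""
    if password.lower() in KNOWN_DEFAULTS:
        return False
    return len(password) >= min_length

print(password_acceptable("changeme"))                      # False
print(password_acceptable("correct-horse-battery-staple"))  # True
```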
User education can also be a considerable advantage in preventing information theft. Users who are aware of current threats and know not to click on suspicious links or open emails from unknown senders can represent an additional layer of security against unauthorized access and cybercriminal activity.
To find out more about how to improve data prevention efforts within your organization, connect with the experts at Trend Micro today.
October brought Social-Engineer to the SEVillage at DerbyCon 8.0 – Evolution, SEORG’s final SEVillage for the year, and WOW, was it an AMAZING DerbyCon. Ryan and Colin arrived Tuesday to set up shop and stuff many padfolios to prepare for their OSINT class that ran over Wednesday and Thursday. The OSINT class was Social-Engineer’s largest class EVER and it sold out in TWELVE SECONDS. Yes. You read that correctly. Our largest class sold out in 12 seconds. The students loved it, and one team even finished the final hands-on challenge in just over an hour when it usually takes multiple hours. A second team slid past the finish line in the nick of time, just before class ended on Thursday.
After class, the rest of the team rolled into Louisville, KY where DerbyCon was held at the Marriott downtown, instead of the Hyatt, for the first time. Our amazing volunteers and staff gathered together to set up the village and prep for the amazing few days to come.
Vishing data and the SECTF – Friday, October 5, 2018
Friday started for SEORG at noon when Cat Murdock and Chris Hadnagy took the Track 1 stage to present on Social-Engineer’s last three years of vishing data in their speech “IRS, HR, Microsoft and your Grandma: What they all have in common.”
Cat gets psyched about data
Did you know that Mondays are the hardest day to compromise targets via vishing by a HUGE percentage?!? On Monday, social engineers are looking at a 29% compromise ratio compared to a 58%-65% compromise ratio on any other day of the week. Apparently, employees hit the ground running on Mondays, are fresh off the weekend, and ready to secure their information from SEs.
Chris and Cat drop some data knowledge
That one time Cat stole Dave’s hat but everyone got iced anyway
After the speech, the SEVillage team raced back to launch the 2nd SECTF at DerbyCon. The room was PACKED, with audience members sitting on the floor and lining the walls.
A completely packed room awaited the SECTF at DerbyCon
This year, the targets featured were large energy companies including Halliburton, Phillips 66, Devon Energy, Noble Energy, and Sunoco. While these targets were particularly challenging, and some even had systems that had to ethically be avoided for competition’s sake, it was one of the most entertaining SECTFs to date.
DEF CON’s 2nd place winner and always amazing audience member – Rachel Tobac
All the contestants were able to get targets on the phone and elicit many flags. The competition was SO fierce, the difference between the first and second place winner was only a single flag, making for a great competition. In the end, Krittika’s amazing reporting and calls won her the first-place trophy. This means that all the winners of the SECTF prizes this year were women!!! Get it, ladies!
Our DerbyCon 1st place winner, Krittika, Answering some Q&A after calls
The first competitor started the afternoon off right! Soooo many flags!
This sweet SECTF trophy finally found its forever home!
Can you fool the Polygraph, Mission SE Impossible, and Ethics – Saturday, October 6, 2018
Saturday at Derby is always an amazing day, as it starts off with the incredibly unique “Can you fool the Polygraph” challenge. Our reigning champion from 2017 began as the first competitor in this competition.
Reigning champ defends his title!
Contestants had to answer extremely uncomfortable questions while attempting to trick the polygraph machine, which has sensors measuring reactions on the chest, fingers, and even your butt. Questions ran along the lines of, “have you ever taken credit for a coworker’s accomplishments?” As well as, “do you regularly urinate in the shower?” Ultimately, our ferocious, and possibly psycho/sociopathic, competitors ended in a three-way tie!! Whaaatt….
With game faces like this, the tie was not surprising
Clearly, we couldn’t end in a tie. So, our amazing polygraph examiner created a tie breaker for us on the spot! Thanks, Jacob. The tie breaker was having the contestants answer “no” to the question, “Is it <insert day of the week here>?” Each contestant was asked about five days of the week, including “Saturday,” the day the competition occurred, and they had to answer “no” to each question. The individual who lied the best won!
CONGRATS TO OUR WINNER SCOTT!!!
The most convincing liar of them all – Well done, Scott!
After a brief lunch break, the Village rallied for Mission SE Impossible, a staged “escape room” type competition where competitors have to shim themselves out of handcuffs and leg cuffs, pick a lock, analyze microexpressions, and traverse a laser grid produced by tiny sharks with lasers on their freakin’ heads.
No pressure or anything, but I hope he hustles with all those people watching…
Will he break free?!?! Spoiler alert – he did.
The SEVillage is family friendly, and this kid ROCKED it!
Disclaimer: No sharks were harmed in the making of MSI
Super sweet lasers in the HOUSE
Commitment to dodging those laser sharks
Our winner, squeezing through lasers on his way to victory
Ultimately, MSI ended with our winner, Rick, slamming the competition by finishing in RECORD time at 59 seconds. CONGRATULATIONS, RICK!!!!
Once MSI wrapped up, we only had one SEVillage activity remaining: a panel on Ethics in Social Engineering featuring Jamison Scheeres, Chris Silvers, Rachel Tobac, Grifter, and Chris Hadnagy. This panel was inspired by our recently released Social Engineering Code of Ethics, which quickly became a community tool and topic after its release. It was truly wonderful to see a packed house looking to discuss ethics in our work from 6-8PM on a Saturday.
Full house for the ethics panel
The discussion was amazing; all viewpoints and questions were compelling and deep. Ultimately, the community is made stronger when we can have tough conversations like these, where we really dig into how the tactics we use can take an emotional toll on targets while still being a necessary precaution against malicious actors. A full recording of this panel is available here. #NotAPhish
The participants of the Ethics in Social Engineering Panel, Jamison, Chris S, Rachel, Grifter, and Chris H
Jamison dropping some deep thoughts
Wrap up – Sunday, October 7, 2018
Sunday, the team packed up the village and wearily found brunch in Louisville before heading to closing ceremonies, officially wrapping up the SEVillage at DerbyCon as well as all SEVillages for 2018. The weekend was truly an epic con, and we are always so grateful to be able to attend. We could not do it without our sponsor, Red Sky, or our amazing team. A huge thanks to Jim, Kris, Chris, Hannah, Evan, Spencer, Colin, Ryan, Cat, and Chris H – the weekend would literally not be possible without these wonderful individuals.
Colin manning that swag booth!
These are some great people!
Thank you all and be looking for the SECTF report that dives into the data from all our 2018 SECTF competitions!! The webinar discussing the report will be at 2PM ET on November 28. You can sign up now and don’t forget to mark your calendars!
We’re going through a similar enlightenment in the security space. To get the best results, we need to fill the trough that our Machine Learning will eat from with high-value data feeds from our existing security products (whatever happens to be growing in the area) but also (and more precisely for this discussion) from beyond what we typically consider security products to be.
In this post to the Oracle Security blog, I make the case that "we shouldn’t limit our security data to what has traditionally been in-scope for security discussions" and how understanding Application Topology (and feeding that knowledge into the security trough) can help reduce risk and improve security.
Click to read the full article: Improve Security by Thinking Beyond the Security Realm
The language of cybersecurity evolves in step with changes in attack and defense tactics. You can get a sense for such dynamics by examining the term fileless. It fascinates me not only because of its relevance to malware—which is one of my passions—but also because of its knack for agitating many security practitioners.
I traced the origins of “fileless” to 2001, when Eugene Kaspersky (of Kaspersky Lab) used it in reference to the Code Red worm’s ability to exist solely in memory. Two years later, Peter Szor defined this term in a patent for Symantec, explaining that such malware doesn’t reside in a file, but instead “appends itself to an active process in memory.”
Eugene was prophetic in predicting that fileless malware “will become one of the most widespread forms of malicious programs” due to antivirus’ ineffectiveness against such threats. Today, when I look at the ways in which malware bypasses detection, the evasion techniques often fall under the fileless umbrella, though the term expanded beyond its original meaning.
Fileless was synonymous with in-memory until around 2014.
The adversary’s challenge with purely in-memory malware is that it disappears once the system restarts. In 2014, Kevin Gossett’s Symantec article explained how Poweliks malware overcame this limitation by using the legitimate Windows programs rundll32.exe and powershell.exe to maintain persistence, extracting and executing malicious scripts from the registry. Kevin described this threat as “fileless” because it avoided placing code directly on the file system. Paul Rascagnères at G Data further explained that Poweliks infected systems by using a booby-trapped Microsoft Word document.
The Poweliks discussion, and the similar malware that appeared afterwards, set the tone for the way fileless attacks are described today. Yes, fileless attacks strive to maintain clearly malicious code solely or mostly in memory. Also, they tend to involve malicious documents and scripts. They often misuse utilities built into the operating system and abuse various capabilities of Windows, such as the registry, to maintain persistence.
However, the growing ambiguity behind the modern use of the term fileless is making it increasingly difficult to understand what specific methods fileless malware uses for evasion. It’s time to disambiguate this word to hold fruitful conversations about our ability to defend against its underlying tactics.
Here’s my perspective on the methods that comprise modern fileless attacks:
- Malicious Documents: They can act as flexible containers for other files. Documents can also carry exploits that execute malicious code. They can execute malicious logic that begins the infection and initiates the next link in the infection chain.
- Malicious Scripts: They can interact with the OS without the restrictions that some applications, such as web browsers, might impose. Scripts are harder for anti-malware tools to detect and control than compiled executables. In addition, they offer an opportunity to split malicious logic across several processes.
- Living Off the Land: Microsoft Windows includes numerous utilities that attackers can use to execute malicious code with the help of a trusted process. These tools allow adversaries to “trampoline” from one stage of the attack to another without relying on compiled malicious executables.
- Malicious Code in Memory: Memory injection abuses features of Microsoft Windows to interact with the OS even without exploiting vulnerabilities. Attackers can wrap their malware into scripts, documents or other executables, extracting payload into memory during runtime.
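Defenders often hunt for the techniques above through command-line telemetry. The sketch below flags a few patterns commonly associated with fileless tradecraft (encoded PowerShell, script-host abuse, remote scriptlet loading); the patterns are illustrative examples, not a complete detection rule:

```python
import re

# Illustrative triage of process command lines for common fileless-attack
# indicators: encoded PowerShell, rundll32 running JavaScript, mshta
# launching scripts/URLs, and regsvr32 loading remote scriptlets.
# These patterns are examples only, not a production rule set.

FILELESS_PATTERNS = [
    re.compile(r"powershell(\.exe)?\s+.*-enc(odedcommand)?", re.I),
    re.compile(r"rundll32(\.exe)?\s+.*javascript:", re.I),
    re.compile(r"mshta(\.exe)?\s+(http|vbscript|javascript)", re.I),
    re.compile(r"regsvr32(\.exe)?\s+.*/i:https?://", re.I),
]

def fileless_indicators(cmdline):
    """Return True if the command line matches any suspicious pattern."""
    return any(p.search(cmdline) for p in FILELESS_PATTERNS)

print(fileless_indicators("powershell.exe -NoP -Enc SQBFAFgA"))  # True
print(fileless_indicators("notepad.exe report.txt"))             # False
```

In practice such string matching is only a starting point; real detections correlate parent-child process relationships, registry writes, and memory artifacts.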
While some attacks and malware families are fileless in all aspects of their operation, most modern malware that evades detection includes at least some fileless capabilities. Such techniques allow adversaries to operate in the periphery of anti-malware software. The success of such attack methods is the reason for the continued use of the term fileless in discussions among cybersecurity professionals.
Language evolves as people adjust the way they use words and the meaning they assign to them. This certainly happened to fileless, as the industry looked for ways to discuss evasive threats that avoided the file system and misused OS features. For a deeper dive into this topic, read the following three articles upon which I based this overview:
I had not seen this interesting letter (August 27, 2018) from the House Energy and Commerce Committee to DHS about the nature of funding and support for the CVE.
This is the sort of thoughtful work that we hope and expect government departments do, and kudos to everyone involved in thinking about how CVE should be nurtured and maintained.
Security professionals have too many overlapping products under management and it's challenging to get quick and complete answers across hybrid, distributed environments. It's challenging to fully automate detection and response. There is too much confusion about where to get answers, not enough talent to cover the skills requirement, and significant hesitation to put the right solutions in place because there's already been so much investment.
Here are a couple of excerpts:
Here’s the good news: Security solutions are evolving toward cloud, toward built-in intelligence via Machine Learning, and toward unified, integrated-by-design platforms. This approach eliminates the issues of product overlap because each component is designed to leverage the others. It reduces the burden related to maintaining skills because fewer skills are needed and the system is more autonomous. And, it promotes immediate and automated response as opposed to indecision. While there may not be a single platform to replace all 50 or 100 of your disparate security products today, platforms are emerging that can address core security functions while simplifying ownership and providing open integration points to seamlessly share security intelligence across functions.
Click to read the full article: Convergence is the Key to Future-Proofing Security
...Forward-looking security platforms will leverage hybrid cloud architecture to address hybrid cloud environments. They’re autonomous systems that operate without relying on human maintenance, patching, and monitoring. They leverage risk intelligence from across the numerous available sources. And then they rationalize that data and use Machine Learning to generate better security intelligence and feed that improved intelligence back to the decision points. And they leverage built-in integration points and orchestration functionality to automate response when appropriate.
Microsoft is no longer content to simply delegate endpoint security on Windows to other software vendors. The company has released, fine-tuned or rebranded multiple security technologies in a way that will have lasting effects on the industry and Windows users. What is Microsoft’s endpoint security strategy and how is it evolving?
Microsoft offers numerous endpoint security technologies, most of which include “Windows Defender” in their name. Some resemble built-in OS features (e.g., Windows Defender SmartScreen), others are free add-ons (e.g., Windows Defender Antivirus), while some are commercial enterprise products (e.g., the EDR component of Windows Defender Advanced Threat Protection). I created a table that explains the nature and dependencies of these capabilities in a single place. Microsoft is in the process of unifying these technologies under the Windows Defender Advanced Threat Protection branding umbrella—the name that originally referred solely to the company’s commercial incident detection and investigation product.
Microsoft’s approach to endpoint security appears to pursue the following three objectives:
- Protect the OS through baseline security measures for users of modern hardware and Windows versions. This includes safeguarding the integrity of core OS components from bootkits (Windows Defender System Guard); running Microsoft’s browsers in a hypervisor-enforced sandbox (Windows Defender Application Guard); and implementing exploit mitigation (Windows Defender Exploit Guard: Exploit Protection). Providing a robust operating environment for its users has become too important for Microsoft to delegate such tasks to third parties.
- Motivate other vendors to innovate beyond the commodity security controls that Microsoft offers for its modern OS versions. Windows Defender Antivirus and Windows Defender Firewall with Advanced Security (WFAS) on Windows 10 are examples of such tech. Microsoft has been expanding these essential capabilities to be on par with similar features of commercial products. This not only gives Microsoft control over the security posture of its OS, but also forces other vendors to tackle the more advanced problems on the basis of specialized expertise or other strategic abilities.
- Expand the revenue stream from enterprise customers. To centrally manage Microsoft’s endpoint security layers, organizations will likely need to purchase System Center Configuration Manager (SCCM) or Microsoft Intune. Obtaining some of Microsoft’s security technologies, such as the EDR component of Windows Defender Advanced Threat Protection, requires upgrading to the high-end Windows Enterprise E5 license. By bundling such commercial offerings with other products, rather than making them available in a standalone manner, the company motivates customers to shift all aspects of their IT management to Microsoft.
In pursuing these objectives, Microsoft developed the building blocks that are starting to resemble features of commercial Endpoint Protection Platform (EPP) products. The resulting solution is far from perfect, at least at the moment:
- Centrally managing and overseeing these components is difficult for companies that haven’t fully embraced Microsoft for all their IT needs or that lack expertise in technologies such as Group Policy.
- Making sense of the security capabilities, interdependencies and licensing requirements is challenging, frustrating and time-consuming.
- Most of the endpoint security capabilities worth considering are only available for the latest versions of Windows 10 or Windows Server 2016. Some have hardware dependencies that make them incompatible with older hardware.
- Several capabilities have dependencies that are incompatible with other products. For instance, security features that rely on Hyper-V prevent users from using the VMware hypervisor on the endpoint.
- Some technologies are still too immature or impractical for real-world deployments. For example, after I enabled the Controlled folder access feature, using my Windows 10 system became unbearable within a few days.
- The layers fit together in an awkward manner at times. For instance, Microsoft provides two app whitelisting technologies—Windows Defender Application Control (WDAC) and AppLocker—that overlap in some functionality.
While infringing on the territory traditionally dominated by third-parties on the endpoint, Microsoft leaves room for security vendors to provide value and work together with Microsoft’s security technologies. For example:
- Microsoft created the Antimalware Scan Interface (AMSI) for integrations. For instance, when a third-party product stops a threat not detected by Windows Defender Antivirus, the product can use AMSI to notify Microsoft’s tech about the event.
- The company also announced the Microsoft Intelligent Security Association. Members of this invitation-only club can share threat intel by way of Microsoft’s Intelligent Security Graph and collaborate in other unstated ways.
Some of Microsoft’s endpoint security technologies still feel disjointed. They’re becoming less so, as the company fine-tunes its approach to security and matures its capabilities. Microsoft is steadily guiding enterprises toward embracing it as the de facto provider of IT products. Though not all enterprises will embrace an all-Microsoft vision for IT, many will. Endpoint security vendors will need to crystallize their role in the resulting ecosystem, expanding and clarifying their unique value proposition. (Coincidentally, that’s what I’m doing at Minerva Labs, where I run product management.)
I’m always on the quest for real-world malware samples that help educate professionals how to analyze malicious software. As techniques and technologies change, I introduce new specimens and retire old ones from the reverse-engineering course I teach at SANS Institute. Here are some of the legacy samples that were once present in FOR610 materials. Though these malicious programs might not appear relevant anymore, aspects of their functionality are present even in modern malware.
A Backdoor with a Backdoor
To learn fundamental aspects of code-based and behavioral malware analysis, the FOR610 course examined Slackbot at one point. It was an IRC-based backdoor, which its author “slim” distributed as a compiled Windows executable without source code.
Dated April 18, 2000, Slackbot came with a builder that allowed its user to customize the name of the IRC server and channel it would use for Command and Control (C2). Slackbot documentation explained how the remote attacker could interact with the infected system over their designated channel and included this taunting note:
“don’t bother me about this, if you can’t figure out how to use it, you probably shouldn’t be using a computer. have fun. –slim”
Those who reverse-engineered this sample discovered that it had undocumented functionality. In addition to connecting to the user-specified C2 server, the specimen also reached out to a hardcoded server, irc.slim.org.au, that “slim” controlled. The #penix channel gave “slim” the ability to take over all the botnets that his or her “customers” were building for themselves.
It turned out this backdoor had a backdoor! Not surprisingly, backdoors continue to be present in today’s “hacking” tools. For example, I came across a DarkComet RAT builder that was surreptitiously bundled with a DarkComet backdoor of its own.
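The technique that exposed Slackbot’s hidden C2 server, pulling printable strings out of the executable and scanning them for hostnames, can be sketched in a few lines. This is an illustrative Python approximation of what the classic `strings` utility does, not the actual analysis workflow from the course:

```python
import re

def extract_strings(data: bytes, min_len: int = 6):
    """Pull printable ASCII runs out of a binary blob, like the 'strings' tool."""
    return [m.group().decode("ascii")
            for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data)]

def find_domains(strings):
    """Flag strings that look like hostnames -- a quick way to spot hardcoded C2 servers."""
    domain_re = re.compile(r"^[a-z0-9.-]+\.[a-z]{2,6}$", re.IGNORECASE)
    return [s for s in strings if domain_re.match(s)]
```

Running a filter like this over a suspicious executable surfaces candidate C2 hostnames in seconds, which the analyst can then confirm through behavioral analysis.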
You Are an Idiot
The FOR610 course used an example of a simple malevolent web page to introduce the techniques for examining potentially-malicious websites. The page, captured below, was a nuisance that insulted its visitors with the following message:
When Flash reigned supreme among banner ad technologies, the FOR610 course covered several examples of such forms of malware. One of the Flash programs we analyzed was a malicious version of the ad pictured below:
At one point, visitors to legitimate websites, such as MSNBC, were reporting that their clipboards appeared “hijacked” when the browser displayed this ad. The advertisement, implemented as a Flash program, was using the ActionScript setClipboard function to replace victims’ clipboard contents with a malicious URL.
The attacker must have expected the victims to blindly paste the URL into messages without looking at what they were sharing. I remembered this sample when reading about a more recent example of malware that replaced Bitcoin addresses stored in the clipboard with the attacker’s own Bitcoin address for payments.
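The clipboard-swapping trick is mechanically simple. The hypothetical Python sketch below mimics only the substitution step on a plain string, using a regular expression for legacy Bitcoin address formats; real malware would hook the operating system’s clipboard rather than operate on text directly:

```python
import re

# Legacy Bitcoin addresses are Base58 strings starting with 1 or 3
# (no 0, O, I, or l), roughly 26-35 characters long.
BTC_ADDRESS_RE = re.compile(r"\b[13][a-km-zA-HJ-NP-Z1-9]{25,34}\b")

def simulate_clipboard_swap(clipboard_text: str, attacker_address: str) -> str:
    """Replace any Bitcoin-address-like token with the attacker's address,
    mimicking the clipboard-hijacking trick described above."""
    return BTC_ADDRESS_RE.sub(attacker_address, clipboard_text)
```

The same pattern can serve defenders: a monitoring tool could compare what the user copied against what is about to be pasted and warn when an address-like token has changed.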
As malware evolves, so do our analysis approaches, and so do the exercises we use in the FOR610 malware analysis course. It’s fun to reflect upon the samples that at some point were present in the materials. After all, I’ve been covering this topic at SANS Institute since 2001. It’s also interesting to notice that, despite the evolution of the threat landscape, many of the same objectives and tricks persist in today’s malware world.
Scammers use a variety of social engineering tactics when persuading victims to follow the desired course of action. One example of this approach involves including in the fraudulent message personal details about the recipient to “prove” that the victim is in the miscreant’s grip. In reality, the sender probably obtained the data from one of the many breaches that provide swindlers with an almost unlimited supply of personal information.
Personalized Porn Extortion Scam
Consider the case of an extortion scam in which the sender claims to have evidence of the victim’s pornography-viewing habits. The scammer demands payment in exchange for suppressing the “compromising evidence.” A variation of this technique was documented by Stu Sjouwerman at KnowBe4 in 2017. In a modern twist, the scammer includes personal details about the recipient—beyond merely the person’s name—such as the password the victim used:
“****** is one of your password and now I will directly come to the point. You do not know anything about me but I know alot about you and you must be thinking why are you getting this e mail, correct?
I actually setup malware on porn video clips (adult porn) & guess what, you visited same adult website to experience fun (you get my drift). And when you got busy enjoying those videos, your web browser started out operating as a RDP (Remote Desktop Protocol) that has a backdoor which provided me with accessibility to your screen and your web camera controls.”
The email includes a demand for payment via cryptocurrency such as Bitcoin to ensure that “Your naughty secret remains your secret.” The sender calls this “privacy fees.” Variations on this scheme are documented in the Blackmail Email Scam thread on Reddit.
The inclusion of the password that the victim used at some point in the past lends credibility to the sender’s claim that the scammer knows a lot about the recipient. In reality, the miscreant likely obtained the password from one of many data dumps that include email addresses, passwords, and other personal information stolen from breached websites.
Data Breach Lawsuit Scam
In another scenario, the scammer uses the knowledge of the victim’s phone number to “prove” possession of sensitive data. The sender poses as an entity that’s preparing to sue the company that allegedly leaked the data:
“Your data is compromised. We are preparing a lawsuit against the company that allowed a big data leak. If you want to join and find out what data was lost, please contact us via this email. If all our clients win a case, we plan to get a large amount of compensation and all the data and photos that were stolen from the company. We have all information to win. For example, we write to your email and include part your number ****** from a large leak.”
The miscreant’s likely objective is to solicit additional personal information from the victim under the guise of preparing the lawsuit, possibly requesting the victim’s Social Security number, banking account details, etc. The sender might have obtained the victim’s name, email address and phone number from a breached data dump, and is now phishing for other, more lucrative data.
What to Do?
If you receive a message that solicits payment or confidential data under the guise of knowing some of your personal information, be skeptical. This is probably a mass-mailed scam and your best approach is usually to ignore the message. In addition, keep an eye on the breaches that might have compromised your data using the free and trusted service Have I Been Pwned by Troy Hunt, change your passwords when this site tells you they’ve been breached, and don’t reuse passwords across websites or apps.
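Have I Been Pwned’s companion Pwned Passwords service, also run by Troy Hunt, lets you check passwords without revealing them: the client sends only the first five characters of the password’s SHA-1 hash and compares the returned suffixes locally (k-anonymity). Below is a minimal Python sketch of the client-side hashing step; the range endpoint mentioned in the comment reflects the public API as it existed at the time of writing:

```python
import hashlib

def hibp_range_query_parts(password: str):
    """Split a password's SHA-1 hash into the 5-character prefix sent to the
    Pwned Passwords range API and the suffix checked locally (k-anonymity)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

# The prefix would be sent to https://api.pwnedpasswords.com/range/<prefix>;
# the response lists hash suffixes, which you compare against yours locally,
# so the full password (and even its full hash) never leaves your machine.
```

Because only five hex characters leave the client, the service cannot learn which password was checked, which is what makes the scheme safe to use.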
Sometimes an extortion note is real and warrants a closer look and potentially law enforcement involvement. Only you know your situation and can decide on the best course of action. Fortunately, every example that I’ve had a chance to examine turned out to be a social engineering trick that recipients were best off ignoring.
To better understand the persuasion tactics employed by online scammers, take a look at my earlier articles on this topic:
- When Targeted Attacks Aren’t Targeted: The Magic of Cold Reading
- How the Scarcity Principle is Used in Online Scams and Attacks
- A Close Look at PayPal Overpayment Scams That Target Craigslist Sellers
If you’re in the business of safeguarding data and the systems that process it, what do you call your profession? Are you in cybersecurity? Information security? Computer security, perhaps? The words we use, and the way in which the meaning we assign to them evolves, reflects the reality behind our language. If we examine the factors that influence our desire to use one security title over the other, we’ll better understand the nature of the industry and its driving forces.
Until recently, I had no doubts about describing my calling as an information security professional. Yet the term cybersecurity is growing in popularity. This might be because the industry continues to embrace the lexicon used in government and military circles, where cyber reigns supreme. It might also be due to non-experts’ familiarity with the word cyber.
When I asked on Twitter about people’s opinions on these terms, I received several responses, including the following:
- Danny Akacki was surprised to discover, after some research, that the origin of cyber goes deeper than the marketing buzzword that many industry professionals believe it to be.
- Paul Melson and Loren Dealy Mahler viewed cybersecurity as a subset of information security. Loren suggested that cyber focuses on technology, while Paul considered cyber as a set of practices related to interfacing with adversaries.
- Maggie O’Reilly mentioned Gartner’s model that, in contrast, used cybersecurity as the overarching discipline that encompasses information security and other components.
- Rik Ferguson also advocated for cybersecurity over information security, viewing cyber as a term that encompasses multiple components: people, systems, as well as information.
- Jessica Barker explained that “people outside of our industry relate more to cyber,” proposing that if we want them to engage with us, “we would benefit from embracing the term.”
In line with Danny’s initial negative reaction to the word cyber, I’ve perceived cybersecurity as a term associated with heavy-handed marketing practices. Also, like Paul, Loren, Maggie and Rik, I have a sense that cybersecurity and information security are interrelated and somehow overlap. Jessica’s point regarding laypersons relating to cyber piqued my interest and, ultimately, changed my opinion of this term.
There is a way to dig into cybersecurity and information security to define them as distinct terms. For instance, NIST defines cybersecurity as:
“Prevention of damage to, protection of, and restoration of computers, electronic communications systems, electronic communications services, wire communication, and electronic communication, including information contained therein, to ensure its availability, integrity, authentication, confidentiality, and nonrepudiation.”
Compare that description to NIST’s definition of information security:
“The protection of information and information systems from unauthorized access, use, disclosure, disruption, modification, or destruction in order to provide confidentiality, integrity, and availability.”
From NIST’s perspective, cybersecurity is about safeguarding electronic communications, while information security is about protecting information in all forms. This implies that, at least according to NIST, cybersecurity is a subset of information security. While this nuance might be important in some contexts, such as regulations, the distinction probably won’t remain relevant for long, because of the points Jessica Barker raised.
Jessica’s insightful post on the topic highlights the need for security professionals to use language that our non-specialist stakeholders and people at large understand. She outlines a brief history that lends credence to the word cyber. She also explains that while most practitioners seem to prefer information security, this term is least understood by the public, where cybersecurity is much more popular. She explains that:
“The media have embraced cyber. The board has embraced cyber. The public have embraced cyber. Far from being meaningless, it resonates far more effectively than ‘information’ or ‘data’. So, for me, the use of cyber comes down to one question: what is our goal? If our goal is to engage with and educate as broad a range of people as possible, using ‘cyber’ will help us do that. A bridge has been built, and I suggest we use it.”
Technology and the role it plays in our lives continues to change. Our language evolves with it. I’m convinced that the distinction between cybersecurity and information security will soon become purely academic and ultimately irrelevant even among industry insiders. If the world has embraced cyber, security professionals will end up doing so as well. While I’m unlikely to wean myself off information security right away, I’m starting to gradually transition toward cybersecurity.
When cybersecurity professionals communicate with regular, non-technical people about IT and security, they often use language that virtually guarantees that the message will be ignored or misunderstood. This is often a problem for information security and privacy policies, which are written by subject-matter experts for people who lack the expertise. If you’re creating security documents, take extra care to avoid jargon, wordiness and other issues that plague technical texts.
To strengthen your ability to communicate geeky concepts in plain English, consider the following exercise: Take a boring paragraph from a security assessment report or an information security policy and translate it into a sentence that’s no longer than 15 words without using industry terminology. I’m not suggesting that the resulting statement should replace the original text; instead, I suspect this exercise will train you to write more plainly and succinctly.
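If you want a mechanical nudge while practicing, a few lines of Python can enforce the constraints of the exercise. The 15-word limit comes from the exercise itself; the small jargon list below is purely illustrative and should be tailored to the terminology you tend to overuse:

```python
# Illustrative jargon list -- extend it with terms from your own reports.
JARGON = {"leverage", "utilize", "synergy", "paradigm", "holistic",
          "mitigate", "remediate", "posture", "vector"}

def check_plain_summary(sentence: str, max_words: int = 15):
    """Return (ok, problems) for the 15-word, no-jargon rewrite exercise."""
    words = sentence.rstrip(".!?").split()
    problems = []
    if len(words) > max_words:
        problems.append(f"too long: {len(words)} words")
    used = {w.lower().strip(",;:") for w in words} & JARGON
    if used:
        problems.append("jargon: " + ", ".join(sorted(used)))
    return (not problems, problems)
```

Checks like this won’t make a summary good, but they catch the two failure modes the exercise targets: length and industry terminology.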
For example, I extracted and slightly modified a few paragraphs from the Princeton University Information Security Policy, just so that I could experiment with some public document written in legalese. I then attempted to relay the idea behind each paragraph in the form of a 3-line haiku (5-7-5 syllables per line):
This Policy applies to all Company employees, contractors and other entities acting on behalf of Company. This policy also applies to other individuals and entities granted use of Company information, including, but not limited to, contractors, temporary employees, and volunteers.
If you can read this,
you must follow the rules that
are explained below.
When disclosing Confidential information, the proposed recipient must agree (i) to take appropriate measures to safeguard the confidentiality of the information; (ii) not to disclose the information to any other party for any purpose absent the Company’s prior written consent.
Don’t share without a
contract any information
All entities granted use of Company Information are expected to: (i) understand the information classification levels defined in the Information Security Policy; (ii) access information only as needed to meet legitimate business needs.
Know your duties for
safeguarding company info.
Use it properly.
By challenging yourself to shorten a complex concept into a single sentence, you motivate yourself to determine the most important aspect of the text, so you can better communicate it to others. This approach might be especially useful for fine-tuning executive summaries, which often warrant careful attention and wordsmithing. This is just one of the ways in which you can improve your writing skills with deliberate practice.
Is it better to perform product management of information security solutions at a large company or at a startup? Picking the setting that’s right for you isn’t as simple as craving the exuberant energy of a young firm or coveting the resources and brand of an organization that’s been around for a while. Each environment has its challenges and advantages for product managers. The type of innovation, nature of collaboration, sales dynamics, and cultural nuances are among the factors to consider when deciding which setting is best for you.
The perspective below is based on my product management experiences in the field of information security, though I suspect it’s applicable to product managers in other high-tech environments.
Product Management at a Large Firm
In the world of information security, industry incumbents are usually large organizations. This is in part because growing in a way that satisfies investors generally requires the financial might, brand and customer access that’s hard for small cyber-security companies to achieve. Moreover, customers who are not early adopters often find it easier to focus their purchasing on a single provider of unified infosec solutions. These dynamics set the context for the product manager’s role at large firms.
Access to Customers
Though the specifics differ across organizations, product management often involves defining capabilities and driving adoption. The product manager’s most significant advantage at a large company is probably access to customers, thanks to the size of the firm’s sales and marketing organization and the large number of companies that have already purchased some of the firm’s products.
Such access helps with understanding requirements for new products, improving existing technologies, and finding new customers. For example, you could bring your product to a new geography by using the sales force present in that area without having to hire a dedicated team. Also, it’s easier to upsell a complementary solution than build a new customer relationship from scratch.
Access to Expertise
Another benefit of a large organization is access to funds and expertise that’s sometimes hard to obtain in a young, small company. Instead of hiring a full-time specialist for a particular task, you might be able to draw upon the skills and experience of someone who supports multiple products and teams. In addition, assuming your efforts receive the necessary funding, you might find it easier to pursue product objectives and enter new markets in a way that could be hard for a startup to accomplish. This isn’t always easy, because budgetary planning in large companies can be more onerous than venture capital fundraising.
Working in any capacity at an established firm requires that you understand and follow the often-changing bureaucratic processes inherent to any large entity. Depending on the organization’s structure, product managers in such environments might lack direct control over the teams vital to the success of their product. Therefore, the product manager needs to excel at forming cross-functional relationships and influencing indirectly. (Coincidentally, this is also a key skill-set for many Chief Information Security Officers.)
Sometimes even understanding all of your own objectives and success criteria in such environments can be challenging. It can be even harder to stay abreast of the responsibilities of others in the corporate structure. On the other hand, one of the upsides of a large organization is the room to grow one’s responsibilities vertically and horizontally without switching organizations. This is often impractical in small companies.
What It’s Like at a Large Firm
In a nutshell, these are the characteristics inherent to product management roles at large companies:
- An established sales organization, which provides access to customers
- Potentially-conflicting priorities and incentives among groups and individuals within the organization
- Rigid organizational structure and bureaucracy
- Potentially-easier access to funding for sophisticated projects and complex products
- Possibly-easier access to the needed expertise
- Well-defined career development roadmap
I loved working as a security product manager at a large company. I was able to oversee a range of in-house software products and managed services that focused on data security. One of my solutions involved custom-developed hardware, with integrated home-grown and third-party software, serviced by a team of help desk and in-the-field technicians. A fun challenge!
I also appreciated the chance to develop expertise in the industries that my employer serviced, so I could position infosec benefits in the context relevant to those customers. I enjoyed staying abreast of the social dynamics and politics of a siloed, matrixed organization. After a while I decided to leave because I was starting to feel a bit too comfortable. I also developed an appetite for risk and began craving the energy inherent to startups.
Product Management in a Startup
One of the most liberating, yet scary, aspects of product management at a startup is that you’re starting the product from a clean slate. While product managers at established companies often need to account for legacy requirements and internal dependencies, a young firm is generally free of such entanglements, at least at the onset of its journey.
What markets are we targeting? How will we reach customers? What comprises the minimum viable product? Though product managers ask such questions in all types of companies, startups are less likely to survive erroneous answers in the long term. Fortunately, short-term experiments are easier to perform to validate ideas before making strategic commitments.
Experimenting With Capabilities
Working in a small, nimble company allows the product manager to quickly experiment with ideas, get them implemented, introduce them into the field, and gather feedback. In the world of infosec, rapidly iterating through defensive capabilities of the product is useful for multiple reasons, including the ability to assess—based on real-world feedback—whether the approach works against threats.
Have an idea that is so crazy, it just might work? In a startup, you’re more likely to have a chance to try some aspect of your approach, so you can rapidly determine whether it’s worth pursuing further. Moreover, given the mindshare that the industry’s incumbents have with customers, fast iterations help understand which product capabilities, delivered by the startup, the customers will truly value.
In all companies, almost every individual has a certain role for which they’ve been hired. Yet, the specific responsibilities assigned to that role in a young firm often benefit from the person’s interpretation, and are based on the person’s strengths and the company’s need at a given moment. A security product manager working at a startup might need to assist with pre-sales activities, take a part in marketing projects, perform threat research and potentially develop proof-of-concept code, depending on what expertise the person possesses and what the company requires.
People in a small company are less likely to have the “it’s not my job” attitude than those in highly-structured, large organizations. A startup generally has fewer silos, making it easier for people to engage in activities that interest them, even outside their direct responsibilities. This can be stressful and draining at times. On the other hand, it makes it difficult to get bored, and it also gives the product manager an opportunity to acquire skills in areas tangential to product management. (For additional details regarding this, see my article What’s It Like to Join a Startup’s Executive Team?)
The product manager’s access to customers and prospects at a startup tends to be more immediate and direct than at a large corporation. This is in part because of the many hats that the product manager needs to wear, sometimes acting as a sales engineer and at other times helping with support duties. These tasks give the person the opportunity to hear unfiltered feedback from current and potential users of the product.
However, a young company simply lacks a sales force with the scale needed to reach many customers until the firm builds up steam. (See Access to Customers above.) This means that the product manager might need to help identify prospects, which can be outside the comfort zone of individuals who haven’t participated in sales efforts in this capacity.
What It’s Like at a Startup
Here are the key aspects of performing product management at a startup:
- Ability and need to iterate faster to get feedback
- Willingness and need to take higher risks
- Lower bureaucratic burden and red tape
- Much harder to reach customers
- Often fewer resources to deliver on the roadmap
- Fluid designation of responsibilities
I’m presently responsible for product management at Minerva Labs, a young endpoint security company. I’m loving the make-or-break feeling of the startup. For the first time, I’m overseeing the direction of a core product that’s built in-house, rather than managing a solution built upon third-party technology. It’s gratifying to be involved in the creation of new technology in such a direct way.
There are lots of challenges, of course, but every day feels like an adventure as we fight for a seat at the big kids’ table, grow the customer base and break new ground with innovative anti-malware approaches. It’s a risky environment with high highs and low lows, but it feels like the right place for me right now.
Which Setting is Best for You?
Numerous differences between startups and large companies affect the experience of working in these firms. The distinction is highly pronounced for product managers, who oversee the creation of the solutions sold by these companies. You need to understand these differences prior to deciding which of the environments is best for you, but that’s just a start. Next, understand what is best for you, given where you are in life and your professional development. Sometimes the capabilities that you as a product manager will have in an established firm will be just right; at other times, you will thrive in a startup. Work in the environment that appeals to you, but also know when (or whether) it’s time to make a change.
As we move toward that new world, our approach to security must adapt. Humans chasing down anomalies by searching through logs is an approach that will not scale and will not suffice. I included a reference in the article to a book called Afterlife. In it, the protagonist, FBI Agent Will Brody says "If you never change tactics, you lose the moment the enemy changes theirs." It's a fitting quote. Not only must we adapt to survive, we need to deploy IT on a platform that's designed for constant change, for massive scale, for deep analytics, and for autonomous security. New World, New Rules.
Here are a few excerpts:
Our environment is transforming rapidly. The assets we're protecting today look very different than they did just a few years ago. In addition to owned data centers, our workloads are being spread across multiple cloud platforms and services. Users are more mobile than ever. And we don’t have control over the networks, devices, or applications where our data is being accessed. It’s a vastly distributed environment where there’s no single, connected, and controlled network. Line-of-Business managers purchase compute power and SaaS applications with minimal initial investment and no oversight. And end-users access company data via consumer-oriented services from their personal devices. It's grown increasingly difficult to tell where company data resides, who is using it, and ultimately where new risks are emerging. This transformation is on-going and the threats we’re facing are morphing and evolving to take advantage of the inherent lack of visibility.
Here's the good news: The technologies that have exacerbated the problem can also be used to address it. On-premises SIEM solutions based on appliance technology may not have the reach required to address today's IT landscape. But, an integrated SIEM+UEBA designed from the ground up to run as a cloud service and to address the massively distributed hybrid cloud environment can leverage technologies like machine learning and threat intelligence to provide the visibility and intelligence that is so urgently needed.
Machine Learning (ML) mitigates the complexity of understanding what's actually happening and of sifting through massive amounts of activity that may otherwise appear to humans as normal. Modern attacks leverage distributed compute power and ML-based intelligence. So, countering those attacks requires a security solution with equal amounts of intelligence and compute power. As Larry Ellison recently said, "It can't be our people versus their computers. We're going to lose that war. It's got to be our computers versus their computers."
Click to read the full article: New World, New Rules: Securing the Future State.
This cheat sheet offers advice for product managers of new IT solutions at startups and enterprises. To print it, use the one-page PDF version; you can also edit the Word version to customize it for your own needs.
Responsibilities of a Product Manager
- Determine what to build, not how to build it.
- Envision the future of the product domain.
- Align product roadmap to business strategy.
- Define specifications for solution capabilities.
- Prioritize feature requirements, defect correction, technical debt work and other development efforts.
- Help drive product adoption by communicating with customers, partners, peers and internal colleagues.
- Participate in the handling of issue escalations.
- Sometimes take on revenue or P&L responsibilities.
Defining Product Capabilities
- Understand gaps in the existing products within the domain and how customers address them today.
- Understand your firm’s strengths and weaknesses.
- Research the strengths and weaknesses of your current and potential competitors.
- Define the smallest set of requirements for the initial (or next) release (minimum viable product).
- When defining product requirements, balance long-term strategic needs with short-term tactical ones.
- Understand your solution’s key benefits and unique value proposition.
Strategic Market Segmentation
- Market segmentation often accounts for geography, customer size or industry verticals.
- Devise a way of grouping customers based on the similarities and differences of their needs.
- Also account for the similarities in your capabilities, such as channel reach or support abilities.
- Determine which market segments you’re targeting.
- Understand similarities and differences between the segments in terms of needs and business dynamics.
- Consider how you’ll reach prospective customers in each market segment.
Engagement with the Sales Team
- Understand the nature and size of the sales force aligned with your product.
- Explore the applicability and nature of a reseller channel or OEM partnerships for product growth.
- Understand sales incentives pertaining to your product and, if applicable, attempt to adjust them.
- Look for misalignments, such as recurring SaaS product pricing vs. traditional quarterly sales goals.
- Assess what other products are “competing” for the sales team’s attention, if applicable.
- Determine the nature of support you can offer the sales team to train or otherwise support their efforts.
- Gather sales’ negative and positive feedback regarding the product.
- Understand which market segments and use-cases have gained the most traction in the product’s sales.
The Pricing Model
- Understand the value that customers in various segments place on your product.
- Determine your initial costs (software, hardware, personnel, etc.) related to deploying the product.
- Compute your ongoing costs related to maintaining the product and supporting its users.
- Decide whether you will charge customers recurring or one-time (plus maintenance) fees for the product.
- Understand the nature of customers’ budgets, including any CapEx vs. OpEx preferences.
- Define the approach to offering volume pricing discounts, if applicable.
- Define the model for compensating the sales team, including resellers, if applicable.
- Establish the pricing schedule, setting the price based on perceived value.
- Account for the minimum desired profit margin.
Product Delivery and Operations
- Understand the intricacies of deploying the solution.
- Determine the effort required to operate, maintain and support the product on an ongoing basis.
- Determine the technical steps, personnel, tools, support requirements and the associated costs.
- Document the expectations and channels of communication between you and the customer.
- Establish the necessary vendor relationships for product delivery, if applicable.
- Clarify which party in the relationship has which responsibilities for monitoring, upgrades, etc.
- Allocate the necessary support, R&D, QA, security and other staff to maintain and evolve the product.
- Obtain the appropriate audits and certifications.
Product Management at Startups
- Ability and need to iterate faster to get feedback
- Willingness and need to take higher risks
- Lower bureaucratic burden and red tape
- Much harder to reach customers
- Often fewer resources to deliver on the roadmap
- Fluid designation of responsibilities
Product Management at Large Firms
- An established sales organization, which provides access to customers
- Potentially conflicting priorities and incentives among groups and individuals within the organization
- Rigid organizational structure and bureaucracy
- Potentially easier access to funding for sophisticated projects and complex products
- Possibly easier access to the needed expertise
- Well-defined career development roadmap
Authored by Lenny Zeltser, who has been responsible for product management of information security solutions at companies large and small. This cheat sheet, version 1.0, is released under the Creative Commons v3 “Attribution” License.
CrowdStrike acquired Payload Security, the company behind the automated malware analysis sandbox technology Hybrid Analysis, in November 2017. Jan Miller founded Payload Security approximately 3 years earlier. The interview I conducted with Jan in early 2015 captured his mindset at the onset of the journey that led to this milestone. I briefly spoke with Jan again, a few days after the acquisition. He reflected upon his progress over the three years of leading Payload Security so far and his plans for Hybrid Analysis as part of CrowdStrike.
Jan, why did you and your team decide to join CrowdStrike?
Developing a malware analysis product requires a constant stream of improvements to the technology, not only to keep up with the pace of malware authors’ attempts to evade automated analysis but also to innovate and embrace the community. The team has accomplished a lot thus far, but joining CrowdStrike gives us the ability to access a lot more resources and grow the team to rapidly improve Hybrid Analysis in the competitive space that we live in. We will have the ability to bring more people into the team and also enhance and grow the infrastructure and integrations behind the free Hybrid Analysis community platform.
What role did the free version of your product, available at hybrid-analysis.com, play in the company’s evolution?
A lot of people in the community have been using the free version of Hybrid Analysis to analyze their own malware samples, share them with friends or to look up existing analysis reports and extract intelligence. Today, the site has approximately 44,000 active users and around 1 million sessions per month. One of the reasons the site took off is the simplicity and quality of the reports, focusing on what matters and enabling effective incident response.
The success of Hybrid Analysis was, to a large extent, due to the engagement from the community. The samples we have been receiving allowed us to constantly field-test the system against the latest malware, stay on top of the game and also to embrace feedback from security professionals. This allowed us to keep improving at rapid pace in a competitive space, successfully.
What will happen to the free version of Hybrid Analysis? I saw on Twitter that your team pinky-promised to continue making it available for free to the community, but I was hoping you could comment further on this.
I’m personally committed to ensuring that the community platform will stay not only free, but grow even more useful and offer new capabilities shortly. Hybrid Analysis deserves to be the place for professionals to get a reasoned opinion about any binary they’ve encountered. We plan to open up the API, add more integrations and other free capabilities in the near future.
What stands out in your mind as you reflect upon your Hybrid Analysis journey so far? What’s motivating you to move forward?
Starting out without any noteworthy funding, co-founders or advisors, in a saturated high-tech market that is extremely fast paced and full of money, it seemed impossible to succeed on paper. But the reality is: if you are offering a product or service that is solving a real-world problem considerably better than the market leaders, you always have a chance. My hope is that people who are considering becoming entrepreneurs will be encouraged to pursue their ideas, but be prepared to work 80 hours a week. With the right technology, feedback from the community, amazing team members and insightful advisors to lean on, you can make it happen.
In fact, it’s because of the value Hybrid Analysis has been adding to the community that I was able to attract the highly talented individuals that are currently on the team. It has always been important for me to make a difference, to contribute something and have a true impact on people’s lives. It all boils down to bringing more light than darkness into the world, as cheesy as that might sound.
Today's example is about a firm that many rely on for security strategy, planning, and execution. The article I read stated that they were "targeted by a sophisticated hack" but later explains that the attacker compromised a privileged account that provided unrestricted "access to all areas". And, according to sources, the account only required a basic password with no two-step or multi-factor authentication. That doesn't sound too sophisticated, does it? Maybe they brute-forced it, or maybe they just guessed the password (or found it written down in an office?)
It reminded me of an attack on a security vendor back in 2011. As I recall, there was a lot of talk of the sophistication and complexity of the attack. It was called an Advanced Persistent Threat (and maybe some aspects of it were advanced). But, when the facts came out, an employee simply opened an email attachment that introduced malware into the environment - again, not overly sophisticated in terms of what we think a hack to be.
The quantity, availability, and effectiveness of attack techniques are enough to make anyone uncomfortable with their security posture. I previously wrote about a German company that, in a breach response, wrote that it is "virtually impossible to provide viable protection against organized, highly professional hacking attacks." CISOs are being told that they should expect to be breached. The only questions are about when and how to respond. It makes you feel like there's no hope; like there's no point in trying.
However, if you look at the two examples above that were described as highly sophisticated, they may have been avoided with simple techniques such as employee education, malware detection, and multi-factor authentication. I don't mean to over-simplify. I'm not saying it's all easy or that these companies are at-fault or negligent. I'm just calling for less hyperbole in the reporting. Call out the techniques that help companies avoid similar attacks. Don't describe an attack as overly sophisticated if it's not. It makes people feel even more helpless when, perhaps, there are some simple steps that can be taken to reduce the attack surface.
I'd also advocate for more transparency from those who are attacked. Companies shouldn't feel like they have to make things sound more complicated or sophisticated than they are. There's now a growing history of reputable companies (including in the security industry) who have been breached. If you're breached, you're in good company. Let's talk in simple terms about the attacks that happen in the real world. An "open kimono" approach will be more effective at educating others in prevention. And again, less hyperbole - we don't need to overplay to emotion here. Everyone is scared enough. We know the harsh reality of what we (as security professionals) are facing. So, let's strive to better understand the real attack surface and how to prioritize our efforts to reduce the likelihood of a breach.
We know that the attack took advantage of a flaw in Apache Struts (that should have been patched). Struts is a framework for building applications. It lives at the application tier. The data, obviously, resides at the data tier. Once the application was compromised, it really doesn't matter if the data was encrypted because the application is allowed to access (and therefore to decrypt) the data.
I won't get into all the various encryption techniques that are possible but there are two common types of data encryption for these types of applications. There's encryption of data in motion so that nobody can eavesdrop on the conversation as data moves between tiers or travels to the end users. And there's encryption of data at rest that protects data as it's stored on disk so that nobody can pick up the physical disk (or the data file, depending on how the encryption is applied) and access the data. Once the application is authenticated against the database and runs a query against the data, it is able to access, view, and act upon the data even if the data was encrypted while at rest.
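To make the point concrete, here's a minimal Python sketch. The XOR-based "cipher" below is a deliberately toy stand-in for real at-rest encryption (never use it for actual protection); the names and data are hypothetical. What it illustrates is the argument above: whichever process holds the key, including a compromised application, reads the plaintext as a matter of course.

```python
import hashlib
from itertools import cycle

def toy_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Toy XOR stream 'cipher' standing in for real at-rest encryption."""
    keystream = cycle(hashlib.sha256(key).digest())
    return bytes(b ^ k for b, k in zip(plaintext, keystream))

# XOR with the same keystream is its own inverse.
toy_decrypt = toy_encrypt

# The record is encrypted "at rest" on disk...
key = b"app-tier-secret"
stored = toy_encrypt(b"SSN=123-45-6789", key)
assert stored != b"SSN=123-45-6789"  # unreadable without the key

# ...but the application tier holds the key, so any code running as the
# application (legitimate or attacker-controlled) recovers the plaintext.
print(toy_decrypt(stored, key))
```

The design point: at-rest encryption defends against stolen disks and files, not against an attacker who has taken over the tier that legitimately decrypts.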
Note that there is a commonly-applied technique that applies at-rest encryption at the application tier. I don't want to confuse the conversation with too much detail, but it usually involves inserting some code into the application to encrypt/decrypt. I suspect that if the application is compromised then app-tier encryption would have been equally unhelpful.
The bottom line here is that information security requires a broad, layered defense strategy. There are numerous types of attacks. A strong security program addresses as many potential attack vectors as possible within reason. (My use of "within reason" is a whole other conversation. Security strategies should evaluate risk in terms of likelihood of an attack and the damage that could be caused.) I already wrote about a layered approach to data protection within the database tier. But that same approach of layering security applies to application security (and information security in general). You have to govern the access controls, ensure strong enough authentication, understand user context, identify anomalous behavior, encrypt data, and, of course, patch your software and maintain your infrastructure. This isn't a scientific analysis. I'm just saying that encryption isn't a panacea and probably wouldn't have helped at all in this case.
Equifax says that their "security organization was aware of this vulnerability at that time, and took efforts to identify and to patch any vulnerable systems in the company's IT infrastructure." Clearly, humans need to rely on technology to help identify what systems exist in the environment, what software is installed, which versions, etc. I have no idea what tools Equifax might have used to scan their environment. Maybe the tool failed to find this install. But their use of "at that time" bothers me too. We can't rely on point-in-time assessments. We need continuous evaluations on a never ending cycle. We need better intelligence around our IT infrastructures. And as more workloads move to cloud, we need a unified approach to IT configuration compliance that works across company data centers and multi-cloud environments.
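The shift from point-in-time assessment to continuous evaluation can be sketched simply: keep an inventory of what's installed and re-check it against an advisory feed on every cycle, not once. This Python sketch uses hypothetical hosts, package names, and version numbers purely for illustration.

```python
# Hypothetical inventory: host -> {package: installed version}
inventory = {
    "web-01": {"struts": "2.3.31", "openssl": "1.1.1"},
    "web-02": {"struts": "2.5.13", "openssl": "1.1.1"},
}

# Hypothetical advisory feed: package -> known-vulnerable versions
advisories = {"struts": {"2.3.31", "2.3.32"}}

def find_vulnerable(inventory, advisories):
    """Return (host, package, version) for every vulnerable install."""
    findings = []
    for host, packages in inventory.items():
        for pkg, version in packages.items():
            if version in advisories.get(pkg, ()):
                findings.append((host, pkg, version))
    return findings

# Run this on a schedule (or on every inventory/advisory change),
# rather than as a one-off assessment "at that time."
print(find_vulnerable(inventory, advisories))
```

The hard part in practice is keeping the inventory itself accurate across data centers and clouds; the check is trivial once you have it.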
100% protection may be impossible. The best we can do is weigh the risks and apply as much security as possible to mitigate those risks. We should also all be moving to a continuous compliance model where we are actively assessing and reassessing security in real time. And again... layer, layer, layer.
This cheat sheet outlines tips for reversing malicious Windows executables via static and dynamic code analysis with the help of a debugger and a disassembler. To print it, use the one-page PDF version; you can also edit the Word version to customize it for your own needs.
Overview of the Code Analysis Process
1. Examine static properties of the Windows executable for initial assessment and triage.
2. Identify strings and API calls that highlight the program’s suspicious or malicious capabilities.
3. Perform automated and manual behavioral analysis to gather additional details.
4. If relevant, supplement your understanding by using memory forensics techniques.
5. Use a disassembler for static analysis to examine code that references risky strings and API calls.
6. Use a debugger for dynamic analysis to examine how risky strings and API calls are used.
7. If appropriate, unpack the code and its artifacts.
8. As your understanding of the code increases, add comments and labels; rename functions and variables.
9. Progress to examine the code that references or depends upon the code you’ve already analyzed.
10. Repeat steps 5-9 above as necessary (the order may vary) until analysis objectives are met.
Common 32-Bit Registers and Uses
|EAX||Addition, multiplication, function results|
|ECX||Counter; used by LOOP and others|
|EBP||Baseline/frame pointer for referencing function arguments (EBP+value) and local variables (EBP-value)|
|ESP||Points to the current “top” of the stack; changes via PUSH, POP, and others|
|EIP||Instruction pointer; points to the next instruction; shellcode gets it via call/pop|
|EFLAGS||Contains flags that store outcomes of computations (e.g., Zero and Carry flags)|
|FS||F segment register; FS:[0] points to the SEH chain, FS:[0x30] points to the PEB.|
Common x86 Assembly Instructions
|mov EAX,0xB8||Put the value 0xB8 in EAX.|
|push EAX||Put EAX contents on the stack.|
|pop EAX||Remove contents from top of the stack and put them in EAX .|
|lea EAX,[EBP-4]||Put the address of variable EBP-4 in EAX.|
|call EAX||Call the function whose address resides in the EAX register.|
|add esp,8||Increase ESP by 8 to shrink the stack by two 4-byte arguments.|
|sub esp,0x54||Shift ESP by 0x54 to make room on the stack for local variable(s).|
|xor EAX,EAX||Set EAX contents to zero.|
|test EAX,EAX||Check whether EAX contains zero, set the appropriate EFLAGS bits.|
|cmp EAX,0xB8||Compare EAX to 0xB8, set the appropriate EFLAGS bits.|
Understanding 64-Bit Registers
- EAX→RAX, ECX→RCX, EBX→RBX, ESP→RSP, EIP→RIP
- Additional 64-bit registers are R8-R15.
- RSP is often used to access stack arguments and local variables, instead of EBP.
- R8 is the full 64-bit register; R8D is its low 32 bits, R8W its low 16 bits, and R8B its low 8 bits.
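The sub-register aliasing is just bit masking. A short Python sketch with an arbitrary example value:

```python
r8 = 0x1122334455667788  # full 64-bit register (example value)

r8d = r8 & 0xFFFFFFFF    # R8D: low 32 bits
r8w = r8 & 0xFFFF        # R8W: low 16 bits
r8b = r8 & 0xFF          # R8B: low 8 bits

assert r8d == 0x55667788
assert r8w == 0x7788
assert r8b == 0x88
```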
Passing Parameters to Functions
|arg0||[EBP+8] on 32-bit, RCX on 64-bit|
|arg1||[EBP+0xC] on 32-bit, RDX on 64-bit|
|arg2||[EBP+0x10] on 32-bit, R8 on 64-bit|
|arg3||[EBP+0x14] on 32-bit, R9 on 64-bit|
Decoding Conditional Jumps
|JA / JG||Jump if above (unsigned)/jump if greater (signed).|
|JB / JL||Jump if below (unsigned)/jump if less (signed).|
|JE / JZ||Jump if equal; same as jump if zero.|
|JNE / JNZ||Jump if not equal; same as jump if not zero.|
|JGE / JNL||Jump if greater or equal; same as jump if not less.|
Some Risky Windows API Calls
- Code injection: CreateRemoteThread, OpenProcess, VirtualAllocEx, WriteProcessMemory, EnumProcesses
- Dynamic DLL loading: LoadLibrary, GetProcAddress
- Memory scraping: CreateToolhelp32Snapshot, OpenProcess, ReadProcessMemory, EnumProcesses
- Data stealing: GetClipboardData, GetWindowText
- Keylogging: GetAsyncKeyState, SetWindowsHookEx
- Embedded resources: FindResource, LockResource
- Unpacking/self-injection: VirtualAlloc, VirtualProtect
- Query artifacts: CreateMutex, CreateFile, FindWindow, GetModuleHandle, RegOpenKeyEx
- Execute a program: WinExec, ShellExecute, CreateProcess
- Web interactions: InternetOpen, HttpOpenRequest, HttpSendRequest, InternetReadFile
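A crude way to triage a binary for these names is simply to search its raw bytes for them. This Python sketch illustrates the idea; a real tool would parse the PE import table (e.g., with a library such as pefile) rather than grep for strings, and the sample file here is fabricated for the example.

```python
RISKY_APIS = [
    "CreateRemoteThread", "WriteProcessMemory", "SetWindowsHookEx",
    "GetAsyncKeyState", "VirtualAllocEx", "InternetOpen",
]

def find_risky_apis(path: str) -> list:
    """Return risky API names that appear as raw ASCII strings in the file."""
    with open(path, "rb") as f:
        data = f.read()
    return sorted(name for name in RISKY_APIS if name.encode("ascii") in data)

# Fabricated "binary" containing two of the names, for demonstration.
with open("sample.bin", "wb") as f:
    f.write(b"\x00CreateRemoteThread\x00GetAsyncKeyState\x00")

print(find_risky_apis("sample.bin"))
```

A hit only means the name is present, not that the call is made; packed samples will hide these strings entirely, which is itself a useful signal.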
Additional Code Analysis Tips
- Be patient but persistent; focus on small, manageable code areas and expand from there.
- Use dynamic code analysis (debugging) for code that’s too difficult to understand statically.
- Look at jumps and calls to assess how the specimen flows from one “interesting” code block to another.
- If code analysis is taking too long, consider whether behavioral or memory analysis will achieve the goals.
- When looking for API calls, know the official API names and the associated native APIs (Nt, Zw, Rtl).
Authored by Lenny Zeltser with feedback from Anuj Soni. Malicious code analysis and related topics are covered in the SANS Institute course FOR610: Reverse-Engineering Malware, which they’ve co-authored. This cheat sheet, version 1.0, is released under the Creative Commons v3 “Attribution” License.
Many breaches occur in ways that make it difficult to pinpoint exactly what might have prevented them. Or the companies involved hide details about what actually happened or how. In some cases, they lie. They might claim there was some Advanced Persistent Threat on the network when, in reality, it was a simple phishing attack where credentials were simply handed over.
In one recent case, a third party vendor apparently uploaded a database file to an unsecured Amazon AWS server. A media outlet covering the story called out that it was not hacking because the data was made so easily available. Numerous checkpoints come to mind that each could have prevented or lessened the damage in this scenario. I’d like to paint a picture of the numerous layers of defense that should be in place to help prevent this type of exposure.
I'll leave you with this. Today's computing landscape is extremely complex and constantly changing. But security controls are evolving to address what has been called the extended enterprise (which includes cloud computing and user mobility among other characteristics). Don't leave security in the hands of humans. And apply security in layers to cover as many potential attack vectors as possible. Enable security by default and apply automated checks to ensure that security configuration guidelines are being followed.
Ransomware has become an increasingly serious threat. CryptoWall, TeslaCrypt and Locky are just some of the ransomware variants that have infected large numbers of victims. Petya is the newest strain and the most devious among them.
Petya not only encrypts files but renders the system completely unusable, leaving the victim little choice but to pay the ransom: it encrypts the filesystem’s Master File Table (MFT), which leaves the operating system unable to load. The MFT is an essential structure in the NTFS file system. It contains a record for every file and directory on the NTFS logical volume, and each record holds the particulars the operating system needs to boot properly.
Like much other malware, Petya is widely distributed via a job application spear-phishing email that comes with a Dropbox link, luring the victim by claiming that the link contains a self-extracting CV; in fact, it contains a self-extracting executable that later unleashes its malicious behavior.
Petya ransomware has two infection stages. The first stage is MBR infection and encryption key generation, including the decryption code used in ransom messages. The second stage is MFT encryption.
First Stage of Encryption
The MBR infection is performed through straightforward \\.\PhysicalDrive0 manipulation with the help of the DeviceIoControl API. Petya first retrieves the physical location of the root drive \\.\C: by sending the IOCTL_VOLUME_GET_VOLUME_DISK_EXTENTS control code to the device driver, and then obtains the extended disk partition information of \\.\PhysicalDrive0.
The dropper encrypts the original MBR by XORing each byte with 0x37 and saves it for later use. It also writes 34 disk sectors filled with 0x37. Immediately after those 34 sectors sits Petya’s MFT-infecting code, and the encrypted original MBR is located at sector 56.
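Single-byte XOR of this kind is trivially reversible once the key (0x37) is known, since XOR is its own inverse. A small Python sketch (the sample bytes are illustrative, resembling the start of a conventional MBR):

```python
def xor_0x37(data: bytes) -> bytes:
    """XOR every byte with 0x37; applying it twice restores the original."""
    return bytes(b ^ 0x37 for b in data)

original_mbr = bytes.fromhex("33c08ed0bc007c")  # illustrative MBR-like bytes
encrypted = xor_0x37(original_mbr)

assert encrypted != original_mbr
assert xor_0x37(encrypted) == original_mbr  # XOR with the same key round-trips
```

This is obfuscation rather than cryptography; it keeps the backup MBR from being spotted by a naive scan, nothing more.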
After the MBR infection, it intentionally crashes the system by triggering NTRaiseHardError. This triggers a BSOD, and upon restart the machine loads using the infected MBR.
Inspecting the dumped image of the disk, we discovered a fake CHKDSK screen, along with the ransom message and ASCII skull art.
Second Infection Stage
The stage 2 infection code is written as 16-bit code, which uses BIOS interrupt calls.
Upon system boot-up, Petya’s malicious code, located at sector 34, is loaded into memory. It first determines whether the system is already infected by checking whether the first byte at that sector is 0x0. If the system is not yet infected, it displays the fake CHKDSK screen.
When the victim sees the screen shown in Figure 8, it means that the MFT has already been encrypted using the Salsa20 algorithm.
Petya Ransomware Page
The webpage where the victim accesses their personal decryption key is protected against bots and contains information about when the Petya ransomware project was launched, warnings on what not to do when recovering files, and an FAQ page. The page is surprisingly user-friendly and shows the days left before the ransom price doubles.
It also contains news feeds, including different blogs and news from AV companies warning about Petya.
They also provide a step-by-step process for paying the ransom, including instructions on how to purchase bitcoin. Web-based support is included too, in case the victim encounters problems with the transaction. Petya’s ransom is also a lot cheaper than that of other ransomware.
On Step 4 of the payment procedure, the “next” button is disabled until the operators have confirmed that they received the payment.
Below is a shot of ThreatTrack’s ThreatSecure Network dashboard catching Petya. Tools like ThreatSecure can detect and disrupt attacks in real time.
While I believe that, on the whole, the CFAA and more urgently the DMCA need dramatic reform if not outright repeal, I'm just not sure I'm completely on board with where this idea is going. I've discussed my displeasure with the CFAA on a few of our recent podcasts if you follow our Down the Security Rabbithole Podcast series, and I would likely throw a party if the DMCA were repealed tomorrow - but broadly unlocking "research" is dangerous.
There is no doubt in my mind that security research is critical in exposing safety and security issues in matters that affect the greater public good. However, let's not confuse legitimate research with thinly veiled extortion or a license to hack at will. We can all remember the incident Apple had where a hacker purportedly had exposed a flaw in their online forums, then to prove his point he exploited the vulnerability and extracted data of real users. All in the name of "research" right? I don't think so.
You see, what a recent conversation with Shawn Tuma taught me is that under the CFAA we have one of these "I'll know it when I see it" conditions, where a prosecuting attorney can choose either to go after someone or to look the other way if they believe the person was acting in good faith and for the public good... or some such. This type of power makes me uncomfortable, as it gives that prosecuting attorney way too much room. Room for what, you ask? How about room to be swayed by a big corporation... I'm looking at you, AT&T.
Let me lay out a scenario for you. Say you are a security professional interested in home automation and alarm systems. You purchase one, and begin to conduct research into the types of vulnerabilities one of these things is open to - since you'll be installing it in your home and all. You uncover some major design flaws, and maybe even a way to remotely disable the home alarm feature on thousands of units across the country. You want to notify the company, get them to fix the issue, and maybe get a little by-line credit for it. Only the company slaps a DMCA lawsuit on you for reverse engineering their product and you're in hot water. Clearly they have more money and attorneys than you do. Your choices are few - drop the research or face criminal prosecution. Odds are you're not even getting a choice.
In that scenario - it's clear that reforms are needed. Crystal clear, in fact.
The issue is we need to protect legitimate research from prosecutorial malfeasance while still allowing for laws to protect intellectual property and a company's security. So you see, the issue isn't as simple as opening up research, but much more subtle and deliberate.
How do we limit the law and protect legitimate research, while allowing for the protections companies still deserve? I think the answer lies in how we define a researcher. I propose that we require researchers to declare their research and its intent, and draft ethical guidelines which can be agreed upon (and enforced on both ends) between the researcher and the organization being researched. There must be rules of engagement, and rules for "responsible and coordinated disclosure". The laws must be tweaked to shield researchers who have declared intent and follow the rules of engagement from being prosecuted by a company that is simply trying to skirt responsibility for safety, privacy and security. Furthermore, there must be provisions for matters that affect the greater good - which companies simply cannot opt out of.
Now, if you ask me if I believe this will happen any time soon, that's another matter entirely. Big companies will use their lobbying power to make sure this type of reform never happens, because it simply doesn't serve their self-interest. Having seen first-hand the inner workings of a large enterprise technology company - I know exactly how much profit is valued over security (or anything else, really). Profit now, and maybe no one will notice the big gaping holes later. That's just how it is in real life. But when public safety comes into play I think we will see a few major incidents where we have loss of life directly attributed to security flaws before we see any sort of reform. Of course when we do have serious incidents, they'll simply go after the hackers and shed any responsibility. That's just how these things work.
So in closing - I think there is a lot of work to be done here. First we need to more closely define and create formal understanding of security research. Once we're comfortable with that, we need to refine the CFAA and maybe get rid of the DMCA - to legitimize security research into the areas that affect public safety, privacy and security.
Here's a brief excerpt:
Mobile is the new black. Every major analyst group seems to have a different phrase for it but we all know that workforces are increasingly mobile and BYOD (Bring Your Own Device) is quickly spreading as the new standard. As the mobile access landscape changes and organizations continue to lose more and more control over how and where information is used, there is also a seismic shift taking place in the underlying mobile security models.
Mobile Device Management (MDM) was a great first response by an Information Security industry caught on its heels by the overwhelming speed of mobile device adoption. Emerging at a time when organizations were purchasing and distributing devices to employees, MDM provided a mechanism to manage those devices, ensure that rogue devices weren’t being introduced onto the network, and enforce security policies on those devices. But MDM was as intrusive to end-users as it was effective for enterprises.
Continue Reading
Anyway, my plan is to take notes and blog or tweet about what I see. Of course, I'll primarily be looking at Identity and Access technologies, which is only a subset of Information Security. And I'll be looking for two things: Innovation and Uniqueness. If your company has a claim on either of those in IAM solutions, please try to catch my attention.
The two big components of the third platform are mobile and cloud. I'll talk about both.
A few months back, I posed the question "Is MAM Identity and Access Management's next big thing?" and since I did, it's become clear to me that the answer is a resounding YES!
Today, I came across a blog entry explaining why Android devices are a security nightmare for companies. The pain is easy to see. OS Updates and Security Patches are slow to arrive and user behavior is, well... questionable. So organizations should be concerned about how their data and applications are being accessed across this sea of devices and applications. As we know, locking down the data is not an option. In the extended enterprise, people need access to data from wherever they are on whatever device they're using. So, the challenge is to control the flow of information and restrict it to proper use.
So, here's a question: is MDM the right approach to controlling access for mobile users? Do you really want to stand up a new technology silo that manages end-user devices? Is that even practical? I think certain technologies live a short life because they quickly get passed over by something new and better (think electric typewriters). MDM is one of those. Although it's still fairly new and good at what it does, I would make the claim that MDM is antiquated technology. In a BYOD world, people don't want to turn control of their devices over to their employers. The age of enterprises controlling devices went out the window with Blackberry's market share.
Containerization is where it's at. With App Containerization, organizations create a secure virtual workspace on mobile devices that enables corporate-approved apps to access, use, edit, and share corporate data while protecting that data from escaping to unapproved apps, personal email, OS malware, and other on-device leakage points. For enterprise use-case scenarios, this just makes more sense than MDM. And many of the top MDM vendors have validated the approach by announcing MAM offerings. Still, these solutions maintain a technology silo specific to remote access, which doesn't make much sense to me.
As an alternate approach, let's build MAM capabilities directly into the existing Access Management platform. Access Management for the third platform must accommodate mobile device use-cases. There's no reason to manage mobile device access differently than desktop access. It's the same applications, the same data, and the same business policies. User provisioning workflows should accommodate provisioning mobile apps and data rights just as they've been extended to provision Privileged Account rights. You don't want or need separate silos.
The same can be said for cloud-hosted apps. Cloud apps are simply part of the extended enterprise and should also be managed via the enterprise Access Management platform.
There's been a lot of buzz in the IAM industry about managing access (and providing SSO) to cloud services. A number of niche vendors have even popped up that provide that as their primary value proposition. But the core technologies behind these stand-alone solutions are nothing new. In most cases, it's basic federation. In some cases, it's ESSO-style form-fill. But there's no magic to delivering SSO to SaaS apps. In fact, it's typically easier than SSO to enterprise apps because SaaS infrastructures are newer and support newer standards and protocols (SAML, REST, etc.).
I guess if I had to boil this down, I'm really just trying to dispel the myths about mobile and cloud solutions. When you get past the marketing jargon, we're still talking about Access Management and Identity Governance. Some of the new technologies are pretty cool (containerization solves some interesting, complex problems related to BYOD). But in the end, I'd want to manage enterprise access in one place with one platform. One Identity, One Platform. I wouldn't stand up an IDaaS solution just to have SSO to cloud apps. And I wouldn't want to introduce an MDM vendor to control access from mobile devices.
The third platform simply extends the enterprise beyond the firewall. The concept isn't new and the technologies are mostly the same. As more and newer services adopt common protocols, it gets even easier to support increasingly complex use-cases. An API Gateway, for example, allows a mobile app to access legacy mainframe data over REST protocols. And modern Web Access Management (WAM) solutions perform device fingerprinting to increase assurance and reduce risk while delivering an SSO experience. Mobile Security SDKs enable organizations to build their own apps with native security that's integrated with the enterprise WAM solution (this is especially valuable for consumer-facing apps).
And all of this should be delivered on a single platform for Enterprise Access Management. That's third-platform IAM.
Think about enterprise security from the viewpoint of the CISO. There are numerous layers of overlapping security technologies that work together to reduce risk to a point that's comfortable. Network security, endpoint security, identity management, encryption, DLP, SIEM, etc. But even when these solutions are implemented according to plan, I still see two common gaps that need to be taken more seriously.
One is control over unstructured data (file systems, SharePoint, etc.). The other is back-door access to application databases. There is a ton of sensitive information exposed through those two avenues that isn't protected by the likes of SIEM solutions or IAM suites. Even DLP solutions tend to focus on perimeter defense rather than who has access. STEALTHbits has solutions to fill the gaps for unstructured data and for Microsoft SQL Server, so I spend a fair amount of time talking to CISOs and their teams about these issues.
While reading through some IAM industry materials today, I found an interesting write-up on how Oracle is using its virtual directory technology to solve the problem for Oracle database customers. Oracle's IAM suite leverages Oracle Virtual Directory (OVD) as an integration point with an Oracle database feature called Enterprise User Security (EUS). EUS enables database access management through an enterprise LDAP directory (as opposed to managing a spaghetti mapping of users to database accounts and the associated permissions.)
By placing OVD in front of EUS, you get instant LDAP-style management (and IAM integration) without a long, complicated migration process. Pretty compelling use-case. If you can't control direct database permissions, your application-side access controls seem less important. Essentially, you've locked the front door but left the back window wide open. Something to think about.
Active Directory Migration Challenges
Over the past decade, Active Directory (AD) has grown out of control. It may be due to organizational mergers or disparate Active Directory domains that sprouted up over time, but many AD administrators are now looking at dozens of Active Directory forests and even hundreds of AD domains wondering how it happened and wishing it was easier to manage on a daily basis.
One of the top drivers for AD Migrations is enablement of new technologies such as unified communications or identity and access management. Without a shared and clearly articulated security model across Active Directory domains, it’s extremely difficult to leverage AD for authentication to new business applications or to establish the related business rules that may be based on AD attributes or security group memberships.
Domain consolidation is not a simple task. Whether you're moving from one platform to another, doing some AD security remodeling, or just consolidating domains for improved management and reduced cost, there are numerous steps, lots of unknowns and an overwhelming feeling that you might be missing something. Sound familiar?
One of the biggest fears in Active Directory migration projects is that business users will lose access to their critical resources during the migration. To reduce the likelihood of that occurring, many project leaders choose to enable a dirty migration; they enable historical SIDs (SID history), which carry the old security identifiers and group memberships from the source domain into the new domain. Unfortunately, enabling historical SIDs perpetuates one of the main challenges that initially drove the migration project. The dirty migration approach preserves the various security models that have been implemented over the years, making AD difficult to manage and making it nearly impossible to determine who has what rights across the environment.
Clean Active Directory Migrations
The alternative to a dirty migration is to disallow historical SIDs and thereby enable a clean migration where rights are applied as-needed in an easy-to-manage and well-articulated security model. Security groups are applied on resources according to an intentional model that is defined up-front, and permissions are limited to a least-privilege model where only those who require rights actually get them.
Not all consolidation or migration projects are the same. The motivations differ, the technologies differ, and the Active Directory organizational structure and assets differ wildly. Most solutions on the market provide point A to point B migrations of Active Directory assets. This type of migration often contributes to making the problem worse over time. There's nothing wrong with using an Active Directory tool to help you perform an AD forest or domain migration, but knowing which assets to move and how to structure or even restructure them in the target domain is critical.
Enabling a clean migration and transforming the Active Directory security model requires a few steps to be followed. It starts with assessment and cleanup of the source Active Directory environments. You should assess what objects are out there, how they’re being used, and how they’re currently organized. Are there dormant user accounts or unused computer objects? Are there groups with overlapping membership? Are there permissions that are unused or inappropriate? Are there toxic or high-risk conditions in the environment? This type of intelligence enables visibility into which objects you need to move, how they're structured, how the current domain compares to the target domain, and where differences exist in GPO policies, schema, and naming conventions. The dormant and unused objects as well as any toxic or high-risk conditions can be remediated so that those conditions aren’t propagated to the target environment.
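The dormant-account portion of that assessment can be sketched in a few lines. This is a minimal illustration, assuming a simple export of user objects with last-logon timestamps; the account names and the 90-day threshold are hypothetical, not from the text:

```python
from datetime import datetime, timedelta

# Hypothetical export of AD user objects with last-logon timestamps.
accounts = [
    {"sam": "jsmith",  "last_logon": datetime(2013, 4, 10), "enabled": True},
    {"sam": "svc_old", "last_logon": datetime(2011, 6, 2),  "enabled": True},
    {"sam": "mjones",  "last_logon": datetime(2013, 4, 1),  "enabled": True},
]

def find_dormant(accounts, as_of, threshold_days=90):
    """Flag enabled accounts with no logon inside the threshold window."""
    cutoff = as_of - timedelta(days=threshold_days)
    return [a["sam"] for a in accounts
            if a["enabled"] and a["last_logon"] < cutoff]

print(find_dormant(accounts, as_of=datetime(2013, 4, 30)))  # ['svc_old']
```

In practice the timestamps would come from an AD export or collection tool, but the cleanup logic reduces to a threshold check like this one.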
Once the initial assessment and cleanup is complete, a gap-analysis should be performed to understand where the current state differs from the intended model. Where possible, the transformation should be automated. Security groups can be created, for example, based on historical user activity so that group membership is determined by actual need. This is a key requirement for numerous legal regulations.
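The idea of deriving group membership from historical activity might look roughly like the sketch below. The access-log format, share names, and `GRP_` naming convention are all illustrative assumptions:

```python
from collections import defaultdict

# Hypothetical access log: (user, resource) events collected during assessment.
events = [
    ("alice", "\\\\fs01\\finance"),
    ("bob",   "\\\\fs01\\finance"),
    ("alice", "\\\\fs01\\hr"),
]

def propose_groups(events, naming="GRP_{name}"):
    """Derive least-privilege group membership from observed activity."""
    members = defaultdict(set)
    for user, resource in events:
        share = resource.rsplit("\\", 1)[-1]  # e.g. 'finance'
        members[naming.format(name=share.upper())].add(user)
    return {group: sorted(users) for group, users in members.items()}

print(propose_groups(events))
# {'GRP_FINANCE': ['alice', 'bob'], 'GRP_HR': ['alice']}
```

Only users with observed activity land in each proposed group, which is what makes the resulting membership defensible as need-based.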
The next step is to perform a deep scan into the Active Directory forests and domains that will be consolidated and look at server-level permissions and infrastructure across Active Directory, File Systems, Security Policies, SharePoint, SQL Server, and more. This enables the creation of business rules that will transform existing effective permissions into the target model while adhering to new naming conventions and group utilization. Much of this transformation should be automated to avoid human error and reduce effort.
Maintaining a Clean Active Directory
Once the migration or consolidation project is complete and adherence to the intended security model has been enforced, it’s vital that a program is in place to maintain Active Directory in its current state. There are a few capabilities that can help achieve this goal.
First, a mandatory periodic audit should be enforced. Security Group owners should confirm that groups are being used as intended. Resource owners should confirm that the right people have the right level of access to their resources. Business managers should confirm that their people have access to the right resources. These reviews should be automated and tracked to ensure they are completed thoroughly and on time.
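One way such review tracking might be modeled is below; the owners, review items, and dates are hypothetical stand-ins for whatever an attestation system would store:

```python
from datetime import date

# Hypothetical attestation queue: owner, item under review, due date, status.
reviews = [
    {"owner": "cfo",    "item": "GRP_FINANCE membership",   "due": date(2013, 3, 1), "done": False},
    {"owner": "hr_mgr", "item": "\\\\fs01\\hr permissions", "due": date(2013, 6, 1), "done": False},
]

def overdue(reviews, today):
    """Return attestation reviews that are past due and incomplete."""
    return [r for r in reviews if not r["done"] and r["due"] < today]

for r in overdue(reviews, today=date(2013, 4, 1)):
    print(f"OVERDUE: {r['owner']} must review {r['item']}")
```

The tracking itself is simple; the value comes from enforcing it on a schedule and escalating anything that goes overdue.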
Second, tools should be implemented that provide visibility into the environment answering questions as they come up. When a security administrator needs to see how a user is being granted rights to something they should perhaps not have, they’ll need tools that provide answers in a timely fashion.
Third, a system-wide scan should be conducted regularly to identify any toxic or high-risk conditions that occur over time. For example, if a user account becomes dormant, notification should be sent out according to business rules. Or if a group is nested within itself perhaps ten layers deep, you want an automated solution to discover that condition and provide related reporting.
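The self-nesting condition mentioned above is essentially a cycle in the group-membership graph, and detecting it could look something like this sketch (the group names and nesting map are invented for illustration):

```python
# Hypothetical group-nesting map: group -> list of member groups.
nesting = {
    "GRP_A": ["GRP_B"],
    "GRP_B": ["GRP_C"],
    "GRP_C": ["GRP_A"],  # cycle: GRP_A is ultimately nested within itself
    "GRP_D": [],
}

def find_self_nesting(nesting):
    """Return groups reachable from themselves through nested membership."""
    def reachable_from_self(start):
        seen, stack = set(), list(nesting.get(start, []))
        while stack:
            group = stack.pop()
            if group == start:
                return True
            if group not in seen:
                seen.add(group)
                stack.extend(nesting.get(group, []))
        return False
    return sorted(g for g in nesting if reachable_from_self(g))

print(find_self_nesting(nesting))  # ['GRP_A', 'GRP_B', 'GRP_C']
```

A scan like this, run on a schedule against collected group data, is what turns "a group nested within itself ten layers deep" from an invisible condition into a report line.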
Finally, to ensure adherence to Active Directory security policies, a real-time monitoring solution should be put in place to enforce rules, prevent unwanted changes via event blocking, and to maintain an audit trail of critical administrative activity.
Complete visibility across the entire Active Directory infrastructure enables a clean AD domain consolidation while making life easier for administrators, improving security, and enabling adoption of new technologies.
About the Author
Matt Flynn has been in the Identity & Access Management space for more than a decade. He’s currently a Product Manager at STEALTHbits Technologies where he focuses on Data & Access Governance solutions for many of the world’s largest, most prestigious organizations. Prior to STEALTHbits, Matt held numerous positions at NetVision, RSA, MaXware, and Unisys where he was involved in virtually every aspect of identity-related projects from hands-on technical to strategic planning. In 2011, SYS-CON Media added Matt to their list of the most powerful voices in Information Security.
Active Directory Event Monitoring Challenges
Monitoring and reporting on Active Directory accounts, security groups, access rights, administrative changes, and user behavior can feel like a monumental task. Event monitoring requires an understanding of which events are critical, where those events occur, what factors might indicate increased risk, and what technologies are available to capture those events.
Understanding which events to ignore is as important as knowing which are critical to capture. You don't need immediate alerts on every AD User or Group change that takes place, but you want visibility into critical high-risk changes: Who is adding AD user accounts? ...adding a user to an administrative AD group? ...making Group Policy (GPO) changes?
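That ignore-versus-alert distinction reduces to a filtering policy. Here is a minimal sketch; the event record shape and the specific high-risk patterns are assumptions chosen for illustration:

```python
# Hypothetical policy: (event type, target) pairs that warrant an alert.
HIGH_RISK = {
    ("group_member_add", "Domain Admins"),
    ("gpo_modify", "Default Domain Policy"),
}

# Hypothetical stream of collected AD events.
events = [
    {"type": "group_member_add", "target": "Domain Admins",         "actor": "jsmith"},
    {"type": "group_member_add", "target": "Sales Team",            "actor": "mjones"},
    {"type": "gpo_modify",       "target": "Default Domain Policy", "actor": "admin2"},
]

def critical_events(events):
    """Keep only events matching the high-risk policy; ignore the rest."""
    return [e for e in events if (e["type"], e["target"]) in HIGH_RISK]

for e in critical_events(events):
    print(f"ALERT: {e['actor']} -> {e['type']} on {e['target']}")
```

The routine Sales Team change is silently dropped while the administrative-group and GPO changes surface, which is exactly the signal-to-noise trade-off described above.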
Active Directory administrators face a complex challenge that requires visibility into events as well as infrastructure to ensure proper system functionality. A complete AD monitoring solution doesn't stop at user and group changes. It also looks at Domain Controller status: which services are running, disk space issues, patch levels, and similar operational and infrastructure needs. There are numerous technical requirements to get that level of detail.
AD administrators require full access in the environment which presents another set of challenges. How do you enable administrators to do their job while controlling certain high-risk activity such as snooping on sensitive data or accidentally making GPO changes to important security policies? Monitoring Active Directory effectively includes either preventing unintended activities through change blocking or deterring activities through visible monitoring and alerting.
Monitoring Active Directory Effectively
Effective audit and monitoring solutions for Active Directory address the numerous challenges discussed above by providing a flexible platform that covers typical scenarios out-of-the-box without customization but also allows extensibility to accommodate the unique requirements of the environment.
Data collection is the cornerstone of any Active Directory monitoring and audit solution. Collection must be automated, reliable, and non-intrusive on the target environment. Data that can be collected remotely without agents should be. But, when requirements call for at-the-source monitoring, for example when you want to see WHO did it, what machine they came from, capture before-and-after values, or block certain activities, a real-time agent should be available to accommodate those needs. The data collection also needs to scale to the environment’s size and performance requirements.
Once data has been collected, both batch and real-time per-event analysis are required to meet common requirements. For example, you may want an alert on changes to administrative groups but you don’t want alerts on all group changes. Or you may want a report that highlights all empty groups or groups with improper nesting conditions. This analysis should provide intelligence out-of-the-box based on industry expertise and commonly requested reporting. But it should also enable unique business questions to be answered. Every organization uses Active Directory in unique ways and custom reporting is an extremely common requirement.
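As a sketch of the batch-analysis side, here is the empty-groups report mentioned above, run against an assumed snapshot of group memberships (the group names are invented):

```python
# Hypothetical snapshot: group name -> member list (users or nested groups).
groups = {
    "Domain Admins": ["admin1"],
    "Legacy App":    [],
    "Old Vendors":   [],
    "Sales":         ["alice", "bob"],
}

def empty_groups(groups):
    """Batch report: groups with no members are candidates for cleanup."""
    return sorted(name for name, members in groups.items() if not members)

print(empty_groups(groups))  # ['Legacy App', 'Old Vendors']
```

The same snapshot-plus-query pattern extends to the other canned reports (improper nesting, large groups, overlapping membership) by swapping in a different predicate.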
Finally, once data collection and analysis phases have been completed, AD monitoring solutions should provide a flexible reporting interface that provides access to the intelligence that has been cultivated. As with collection and analysis, the reporting functionality should include commonly requested reports with no customization but should also enable report customization and extensibility. Reporting should include web-accessible reports, search and filtering, access to the raw and post-analysis data, and email or other alerting.
An effective Active Directory monitoring solution provides deep insight on all things Active Directory. It should enable user, group and GPO change detection as well as reporting on anomalies and high-risk conditions. It should also provide deep analysis on users, groups, OUs, computer objects, and Active Directory infrastructure. Because the types of reports required by different teams (such as security and operations) may differ, it may be prudent to provide slightly different interfaces or report sets for the various intended audiences.
When real-time monitoring of Active Directory Users, Groups, OUs, and other changes (including activity blocking) are important, the solution should provide advanced filtering and response on nearly all Active Directory events as well as an audit trail of changes and attempts with all relevant information.
Benefits of Active Directory Monitoring
The three most common business drivers for Active Directory monitoring are improved security, improved audit response, and simplified administration. Active Directory audit and monitoring solutions make life easier for administrators while improving security across the network environment. This is especially important as AD becomes increasingly integrated into enterprise applications.
Some common use-cases include:
- Monitor Active Directory user accounts for create, modify and delete events. Capture the user account making the change along with the affected account information, changed attributes, time stamp, and more. This monitoring capability acts independently of the Security Event log and is non-repudiable.
- Monitor Active Directory group memberships and provide reports and/or alerts in real time when memberships change on important groups such as the Domain Admins group.
- Report on failed attempts in addition to successful attempts. Filter on specific types of events and ignore others.
- Report on Active Directory dormant accounts, empty groups, unused groups, large groups, and other high-risk conditions to empower administrators with actionable information.
- Automate event response based on policy with email alerts, remediation processes, or recording of the event to a file or database.
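The last use-case, policy-driven response, can be sketched as a small dispatch routine. The policy table, event shape, and return strings are illustrative assumptions, and the email branch is a stand-in rather than a real SMTP call:

```python
import json
import os
import tempfile

# Hypothetical response policy mapping event types to actions.
POLICY = {
    "group_member_add": "email",
    "user_delete":      "record",
}

def respond(event, log_path):
    """Dispatch a response for one event based on policy (sketch)."""
    action = POLICY.get(event["type"], "ignore")
    if action == "email":
        return f"email sent for {event['type']}"  # stand-in for a real alert
    if action == "record":
        with open(log_path, "a") as f:
            f.write(json.dumps(event) + "\n")
        return "recorded"
    return "ignored"

log = os.path.join(tempfile.gettempdir(), "ad_events.log")
print(respond({"type": "group_member_add", "target": "Domain Admins"}, log))
print(respond({"type": "user_delete", "target": "jsmith"}, log))
```

The point of the dispatch-table design is that responses are data, not code: changing how an event type is handled means editing the policy, not the monitoring logic.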
It’s no secret that over the past decade, Active Directory has grown out of control across many organizations. It’s partly due to organizational mergers or disparate Active Directory domains that sprouted up over time, but you may find yourself looking at dozens or even hundreds of Active Directory domains and realize that it's time to consolidate. And it probably feels overwhelming. But despite the effort in front of you, there’s an easy way and a right way.
Domain consolidation is not a simple task. Whether you're moving from one platform to another, trying to implement a new security model, or just consolidating domains for improved management and reduced cost, there are numerous steps, lots of unknowns and an overwhelming feeling that you might be missing something. Sound familiar?
According to Gartner analyst Andrew Walls, “The allure of a single AD forest with a simple domain design is not fool’s gold. There are real benefits to be found in a consolidated AD environment. A shared AD infrastructure enables user mobility, common user provisioning processes, consolidated reporting, unified management of machines, etc.”
Walls goes on to discuss the politics, cost justification, and complexity of these projects noting that “An AD consolidation has to unite and rationalize the ID formats, password policy objects, user groups, group policy objects, schema designs and application integration methods that have grown and spread through all of the existing AD environments. At times, this can feel like spring cleaning at the Augean stables. Of course, if you miss something, users will not be able to log in, or find their file shares, or access applications. No pressure.”
Walls offers advice on how to avoid some of the pain. “You fight proliferation of AD at every turn and realize that consolidation is not a onetime event. The optimal design for AD is a single domain within a single forest. Any deviation from this approach should be justified on the basis of operational requirements that a unified model cannot possibly support.”
What does this mean for you? Well, the most significant take-away from Walls’ advice is that it’s not a one-time event. AD unification is an ongoing effort. You don’t simply move objects from point A to point B and then pack it in for the day. The easy way fails to meet the core objectives of an improved security model, simplified management, reduced cost, and a common provisioning process (think integration with Identity Management solutions).
If you take everything from three source domains and simply move it all to a target domain, you haven’t achieved any of the objectives other than now having a single Active Directory. There’s a good chance that your security model will remain fragmented, management will become more difficult, and your user provisioning processes will require additional logic to accommodate the new mess. On a positive note, if this model is your intent, there are numerous solutions on the market that will help.
STEALTHbits, of course, embraces the right way. “Control through Visibility” is about improving your security posture and your ability to manage IT by increasing your visibility into the critical infrastructure.
If you'd like to learn more about the solution, you can start by reading the rest of this blog entry or contact STEALTHbits.
Visit the STEALTHbits site for information on Access Governance related to unstructured data and to track down the paper on cost justification.