Category Archives: Information Security

Women in Information Security: Theresa Payton

Last time, I had fun speaking with my friend, red team-minded student/teacher Alana Staszczyszyn. This time, I had the privilege of speaking with cybersecurity and intelligence industry veteran Theresa Payton. She’s always had tons of responsibility. She went from the White House to start her own private sector firm, Fortalice Solutions. Kim Crawley: Hi, Theresa! […]… Read More

The post Women in Information Security: Theresa Payton appeared first on The State of Security.

Top tips for securing your financial institution

For financial business leaders, it’s important to understand that cyber-security is now a key business enabler, rather than simply a part of the overall IT strategy. As anyone working in the

The post Top tips for securing your financial institution appeared first on The Cyber Security Place.

Matt Flynn: Information Security | Identity & Access Mgmt.: Improve Security by Thinking Beyond the Security Realm

It used to be that dairy farmers relied on whatever was growing in the area to feed their cattle. They filled the trough with vegetation grown right on the farm. They probably relied heavily on whatever grasses grew naturally and perhaps added some high-value grains like barley and corn. Today, with better technology and knowledge, dairy farmers work with nutritionists to develop a personalized concentrate of carbohydrates, proteins, fats, minerals, and vitamins that gets added to the natural feed. The result is much healthier cattle and more predictable growth.

We’re going through a similar enlightenment in the security space. To get the best results, we need to fill the trough that our Machine Learning will eat from with high-value data feeds from our existing security products (whatever happens to be growing in the area) but also (and more precisely for this discussion) from beyond what we typically consider security products to be.

In this post to the Oracle Security blog, I make the case that "we shouldn’t limit our security data to what has traditionally been in-scope for security discussions" and how understanding Application Topology (and feeding that knowledge into the security trough) can help reduce risk and improve security.

Click to read the full article: Improve Security by Thinking Beyond the Security Realm

The Language and Nature of Fileless Attacks Over Time

The language of cybersecurity evolves in step with attack and defense tactics. You can get a sense for such dynamics by examining the term fileless. It fascinates me not only because of its relevance to malware, but also because of its knack for agitating many security practitioners.

I traced the origins of “fileless” to 2001, when Eugene Kaspersky (of Kaspersky Labs) used it in reference to the Code Red worm’s ability to exist solely in memory. Two years later, Peter Szor defined this term in a patent for Symantec, explaining that this form of malware doesn’t reside in a file, but instead “appends itself to an active process in memory.”

Eugene was prophetic in predicting that fileless malware “will become one of the most widespread forms of malicious programs” due to antivirus’ ineffectiveness against such threats. When I look at the ways in which malware bypasses detection nowadays (see below), the evasion techniques often fall under the fileless umbrella, though the term expanded beyond its original meaning.

Fileless was synonymous with in-memory until around 2014.

Purely in-memory malware disappears once the system restarts. In 2014, Kevin Gossett’s Symantec article explained how Poweliks malware overcame this limitation by using legitimate Windows programs rundll32.exe and powershell.exe to maintain persistence, extracting and executing malicious scripts from the registry. Kevin described this threat as “fileless,” because it avoided placing code directly on the file system. Paul Rascagnères at G Data mentioned that Poweliks infected systems by using a booby-trapped Microsoft Word document.
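Registry-resident persistence of this kind is also something defenders can hunt for directly. Below is a minimal defensive sketch, assuming a Windows host and Python's standard-library winreg module; the Run-key paths and the keyword list are illustrative assumptions rather than a complete detection rule.

    # Flag autorun (Run key) entries that launch script hosts or rundll32 --
    # the style of registry persistence Poweliks popularized. Windows-only.
    import winreg

    RUN_KEYS = [
        (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Run"),
        (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    ]
    SUSPICIOUS_HOSTS = ("rundll32", "powershell", "mshta", "wscript", "cscript")

    def suspicious_autoruns():
        findings = []
        for hive, path in RUN_KEYS:
            try:
                with winreg.OpenKey(hive, path) as key:
                    value_count = winreg.QueryInfoKey(key)[1]  # number of values under the key
                    for i in range(value_count):
                        name, value, _type = winreg.EnumValue(key, i)
                        if any(host in str(value).lower() for host in SUSPICIOUS_HOSTS):
                            findings.append((path, name, value))
            except OSError:
                continue  # key missing or inaccessible
        return findings

    for path, name, value in suspicious_autoruns():
        print(f"[!] {path}\\{name}: {value}")

A hit is not proof of infection, of course; legitimate software occasionally registers script-based autoruns, so the output is a starting point for triage rather than a verdict.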

The Poweliks discussion, and similar malware that appeared afterwards, set the tone for the way fileless attacks are described today. Yes, fileless attacks strive to maintain clearly malicious code solely or mostly in memory. They also tend to involve malicious documents and scripts. They often misuse utilities built into the operating system and abuse various Windows capabilities, such as the registry, to maintain persistence.

However, the growing ambiguity behind the modern use of the term fileless is making it increasingly difficult to understand what specific methods malware uses for evasion. It’s time to disambiguate this word to hold fruitful conversations regarding our ability to defend against its underlying tactics.

Here’s my perspective on the methods that comprise modern fileless attacks:

  • Malicious Documents: They can act as flexible containers for other files. Documents can also carry exploits that execute malicious code. They can execute malicious logic that begins the infection and initiates the next link in the infection chain.
  • Malicious Scripts: They can interact with the OS without the restrictions that some applications, such as web browsers, might impose. They are harder for anti-malware tools to detect and control than compiled executables. They also offer an opportunity to split malicious logic across several processes.
  • Living Off the Land: Microsoft Windows includes numerous utilities that attackers can use to execute malicious code with the help of a trusted process. They allow adversaries to “trampoline” from one stage of the attack to another without relying on compiled malicious executables.
  • Malicious Code in Memory: Memory injection can abuse features of Microsoft Windows to interact with the OS even without exploiting vulnerabilities. Attackers can wrap compiled executables into scripts, extracting payload into memory during runtime.

While some attacks and malware families are fileless in all aspects of their operation, most modern malware that evades detection nowadays includes at least some fileless capabilities. Such techniques allow malware to operate in the peripheral vision of anti-malware software. The success of such attack methods is the reason for the continued use of the term in discussions among security vendors and enterprises.

Language evolves as people adjust the way they use words and the meaning they assign to them. This certainly happened to fileless, as the industry looked for ways to discuss attacks that avoided the file system and misused OS features to evade detection. For a deeper dive into this topic, read the following three articles upon which I based this overview:

What is Blockchain? Everything you need to know

Like much of the technology world, cryptocurrencies such as Bitcoin still rely on some form of database that is able to track large volumes of transactions and keep them secure. The

The post What is Blockchain? Everything you need to know appeared first on The Cyber Security Place.

California Enacts Blockchain Legislation

As reported on the Blockchain Legal Resource, California Governor Jerry Brown recently signed into law Assembly Bill No. 2658 for the purpose of further studying blockchain’s application to Californians. In doing so, California joins a growing list of states officially exploring distributed ledger technology.

Specifically, the law requires the Secretary of the Government Operations Agency to convene a blockchain working group prior to July 1, 2019. Under the new law, “blockchain” means “a mathematically secured, chronological and decentralized ledger or database.” In addition to including various representatives from state government, the working group is required to include appointees from the technology industry and non-technology industries, as well as appointees with backgrounds in law, privacy and consumer protection.

Under the new law, which has a sunset date of January 1, 2022, the working group is required to evaluate:

  • the uses of blockchain in state government and California-based businesses;
  • the risks, including privacy risks, associated with the use of blockchain by state government and California-based businesses;
  • the benefits associated with the use of blockchain by state government and California-based businesses;
  • the legal implications associated with the use of blockchain by state government and California-based businesses; and
  • the best practices for enabling blockchain technology to benefit the State of California, California-based businesses and California residents.

In doing so, the working group is required to seek “input from a broad range of stakeholders with a diverse range of interests affected by state policies governing emerging technologies, privacy, business, the courts, the legal community and state government.”

The working group is also tasked with delivering a report to the California Legislature by January 1, 2020, on the potential uses, risks and benefits of blockchain technology by state government and California businesses. Moreover, the report is required to include recommendations for amending relevant provisions of California law that may be impacted by the deployment of blockchain technology.

Infosecurity.US: Too Busy, Don’t Care; So Sorry, Not Sorry

via Lawrence Abrams, writing at Bleeping Computer, comes news of the most recent Attorneys General - The Gathering, coalescing into a brilliant coterie of top Law Enforcement Officials for their individual States, in which Mesdames et Messieurs Procureurs Généraux demand Something Be Done about Robo-Calls (certainly the 1st, 2nd and perhaps 3rd World Scourge of Telecommunications) in a missive to the Federal Communications Commission (FCC).

Now, whilst I do enthusiastically laud the Advocatus Generalis' cumulative effort to stem the tide of robotic-calling systems, that enthusiasm is tempered by the herculean proposition of making such a request of the FCC, as Chairman Pai of the Commission is far too busy casting his Reese's Peanut Butter Cup soaked visage toward former employer Verizon and the other telcos' interests, rather than the People's Business.

Image Credit: TransNexus

“As these illegal telemarketing scams are estimated to have stolen 9.5 billion dollars from consumers, the letter urges the FCC to push for new protocols that can further help to battle these scams. These protocols are STIR (Secure Telephone Identity Revisited) and SHAKEN (Secure Handling of Asserted information using toKENs) and can be used by telephone providers to identify legitimate calls and those from bad actors..." - via Lawrence Abrams, writing at Bleeping Computer
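For the technically curious: STIR/SHAKEN boils down to the originating carrier signing a small token (a PASSporT, RFC 8225) that asserts the calling number, and the terminating carrier verifying that signature before completing the call. The sketch below is a toy verifier, assuming the PyJWT library (pip install pyjwt[crypto]) and a provider public key already retrieved out of band; the claim names are the standard ones, but the key handling is simplified for illustration.

    # Verify a PASSporT-style call-identity token and print its claims.
    import jwt  # PyJWT

    def verify_passport(token: str, provider_public_key_pem: str) -> dict:
        """Check the ES256 signature and return the identity claims."""
        claims = jwt.decode(token, provider_public_key_pem, algorithms=["ES256"])
        # Typical claims: originating number (orig), destination number(s) (dest),
        # and an issued-at timestamp (iat) used to reject stale assertions.
        print(claims.get("orig"), claims.get("dest"), claims.get("iat"))
        return claims

In a real deployment the verifier also fetches the signer's certificate from the token's x5u URL and checks it against the SHAKEN certificate chain, steps omitted here.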




Infosecurity.US: El Cubano de Googlery

Otherwise known as a 'Memorandum of Understanding', and considering no information is available on what Google Inc. (Nasdaq: GOOG) agreed to 'understand' in their 'Memorandum of Understanding' with the Republic of Cuba and the ruling party - the Cuban Communists...

Let's just say the deal between Sundar Pichai (you know Sundar - he's the decision-maker who decided - with his decision-making-super-decision-powers to not tell Google+ users (also known as 'the Product') that Google, Inc. lost their data due to flawed code-smithing in an API...) and the Cuban Communist Party is a might murky down Havana Way...



Infosecurity.US

The State of Security: Women in Information Security: Alana Staszczyszyn

Last time, I had the privilege of speaking with web security specialist Pam Armstrong. This time I got to chat with Alana Staszczyszyn, someone whom I’ve had the pleasure of meeting in person. She’s very active in Toronto’s cybersecurity scene. She’s currently a student, but she has so much to teach people in our industry […]… Read More

The post Women in Information Security: Alana Staszczyszyn appeared first on The State of Security.

Vizio Agrees to $17M Settlement to Resolve Smart TV Class Action Suit

Vizio, Inc. (“Vizio”), a California-based company best known for its internet-connected televisions, agreed to a $17 million settlement that, if approved, will resolve multiple proposed consumer class actions consolidated in California federal court. The suits’ claims, which are limited to the period between February 1, 2014 and February 6, 2017, involve data-tracking software Vizio installed on its smart TVs. The software allegedly identified content displayed on Vizio TVs and enabled Vizio to determine the date, time and channel of programs, and whether a viewer watched live or recorded content. The viewing patterns were connected to viewers’ IP addresses, though never, Vizio emphasized in its press release announcing the proposed settlement, to an individual’s name, address, or similar identifying information. According to Vizio, viewing data allows advertisers and programmers to develop content better aligned with consumers’ preferences and interests.

Among other claims, the suits allege that Vizio failed to adequately disclose its surveillance practices and obtain consumers’ express consent before collecting the information. The various suits, some of which were filed in 2015, were consolidated in California’s Central District in April 2016 and subsequently survived Vizio’s motion to dismiss. Vizio had argued that several of the claims were deficient, and contended that the injunctive relief claims were moot in light of a February 2017 consent decree resolving the Federal Trade Commission’s (“FTC”) complaint over Vizio’s collection and use of viewing data and other information. To settle the FTC case, Vizio agreed, among other things, to stop unauthorized tracking, to prominently disclose its TV viewing collection practices and to get consumers’ express consent before collecting and sharing viewing information.

The parties notified the district court in June that they had struck a settlement in principle. On October 4, 2018, they jointly moved for preliminary settlement approval. Counsel for the consumers argued that the deal is fair because revenue that Vizio obtained from sharing consumers’ data will be fully disgorged, and class members who submit a claim will receive an estimated $13 to $31 each, based on a 2 to 5 percent claims rate. Vizio also agreed to provide non-monetary relief, including revised on-screen disclosures concerning its viewing data practices and deletion of all viewing data collected prior to February 6, 2017. The relief is contingent on the court’s approval of the settlement.

NIST Seeks Public Comment on Managing Internet of Things Cybersecurity and Privacy Risks

The U.S. Department of Commerce’s National Institute of Standards and Technology recently announced that it is seeking public comment on Draft NISTIR 8228, Considerations for Managing Internet of Things (“IoT”) Cybersecurity and Privacy Risks (the “Draft Report”). The document is to be the first in a planned series of publications that will examine specific aspects of the IoT topic.

The Draft Report is designed “to help federal agencies and other organizations better understand and manage the cybersecurity and privacy risks associated with their IoT devices throughout their lifecycles.” According to the Draft Report, “[m]any organizations are not necessarily aware they are using a large number of IoT devices. It is important that organizations understand their use of IoT because many IoT devices affect cybersecurity and privacy risks differently than conventional IT devices do.”

The Draft Report identifies three high-level considerations with respect to the management of cybersecurity and privacy risks for IoT devices as compared to conventional IT devices: (1) many IoT devices interact with the physical world in ways conventional IT devices usually do not; (2) many IoT devices cannot be accessed, managed or monitored in the same ways conventional IT devices can; and (3) the availability, efficiency and effectiveness of cybersecurity and privacy capabilities are often different for IoT devices than conventional IT devices. The Draft Report also identifies three high-level risk mitigation goals: (1) protect device security; (2) protect data security; and (3) protect individuals’ privacy.

In order to address those considerations and risk mitigation goals, the Draft Report provides the following recommendations:

  • Understand the IoT device risk considerations and the challenges they may cause to mitigating cybersecurity and privacy risks for devices in the appropriate risk mitigation areas.
  • Adjust organizational policies and processes to address the cybersecurity and privacy risk mitigation challenges throughout the IoT device lifecycle.
  • Implement updated mitigation practices for the organization’s IoT devices as you would any other changes to practices.

Comments are due by October 24, 2018.

Women in Information Security: Pam Armstrong

Last time, I spoke with Sharka. She’s a pentester who knows how to hack a glucose meter. She also taught me a few things about physical security. Now I get to talk with Pam Armstrong. Web development eventually led her to healthcare security. Kim Crawley: Please tell me about what you do. Pam Armstrong: I […]… Read More

The post Women in Information Security: Pam Armstrong appeared first on The State of Security.

CNIL Publishes Initial Assessment on Blockchain and GDPR

Recently, the French Data Protection Authority (“CNIL”) published its initial assessment of the compatibility of blockchain technology with the EU General Data Protection Regulation (GDPR) and proposed concrete solutions for organizations wishing to use blockchain technology when implementing data processing activities.

What is a Blockchain?

A blockchain is a database in which data is stored and distributed over a large number of computers, and in which all entries (called “transactions”) are visible to all users of the blockchain. It is a technology that can be used to process personal data and is not a processing activity in itself.
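To make that definition concrete, here is a minimal sketch of the underlying data structure: a hash-linked ledger in which each block commits to its predecessor, so tampering with an earlier entry invalidates everything after it. It is purely illustrative and omits the consensus, signatures and peer-to-peer distribution that real blockchains add.

    import hashlib
    import json
    import time

    def block_hash(block):
        # Hash everything except the stored hash itself.
        content = {k: v for k, v in block.items() if k != "hash"}
        return hashlib.sha256(json.dumps(content, sort_keys=True).encode()).hexdigest()

    def make_block(transactions, previous_hash):
        block = {"timestamp": time.time(), "transactions": transactions, "previous_hash": previous_hash}
        block["hash"] = block_hash(block)
        return block

    def chain_is_valid(chain):
        return all(
            curr["previous_hash"] == prev["hash"] and curr["hash"] == block_hash(curr)
            for prev, curr in zip(chain, chain[1:])
        )

    chain = [make_block(["genesis entry"], previous_hash="0" * 64)]
    chain.append(make_block(["alice pays bob 5"], previous_hash=chain[-1]["hash"]))
    print(chain_is_valid(chain))  # True until any earlier block is altered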

Scope of the CNIL’s Assessment

The CNIL made it clear that its assessment does not apply to (1) distributed ledger technology (DLT) solutions and (2) private blockchains.

  • DLT solutions are not blockchains and are too recent and rare to allow the CNIL to carry out a generic analysis.
  • Private blockchains are defined by the CNIL as blockchains under the control of a party that has sole control over who can join the network and who can participate in the consensus process of the blockchain (i.e., the process for determining which blocks get added to the chain and what the current state is). These private blockchains are simply classic distributed databases. They do not raise specific GDPR compliance issues, unlike public blockchains (i.e., blockchains that anyone in the world can read or send transactions to, and expect to see included if valid, and anyone in the world can participate in the consensus process) and consortium blockchains (i.e., blockchains subject to rules that define who can participate in the consensus process or even conduct transactions).

In its assessment, the CNIL first examined the role of the actors in a blockchain network as a data controller or data processor. The CNIL then issued recommendations to minimize privacy risks to individuals (data subjects) when their personal data is processed using blockchain technology. In addition, the CNIL examined solutions to enable data subjects to exercise their data protection rights. Lastly, the CNIL discussed the security requirements that apply to blockchain.

Role of Actors in a Blockchain Network

The CNIL made a distinction between the participants who have permission to write on the chain (called “participants”) and those who validate a transaction and create blocks by applying the blockchain’s rules so that the blocks are “accepted” by the community (called “miners”). According to the CNIL, the participants, who decide to submit data for validation by miners, act as data controllers when (1) the participant is an individual and the data processing is not purely personal but is linked to a professional or commercial activity; or (2) the participant is a legal person and enters data into the blockchain.

If a group of participants decides to implement a processing activity on a blockchain for a common purpose, the participants should identify the data controller upstream, e.g., by (1) creating an entity and appointing that entity as the data controller, or (2) appointing the participant who takes the decisions for the group as the data controller. Otherwise, they could all be considered as joint data controllers.

According to the CNIL, data processors within the meaning of the GDPR may be (1) smart contract developers who process personal data on behalf of the participant – the data controller, or (2) miners who validate the recording of the personal data in the blockchain. The qualification of miners as data processors may raise practical difficulties in the context of public blockchains, since that qualification requires miners to execute with the data controller a contract that contains all the elements provided for in Article 28 of the GDPR. The CNIL announced that it was currently conducting an in-depth reflection on this issue. In the meantime, the CNIL encouraged actors to use innovative solutions enabling them to ensure compliance with the obligations imposed on the data processor by the GDPR.

How to Minimize Risks To Data Subjects

  • Assessing the appropriateness of using blockchain

As part of the Privacy by Design requirements under the GDPR, data controllers must consider in advance whether blockchain technology is appropriate to implement their data processing activities. Blockchain technology is not necessarily the most appropriate technology for all processing of personal data, and may cause difficulties for the data controller to ensure compliance with the GDPR, and in particular, its cross-border data transfer restrictions. In the CNIL’s view, if the blockchain’s properties are not necessary to achieve the purpose of the processing, data controllers should give priority to other solutions that allow full compliance with the GDPR.

If it is appropriate to use blockchain technology, data controllers should use a consortium blockchain that ensures better control of the governance of personal data, in particular with respect to data transfers outside of the EU. According to the CNIL, the existing data transfer mechanisms (such as Binding Corporate Rules or Standard Contractual Clauses) are fully applicable to consortium blockchains and may be implemented easily in that context, while it is more difficult to use these data transfer mechanisms in a public blockchain.

  • Choosing the right format under which the data will be recorded

As part of the data minimization requirement under the GDPR, data controllers must ensure that the data is adequate, relevant and limited to what is necessary in relation to the purposes for which the data is processed.

In this respect, the CNIL recalled that the blockchain may contain two main categories of personal data, namely (1) the credentials of participants and miners and (2) additional data entered into a transaction (e.g., diploma, ownership title, etc.) that may relate to individuals other than the participants and miners.

The CNIL noted that it was not possible to further minimize the credentials of participants and miners since such credentials are essential to the proper functioning of the blockchain. According to the CNIL, the retention period of this data must necessarily correspond to the lifetime of the blockchain.

With respect to additional data, the CNIL recommended using solutions in which (1) data in cleartext form is stored outside of the blockchain and (2) only information proving the existence of the data is stored on the blockchain (i.e., cryptographic commitment, fingerprint of the data obtained by using a keyed hash function, etc.).

In situations in which none of these solutions can be implemented, and when this is justified by the purpose of the processing and the data protection impact assessment revealed that residual risks are acceptable, the data could be stored either with a non-keyed hash function or, in the absence of alternatives, “in the clear.”
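As a rough illustration of the pattern the CNIL prefers, the sketch below keeps the cleartext in an ordinary off-chain store and writes only a keyed-hash (HMAC) fingerprint to the chain. The two in-memory dictionaries stand in for a real database and a real ledger and are assumptions made purely for the example.

    import hashlib
    import hmac
    import secrets

    off_chain_store = {}   # record_id -> cleartext personal data and key (deletable)
    on_chain_ledger = []   # append-only fingerprints (treated as immutable)

    def record(record_id: str, personal_data: bytes) -> None:
        key = secrets.token_bytes(32)  # per-record secret, kept off-chain
        fingerprint = hmac.new(key, personal_data, hashlib.sha256).hexdigest()
        off_chain_store[record_id] = {"data": personal_data, "key": key}
        on_chain_ledger.append({"record_id": record_id, "fingerprint": fingerprint})

    def erase(record_id: str) -> None:
        # "Erasure" in the CNIL's sense: destroying the off-chain data and key
        # leaves only an unusable fingerprint on the immutable chain.
        off_chain_store.pop(record_id, None)

    record("subject-42", b"Jane Doe, 12 rue Example, Paris")
    erase("subject-42")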

How to Ensure that Data Subjects Can Effectively Exercise Their Data Protection Rights

According to the CNIL, the exercise of the right to information, the right of access and the right to data portability does not raise any particular difficulties in the context of blockchain technology (i.e., data controllers may provide notice of the data processing and may respond to data subjects’ requests for access to their personal data or for data portability).

However, the CNIL recognized that it is technically impossible for data controllers to meet data subjects’ requests for erasure of their personal data when the data is entered into the blockchain: once in the blockchain system, the data can no longer be rectified or erased.

In this respect, the CNIL pointed out that technical solutions exist to move towards compliance with the GDPR. This is the case if the data is stored on the blockchain using a cryptographic method (see above). In this case, the deletion of (1) the data stored outside of the blockchain and (2) the verification elements stored on the blockchain, would render the data almost inaccessible.

With respect to the right to rectification of personal data, the CNIL recommended that the data controller enter the updated data into a new block, since a subsequent transaction may cancel the first transaction even though the first transaction will still appear in the chain. The same solutions as those applicable to requests for erasure could be applied to inaccurate data if that data must be erased.

Security Requirements

The CNIL considered that the security requirements under the GDPR remain fully applicable in the blockchain.

Next Steps

In the CNIL’s view, the challenges posed by blockchain technology call for a response at the European level. The CNIL announced that it will cooperate with other EU supervisory authorities to propose a robust and harmonized approach to blockchain technology.

CVE Funding and Process

I had not seen this interesting letter (August 27, 2018) from the House Energy and Commerce Committee to DHS about the nature of funding and support for the CVE.

This is the sort of thoughtful work that we hope and expect government departments to do, and kudos to everyone involved in thinking about how CVE should be nurtured and maintained.

California Enacts New Requirements for Internet of Things Manufacturers

On September 28, 2018, California Governor Jerry Brown signed into law two identical bills regulating Internet-connected devices sold in California. S.B. 327 and A.B. 1906 (the “Bills”), aimed at the “Internet of Things,” require that manufacturers of connected devices—devices which are “capable of connecting to the Internet, directly or indirectly,” and are assigned an Internet Protocol or Bluetooth address, such as Nest’s thermostat—outfit the products with “reasonable” security features by January 1, 2020; or, in the bills’ words: “equip [a] device with a reasonable security feature or features that are appropriate to the nature and function of the device, appropriate to the information it may collect, contain, or transmit, and designed to protect the device and any information contained therein from unauthorized access, destruction, use, modification, or disclosure[.]”

According to Bloomberg Law, the Bills’ non-specificity regarding what “reasonable” features include is intentional; it is up to the manufacturers to decide what steps to take. Manufacturers argue that the Bills are egregiously vague, and do not apply to companies that import and resell connected devices made in other countries under their own labels.

The Bills are opposed by the Custom Electronic Design & Installation Association, Entertainment Software Association and National Electrical Manufacturers Association. They are sponsored by Common Sense Kids Action; supporters include the Consumer Federation of America, Electronic Frontier Foundation and Privacy Rights Clearinghouse.

Top Skills Next-level InfoSec Pros Should Have

In today’s digital world, infosec professionals need to be on top of their skills to protect themselves and their company’s assets online. There are numerous training paths and a large series of practical training courses that can teach you the right skills and hands-on know-how, but whether you are on the defensive or offensive side, here are the top skills next-level infosec pros should have:

1. Threat Analysis & Classification

✔ WHAT IS IT?

Threat analysis & classification is the ongoing practice of detecting and categorizing potential attacks in order to protect sensitive corporate data.

✔ WHY IS IT IMPORTANT?

Data analysis and classification is the first step toward building a secure organization. Data should be labeled according to its sensitivity, using categories such as ‘Public’, ‘Internal’, ‘Confidential’ or ‘Top Secret’. High-risk data, typically classified ‘Confidential’ or above, requires a greater level of protection, while lower-risk data, such as that labeled ‘Internal’, requires proportionately less protection.
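Classification only pays off when each label maps to concrete handling requirements. The sketch below shows one way to encode such a mapping; the tiers mirror the labels above, but the specific controls are illustrative assumptions rather than any standard.

    from enum import IntEnum

    class Sensitivity(IntEnum):
        PUBLIC = 0
        INTERNAL = 1
        CONFIDENTIAL = 2
        TOP_SECRET = 3

    # Illustrative minimum controls per label -- assumptions, not a standard.
    REQUIRED_CONTROLS = {
        Sensitivity.PUBLIC:       {"encrypt_at_rest": False, "access_review_days": 365},
        Sensitivity.INTERNAL:     {"encrypt_at_rest": False, "access_review_days": 180},
        Sensitivity.CONFIDENTIAL: {"encrypt_at_rest": True,  "access_review_days": 90},
        Sensitivity.TOP_SECRET:   {"encrypt_at_rest": True,  "access_review_days": 30},
    }

    def controls_for(label: Sensitivity) -> dict:
        """Minimum handling requirements for a classification label."""
        return REQUIRED_CONTROLS[label]

    print(controls_for(Sensitivity.CONFIDENTIAL))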

2. Auditing & Penetration Testing

✔ WHAT IS IT?

A penetration test (or pentest) evaluates the security of assets by running a series of planned attacks with the goal of finding and exploiting vulnerabilities. Auditing is a systematic, measurable technical assessment of how the organization’s security policy is employed.

✔ WHY IS IT IMPORTANT?

With cyber attacks happening every single day, it is more important than ever to undertake regular pentests to identify your company’s vulnerabilities and verify that your defenses hold up. Security audits are also important because they help you make more informed decisions on how to allocate budgets and resources to better manage risk.
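As a small taste of the discovery phase of a pentest, here is a minimal TCP connect scan. The target is a reserved documentation address and the port list is arbitrary; a scan like this should only ever be pointed at systems you are explicitly authorized to test.

    import socket

    TARGET = "198.51.100.10"            # RFC 5737 documentation address, a placeholder
    PORTS = [22, 80, 443, 3389, 8080]

    for port in PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(1.0)
            is_open = sock.connect_ex((TARGET, port)) == 0  # 0 means the connection succeeded
            print(f"{TARGET}:{port} {'open' if is_open else 'closed/filtered'}")

Real engagements use far more capable tooling, but every port scanner ultimately reduces to this kind of probe.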

3. Vulnerability Assessment

✔ WHAT IS IT?

A vulnerability assessment is aimed at building a list of all the vulnerabilities present on a target’s systems.

✔ WHY IS IT IMPORTANT?

A thorough vulnerability assessment will provide you and your organization with the knowledge, awareness, and background necessary to understand potential threats and react accordingly.

4. Reverse Engineering

✔ WHAT IS IT?

Reverse-engineering is the process of taking software or hardware and understanding its functions and information flow, through detailed analysis of a human-readable representation.

✔ WHY IS IT IMPORTANT?

Through reverse-engineering, IT security professionals can analyze critical software or hardware components/functions that would be otherwise invisible and that could lead to the identification of vulnerabilities. In addition, reverse-engineering is a critical skill to have in order to perform in-depth debugging.
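For a feel of the first step in reverse engineering a compiled component, the sketch below disassembles a few x86-64 bytes into a readable listing, assuming the Capstone engine's Python bindings are installed; the byte string is simply a hand-picked function prologue used for illustration.

    from capstone import Cs, CS_ARCH_X86, CS_MODE_64

    code = b"\x55\x48\x89\xe5\x5d\xc3"   # push rbp; mov rbp, rsp; pop rbp; ret
    md = Cs(CS_ARCH_X86, CS_MODE_64)

    for insn in md.disasm(code, 0x1000):  # 0x1000 is an arbitrary load address
        print(f"0x{insn.address:x}\t{insn.mnemonic}\t{insn.op_str}")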

5. Risk Assessment

✔ WHAT IS IT?

A risk assessment identifies the gaps in an organization’s critical risk areas and determines actions to close those gaps.

✔ WHY IS IT IMPORTANT?

Risk assessments are very important as they help to create awareness and identify who and/or what may be at risk in your organization.

Aspiring to build your career as a professional pentester? Download our free whitepaper “How to Become a Penetration Tester”
GET FREE WHITEPAPER

Sources: Intigrow | Mass.Gov

Connect with us on Social Media:

Twitter | Facebook | LinkedIn | Instagram

CIPL Submits Comments on Draft Indian Data Protection Bill

On September 26, 2018, the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth LLP submitted formal comments to the Indian Ministry of Electronics and Information Technology on the draft Indian Data Protection Bill 2018 (“Draft Bill”).

CIPL’s comments on the Draft Bill focus on several key issues that are of particular importance for any modern-day data protection law, including increased emphasis on accountability and the risk-based approach to data processing, interoperability with other data protection laws globally, the significance of having a variety of legal bases for processing and not overly relying on consent, the need for extensive and flexible data transfer mechanisms, and the importance of maximizing the effectiveness of the data protection authority.

Specifically, the comments address the following key issues:

  • the Draft Bill’s extraterritorial scope;
  • the standard for anonymization;
  • notice requirements;
  • accountability and the risk-based approach;
  • legal bases for processing, including importance of the reasonable purposes ground;
  • sensitive personal data;
  • children’s data;
  • individual rights;
  • data breach notification;
  • Data Protection Impact Assessments;
  • record-keeping requirements and data audits;
  • Data Protection Officers;
  • the adverse effects of a data localization requirement;
  • cross-border transfers;
  • codes of practice; and
  • the timeline for adoption.

These comments were formed as part of CIPL’s ongoing engagement in India. In January 2018, CIPL responded to the Indian Ministry of Electronics and Information Technology’s public consultation on the White Paper of the Committee of Experts on a Data Protection Framework for India.

SIEM Guide: A Comprehensive View of Security Information and Event Management Tools

A security information and event management system, or SIEM (pronounced “SIM”), is a security system that ingests event data from a wide variety of sources such as security software and

The post SIEM Guide: A Comprehensive View of Security Information and Event Management Tools appeared first on The Cyber Security Place.

How To Spot A Phishing Email

Phishing emails are the most common way for criminals to distribute malicious software or obtain sensitive information. Is your business safe from these attacks? We have some tips that you

The post How To Spot A Phishing Email appeared first on The Cyber Security Place.

Women in Information Security: Sharka

Due to popular demand, my women in information security interview series is back for autumn! This marks the second anniversary since I started. Some of my subjects in this round have been waiting since last spring, so getting to chat with them has been long overdue. Let’s start with Sharka, a penetration tester who is […]… Read More

The post Women in Information Security: Sharka appeared first on The State of Security.

No Time for Complacency: Watch Your Back on Biometrics, Compliance, and Insider Threats

Over the coming years, the very foundations of today’s digital world will shake – violently. Innovative and determined attackers, along with seismic changes to the way organizations conduct their operations,

The post No Time for Complacency: Watch Your Back on Biometrics, Compliance, and Insider Threats appeared first on The Cyber Security Place.

Canadian Regulator Seeks Public Comment on Breach Reporting Guidance

As reported in BNA Privacy Law Watch, the Office of the Privacy Commissioner of Canada (the “OPC”) is seeking public comment on recently released guidance (the “Guidance”) intended to assist organizations with understanding their obligations under the federal breach notification mandate, which will take effect in Canada on November 1, 2018. 

Breach notification in Canada has historically been governed at the provincial level, with only Alberta requiring omnibus breach notification. As we previously reported, effective November 1, organizations subject to the federal Personal Information Protection and Electronic Documents Act (“PIPEDA”) will be required to notify affected individuals and the OPC of security breaches involving personal information “that pose a real risk of significant harm to individuals.” The Guidance, which is structured in a question-and-answer format, is intended to assist companies with complying with the new reporting obligation. The Guidance describes, among other information, (1) who is responsible for reporting a breach, (2) what types of incidents must be reported, (3) how to determine whether there is a “real risk of significant harm,” (4) what information must be included in a notification to the OPC and affected individuals, and (5) an organization’s recordkeeping requirements with respect to breaches of personal information, irrespective of whether such breaches are notifiable. The Guidance also contains a proposed breach reporting form for notifying the OPC pursuant to the new notification obligation.

The OPC is accepting public comment on the Guidance, including on the proposed breach reporting form. The deadline for interested parties to submit comments is October 2, 2018.

Keeping Your Personal Information Safe

By Vasilii Chekalov, EveryCloud. According to a study by Statista in March of 2018, 63% of respondents expressed concern that they would be hacked in the next five years. 60%

The post Keeping Your Personal Information Safe appeared first on The Cyber Security Place.

Infosecurity.US: Gerhard Jacobs’ ‘Taking Stock: The Internet of Things and Machine Learning Algorithms at War’

Image Credit: Israeli Defense Forces, The IDF Desert Reconnaissance Battalion Training Exercises

Terrific blog post by Gerhard Jacobs, writing at the Imperva Cybersecurity blog, discussing IoT and ML with Gilad Yehudai (Gilad is a Security Research Engineer at Imperva), this time examining how connected devices and machine learning interact to inform war fighting and warrior capabilities. Today's Must Read.

Matt Flynn: Information Security | Identity & Access Mgmt.: Convergence is the Key to Future-Proofing Security

I published a new article today on the Oracle Security blog that looks at the benefits of convergence in the security space as the IT landscape grows more disparate and distributed.

Security professionals have too many overlapping products under management and it's challenging to get quick and complete answers across hybrid, distributed environments. It's challenging to fully automate detection and response. There is too much confusion about where to get answers, not enough talent to cover the skills requirement, and significant hesitation to put the right solutions in place because there's already been so much investment.

Here's are a couple of excerpts:
Here’s the good news: Security solutions are evolving toward cloud, toward built-in intelligence via Machine Learning, and toward unified, integrated-by-design platforms. This approach eliminates the issues of product overlap because each component is designed to leverage the others. It reduces the burden related to maintaining skills because fewer skills are needed and the system is more autonomous. And, it promotes immediate and automated response as opposed to indecision. While there may not be a single platform to replace all 50 or 100 of your disparate security products today, platforms are emerging that can address core security functions while simplifying ownership and providing open integration points to seamlessly share security intelligence across functions.
 ...
 Forward-looking security platforms will leverage hybrid cloud architecture to address hybrid cloud environments. They’re autonomous systems that operate without relying on human maintenance, patching, and monitoring. They leverage risk intelligence from across the numerous available sources. And then they rationalize that data and use Machine Learning to generate better security intelligence and feed that improved intelligence back to the decision points. And they leverage built-in integration points and orchestration functionality to automate response when appropriate.
Click to read the full article: Convergence is the Key to Future-Proofing Security

Infosecurity.US: Hardware Security, Ramtin Amin’s Take

Friend of the Blog Trey Blalock of Firewall Consultants sent in a link yesterday which magically transported us to Ramtin Amin's web blog (in actuality, a hardware security blog of considerable renown) (gracias Trey!). Ramtin's work is indicative of a curious intellect and tremendous hardware investigatory chops (plus keen eye-hand coordination!). If you are at all fascinated by hardware security (coupled with mobile telephony, femto-cells, cabling/dongles and the like), his blog will come as a refreshing changement de rythme of to-the-point discussions of same. Don't Dawdle, Chop-Chop, Enjoy!

Software Company Settles with New Jersey AG Over Data Breach

On September 7, 2018, the New Jersey Attorney General announced a settlement with data management software developer Lightyear Dealer Technologies, LLC, doing business as DealerBuilt, resolving an investigation by the state Division of Consumer Affairs into a data breach that exposed the personal information of car dealership customers in New Jersey and across the country. The breach occurred in 2016, when a researcher exposed a gap in the company’s security and gained access to unencrypted files containing names, addresses, social security numbers, driver’s license numbers, bank account information and other data belonging to thousands of individuals, including at least 2,471 New Jersey residents.

To resolve the investigation, DealerBuilt agreed to undertake a number of changes to its security practices to help prevent similar breaches from occurring in the future, including:

  • the creation of an information security program to be implemented and maintained by a chief security officer;
  • the maintenance and implementation of encryption protocols for personal information stored on laptops or other portable devices or transmitted wirelessly (a minimal illustration of this kind of at-rest encryption appears after this list);
  • the maintenance and implementation of policies that clearly define which users have authorization to access its computer network;
  • the maintenance of enforcement mechanisms to approve or disapprove access requests based on those policies; and
  • the maintenance of data security assessment tools, including vulnerability scans.
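As a rough illustration of the at-rest encryption called for in the second settlement term above, here is a minimal sketch using the Python cryptography library's Fernet recipe; the sample record is made up, and the key is generated inline purely for illustration, whereas a real deployment would store and manage keys separately from the data.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # in practice, keys come from a key-management system
    cipher = Fernet(key)

    # A made-up customer record of the kind at issue in the breach.
    record = b"Jane Doe, 123 Main St, SSN 000-00-0000"

    token = cipher.encrypt(record)     # what gets written to the laptop's disk
    restored = cipher.decrypt(token)   # only possible with the key
    assert restored == record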

In addition to the above, DealerBuilt agreed to an $80,784 settlement amount, consisting of $49,420 in civil penalties and $31,364 in reimbursement of the Division’s attorneys’ fees, investigative costs and expert fees.

Read the consent order resolving the investigation.

“Zero Trust” Is the Opposite of Business

The term zero trust has been cropping up a lot lately, with even a small conference recently devoted to the topic. It sounds like an ideal security goal, but some caution is warranted.

The post “Zero Trust” Is the Opposite of Business appeared first on The Cyber Security Place.

NIST Launches Privacy Framework Effort

On September 4, 2018, the Department of Commerce’s National Institute of Standards and Technology (“NIST”) announced a collaborative project to develop a voluntary privacy framework to help organizations manage privacy risk. The announcement states that the effort is motivated by innovative new technologies, such as the Internet of Things and artificial intelligence, as well as the increasing complexity of network environments and detail of user data, which make protecting individuals’ privacy more difficult. “We’ve had great success with broad adoption of the NIST Cybersecurity Framework, and we see this as providing complementary guidance for managing privacy risk,” said Under Secretary of Commerce for Standards and Technology and NIST Director Walter G. Copan.

The goals for the framework stated in the announcement include providing an enterprise-level approach that helps organizations prioritize strategies for flexible and effective privacy protection solutions and bridge gaps between privacy professionals and senior executives so that organizations can respond effectively to these challenges without stifling innovation. To kick off the effort, the NIST has scheduled a public workshop on October 16, 2018, in Austin, Texas, which will occur in conjunction with the International Association of Privacy Professionals’ “Privacy. Security. Risk. 2018” conference. The Austin workshop is the first in a series planned to collect current practices, challenges and requirements in managing privacy risks in ways that go beyond common cybersecurity practices.

In parallel with the NIST’s efforts, the Department of Commerce’s National Telecommunications and Information Administration (“NTIA”) is “developing a domestic legal and policy approach for consumer privacy.” The announcement stated that the NTIA is coordinating its efforts with the department’s International Trade Administration “to ensure consistency with international policy objectives.”

Podcast Episode 111: Click Here to Kill Everybody and CyberSN on Why Security Talent Walks

In this week’s podcast (episode #111), sponsored by CyberSN: what happens when the Internet gets physical? Noted author and IBM security guru Bruce Schneier joins us to talk about his new book on Internet of Things risk: Click Here to Kill Everybody. Also: everyone knows that cyber security talent is hard to come by, and even harder to keep....

Read the whole entry... »


5 InfoSec Trends To Watch For

The infosec industry is booming, so it’s no surprise that new challenges and cyber threats will face IT departments in the years to come. With that in mind, what are the InfoSec trends we should all watch out for?

According to the Ponemon Institute, the global average cost of a data breach is down 10% from previous years to $3.62 million. However, this does not mean you should let your guard down. Here are some of the InfoSec trends you should watch out for:

Ransomware keeps growing

With 34% of people globally willing to pay a ransom to get their data back (a figure that rises to 64% among Americans), cybercriminals are more motivated than ever to raise the stakes, both in the number of victims and the size of ransom demands.

Higher demand for skilled professionals 

InfoSec jobs require specialized skills and extensive practical training. More sophisticated threats and techniques are discovered every day, and professionals must stay up-to-date. To keep pace, we recommend continuous reading, studying and hands-on practice, refreshed every 3-6 months.

Trusting a single individual, or a small in-house team, to protect and defend their digital assets has become an issue for companies (see the next point). For this reason, we’ll see more and more temporarily hired professionals such as bug bounty hunters, consultants and freelancers.

Data breaches are on the rise

According to The Economist, oil is no longer the world’s most valuable resource; data is. Data is growing faster than ever before, and by the year 2020, about 1.7 megabytes of new information will be created every second for every human being. In the quest for more power, cybercriminals are making data their favorite target.
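A quick back-of-the-envelope calculation puts that figure in perspective; the population number below is an assumption of roughly 7.7 billion people, not something from the article.

    # Rough scale of "1.7 MB per second per person".
    mb_per_second_per_person = 1.7
    people = 7.7e9
    seconds_per_day = 86_400

    daily_mb = mb_per_second_per_person * people * seconds_per_day
    print(f"~{daily_mb / 1e12:,.0f} exabytes of new data per day")  # 1 EB = 1e12 MB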

In addition, a global survey from Symantec suggests that 50% of employees keep confidential corporate data after leaving or losing their jobs, making insider threats just as serious a risk. Finding the right people to protect and defend their assets remains a key goal for corporations in 2018.

Attackers get smarter

With new technologies, software and techniques developed each day, and with penalties getting tougher, attackers have no option but to get smarter themselves… or to put their skills to use defending organizations instead. To catch a hacker, you must first think like one.

Cyber risk insurance becomes more common

As the InfoSec industry evolves, we might see more cyber insurance coverage for loss of customer trust, loss of future revenue from negative media coverage, and the cost of security infrastructure improvements or system upgrades.

Aspiring to become a Web Application Penetration Tester? Learn updated web app security skills with a free trial of the WAPTv3 training:
Get My Free Trial

Interested in knowing more about how we can help develop your IT Security team and new hires’ skill set? Click here to schedule a corporate demo

Sources: CSO, Information Age, CMS Wire, Forbes

Connect with us on Social Media:

Twitter | Facebook | LinkedIn | Instagram

Belgium Publishes Law Adapting the Belgian Legal Framework to the GDPR

On September 5, 2018, the Law of 30 July 2018 on the Protection of Natural Persons with regard to the Processing of Personal Data (the “Law”) was published in the Belgian Official Gazette.

This is the second step in adapting the Belgian legal framework to the EU GDPR after the Law of 3 December 2017 Creating the Data Protection Authority, which reformed the Belgian Data Protection Authority.

The Law is available in French and Dutch.

EU Begins Formal Approval for Japan Adequacy Decision

On September 5, 2018, the European Commission (the “Commission”) announced in a press release the launch of the procedure to formally adopt the Commission’s adequacy decision with respect to Japan.

The press release notes that the EU-Japan talks on personal data protection were completed in July 2018, and announces the publication of the draft adequacy decision and related documents which, among other things, set forth the additional safeguards Japan will accord EU personal data that is transferred to Japan. According to the release, Japan is undertaking a similar formal adoption process concerning the reciprocal adequacy findings between the EU and Japan.

The adequacy decision intends to ensure that Japan provides privacy protections for EU personal data that are “essentially equivalent” to the EU standard. The key elements of the agreement include:

  • Specific safeguards to be applied by Japan to bridge the difference between EU and Japanese standards on issues such as sensitive data, onward transfer of EU data to third countries, and the right to access and rectification.
  • Enforcement by the Japan Personal Information Protection Commission.
  • Safeguards concerning access to EU personal data by Japanese public authorities for law enforcement and national security purposes.
  • A complaint-handling mechanism.

The press release also notes that the adequacy decision will complement the EU-Japan Economic Partnership Agreement by supporting free data flows between the EU and Japan and providing for privileged access to 127 million Japanese consumers.

Finally, the press release also outlines the next four steps in the formal approval process:

  • Opinion from the European Data Protection Board.
  • Consultation of a committee composed of representatives from the EU Member States (comitology procedure).
  • Update of the European Parliament Committee on Civil Liberties, Justice and Home Affairs.
  • Adoption of the adequacy decision by the College of Commissioners.

Social-Engineer Newsletter Vol 08 – Issue 108

 

Vol 08 Issue 108
September 2018

In This Issue

  • Information Security, How Well is it Being Used to Protect Our Children at School?
  • Social-Engineer News
  • Upcoming classes

As a member of the newsletter you have the option to OPT-IN for special offers. You can click here to do that.


Check out the schedule of upcoming training on Social-Engineer.com

3-4 October, 2018 Advanced Open Source Intelligence for Social Engineers – Louisville, KY (SOLD OUT)

If you want to ensure your spot on the list register now – Classes are filling up fast and early!


The SEVillage at Def Con 26 would not have been possible without its amazing Sponsors!

Thank you to our Sponsor for SEVillage at DerbyCon 8.0!


Do you like FREE Stuff?

How about the first chapter of ALL OF Chris Hadnagy’s Best Selling Books?

If you do, you can register to get the first chapter completely free; just go over to http://www.social-engineer.com to download now!


To contribute your ideas or writing send an email to contribute@social-engineer.org


If you want to listen to our past podcasts hit up our Podcasts Page and download the latest episodes.


Our good friends at CSI Tech just put their RAM ANALYSIS COURSE ONLINE – FINALLY.

The course is designed for Hi-Tech Crime Units and other digital investigators who want to leverage RAM to acquire evidence or intelligence which may be difficult or even impossible to acquire from disk. The course does not focus on the complex structures and technology behind how RAM works, but rather on how an investigator can extract what they need for an investigation quickly and simply.

Interested in this course? Enter the code SEORG and get an amazing 15% off!
http://www.csitech.co.uk/training/online-ram-analysis-for-investigators/

You can also pre-order CSI Tech CEO Nick Furneaux’s new book, Investigating Cryptocurrencies: Understanding, Extracting, and Analyzing Blockchain Evidence, now!




A Special Thanks to:

The EFF for supporting freedom of speech


Information Security, How Well is it Being Used to Protect Our Children at School?

August and September are ordinary months to some, but to others they are a time of mixed emotions. It’s the start of another school year. Some are sad to see their children off, while others celebrate that day. The start of the school year brings with it a lot of paperwork and sharing of sensitive information. How well is information security being used to protect our children’s, and even the school staff’s, personally identifiable information (PII)? How well is it being used to protect against social engineering attacks?

Think about the information that the schools keep; when you registered your child, you may have had to give them copies of their birth certificate and social security number, your phone number, and other personal information. You may have had to give your own social security number, especially if you had to fill out an application for free and reduced-price meals, or you had to register to volunteer at the school. If your child is in a college or university, even more information has to be given, such as financial records, medical records, and high school transcripts. What is being done to keep that information secure?

When I read headlines like the following, they make me a little concerned. How about you?

These are only a few of the many stories out there. According to the Breach Level Index by Gemalto, the education sector had 33.4 million records breached in 2017 and a total of 199 reported breaches. This is a 20% increase in reported incidents over 2016. Seeing the incidents visualized on the K-12 Cyber Incident Map by the K-12 Cybersecurity Resource Center drives home just how widespread they are.

Who is breaching school networks, and why are they doing it?

Who is trying to breach a school’s network? It’s not just students doing it to change grades or for fun; it’s also elite attackers and common cybercriminals. Thanks to the easy availability of hacking tools and the sharing of malicious attack techniques on the dark web, they are able to install ransomware, encrypt drives, and demand payment to decrypt them. They are also able to exfiltrate PII and passwords to gain further access to networks and to steal and create identities. Identity thieves will use a child’s information to create a false identity, take out credit cards and loans, and ruin your child’s credit. When this happens, it can make it difficult to get a license, go to college, or get any loans.

How are they doing it?

Cybercriminals are opportunists who will take advantage of any vulnerabilities, especially in organizations that are less secure. Unfortunately, educational institutions’ security stance is usually poor, leaving them at high risk. They battle staffing and budgetary constraints, they have long treated cybersecurity as a low priority, and they often view security as an inconvenience.

Another point of weakness is the ease of access to the school’s network. Schools usually have free Wi-Fi, large numbers of desktop and mobile devices, and weak passwords, all of which present potential points of entry into the network. In addition, students will browse the web from insecure networks and often pick up malware, which can then be inadvertently shared with others via email or uploads of coursework to the secure school network.

So, what do cybercriminals do? They use a variety of web- and email-based attacks that are at their disposal. One web-based attack is malvertising: criminals place malicious ads on sites where students commonly browse. These are often completely legitimate sites, such as Thesaurus.com. No click is required; just viewing the ad can initiate the malware download.

An example of an email-based (phishing) attack targeting education occurred at Northeastern University, where some Blackboard Learn users were targeted by an email that tried to influence the reader into clicking a link disguised as legitimate and tried to compel the action by imposing a time constraint.
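
Screening for lures like that one can start with very simple heuristics. The short Python sketch below flags messages that combine time pressure with a request to follow a link; the urgency phrases and the sample message are illustrative assumptions, not a production filter.

    import re

    # Illustrative urgency phrases often seen in phishing lures (an assumed, partial list).
    URGENCY_PATTERNS = [
        r"within 24 hours",
        r"immediately",
        r"account (will be )?(suspended|closed)",
        r"expires? (today|soon)",
        r"verify your account",
        r"final notice",
    ]

    LINK_PATTERN = re.compile(r"https?://\S+", re.IGNORECASE)

    def looks_like_phish(body: str) -> bool:
        """Flag a message that pairs time pressure with a request to follow a link."""
        text = body.lower()
        pressured = any(re.search(pattern, text) for pattern in URGENCY_PATTERNS)
        has_link = bool(LINK_PATTERN.search(body))
        return pressured and has_link

    sample = ("Your Blackboard access will be suspended within 24 hours. "
              "Verify your account now: http://example.com/blackboard-login")
    print(looks_like_phish(sample))  # True

A heuristic this crude will miss plenty and misfire sometimes, but it captures the two ingredients the Northeastern-style phish relies on: urgency plus a link.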

With web- and email-based attacks, the cybercriminal can deliver ransomware and steal student records, all at a great cost to the school system and to those whose information is compromised.

What can be done?

When it comes to protecting our children we are willing to do anything, so what can we do to protect our children’s information?

Here are some things that parents can do:

1. Make sure that the personal computer that is used to log into the school’s network is up-to-date;

2. Make sure that computer has more than just antivirus installed; add anti-malware protection as well;

3. Be proactive and educate yourself and your children on security awareness;

  • Read the Social Engineer Framework;
  • Have your child create usernames that don’t contain personal information, such as birth year;
  • Look at using a private VPN when on an insecure network, such as at Starbucks. Trustworthy VPNs will usually have a fee for using them;
  • Teach children the importance of not giving out information;
  • Use a secure password manager and don’t share passwords;
  • Make sure teens don’t take a picture of their license and share it on social media; and
  • Don’t throw important documents in the trash, shred them.

4. Be watchful of your student’s browsing activity; and

5. Consider looking into an identity theft protection service to guard your child against identity theft.

Remember that just because you are asked to give out information doesn’t mean you have to. Ask, “why is it necessary for them to have that information?”

Schools need to follow industry best practices in information security, and we, as parents, need to demand that they do. Schools should also be pressed to address the human element in security:

  • Staff, teachers, students, and parents need to be educated and used as a line of defense; and
  • Institute security awareness training, which includes performing simulated phishing exercises, recruiting on-campus security advocates, and holding onsite security education activities, lectures, and in-class training.

Following these suggestions will help to protect our children’s information at school.

Need Inspiration?

If you want some inspiration, look at what some schools are doing:

  • One example: a July 2017 article in The Educator said that, in San Diego, CA, “the local ESET office runs an annual cyber boot camp for about 50 middle and high school students.”
  • Another example: a June 2017 article in The Educator discusses how Macquarie University in Australia uses BlackBerry AtHoc as part of the University’s Emergency Management Plan; the system will assist the school in managing and mitigating social engineering incidents, for example, by sending a message to staff and students recommending not to open a certain email or click on a certain link.

To some, these suggestions may be easier said than done, but if they aren’t followed, the school nearest you may be the next cybersecurity incident we read about. Information security must be implemented to protect the sensitive information (PII) that is housed at schools, especially our children’s.

Stay safe and secure.

Written By: Mike Hadnagy

Sources:

https://www.theeducatoronline.com/au/news/is-your-school-protected-against-cyber-threats/237855

https://www.theeducatoronline.com/au/technology/infrastructure-and-equipment/how-malware-could-be-threatening-your-school/246146

https://edtechmagazine.com/k12/article/2016/04/how-ever-worsening-malware-attacks-threaten-student-data

https://blogs.cisco.com/education/the-surprisingly-high-cost-of-malware-in-schools-and-how-to-stop-it

https://blog.barkly.com/school-district-malware-cyber-attacks

https://in.pcmag.com/asus-zenpad-s-80-z580ca/124559/news/facebook-serves-up-internet-101-lessons-for-kids

https://www.stuff.co.nz/business/105950814/schools-promised-better-protection-from-ransomware-as-taranaki-school-blackmailed

https://www.eset.com/int/about/why-eset/

As part of the newsletter group, you will be the first to receive special offers on services and products from Social-Engineer.Com.


 

 

The post Social-Engineer Newsletter Vol 08 – Issue 108 appeared first on Security Through Education.

Making Sense of Microsoft’s Endpoint Security Strategy

Microsoft is no longer content to simply delegate endpoint security on Windows to other software vendors. The company has released, fine-tuned or rebranded  multiple security technologies in a way that will have lasting effects on the industry and Windows users. What is Microsoft’s endpoint security strategy and how is it evolving?

Microsoft offers numerous endpoint security technologies, most of which include “Windows Defender” in their name. Some resemble built-in OS features (e.g., Windows Defender SmartScreen), others are free add-ons (e.g., Windows Defender Antivirus), while some are commercial enterprise products (e.g., the EDR component of Windows Defender Advanced Threat Protection). I created a table that explains the nature and dependencies of these capabilities in a single place. Microsoft is in the process of unifying these technologies under the Windows Defender Advanced Threat Protection branding umbrella—the name that originally referred solely to the company’s commercial incident detection and investigation product.

Microsoft’s approach to endpoint security appears to pursue the following objectives:

  • Motivate other vendors to innovate beyond the commodity security controls that Microsoft offers for its modern OS versions. Windows Defender Antivirus and Windows Defender Firewall with Advanced Security (WFAS) on Windows 10 are examples of such tech. Microsoft has been expanding these essential capabilities to be on par with similar features of commercial products. This not only gives Microsoft control over the security posture of its OS, but also forces other vendors to tackle the more advanced problems on the basis of specialized expertise or other strategic abilities.
  • Expand the revenue stream from enterprise customers. To centrally manage Microsoft’s endpoint security layers, organizations will likely need to purchase System Center Configuration Manager (SCCM) or Microsoft Intune. Obtaining some of Microsoft’s security technologies, such as the EDR component of Windows Defender Advanced Threat Protection, requires upgrading to the high-end Windows Enterprise E5 license. By bundling such commercial offerings with other products, rather than making them available in a standalone manner, the company motivates customers to shift all aspects of their IT management to Microsoft.

In pursuing these objectives, Microsoft developed the building blocks that are starting to resemble features of commercial Endpoint Protection Platform (EPP) products. The resulting solution is far from perfect, at least at the moment:

  • Centrally managing and overseeing these components is difficult for companies that haven’t fully embraced Microsoft for all their IT needs or that lack expertise in technologies such as Group Policy.
  • Making sense of the security capabilities, interdependencies and licensing requirements is challenging, frustrating and time-consuming.
  • Most of the endpoint security capabilities worth considering are only available for the latest versions of Windows 10 or Windows Server 2016. Some have hardware dependencies that make them incompatible with older hardware.
  • Several capabilities have dependencies that are incompatible with other products.  For instance, security features that rely on Hyper-V prevent users from using the VMware hypervisor on the endpoint.
  • Some technologies are still too immature or impractical for real-world deployments. For example, using my Windows 10 system after enabling the Controlled folder access feature became unbearable after a few days.
  • The layers fit together in an awkward manner at times. For instance, Microsoft provides two app whitelisting technologies—Windows Defender Application Control (WDAC) and AppLocker—that overlap in some functionality.

While infringing on the territory traditionally dominated by third parties on the endpoint, Microsoft leaves room for security vendors to provide value and work together with Microsoft’s security technologies.

Some of Microsoft’s endpoint security technologies still feel disjointed. They’re becoming less so, as the company fine-tunes its approach to security and matures its capabilities. Microsoft is steadily guiding enterprises towards embracing Microsoft as the de facto provider of IT products. Though not all enterprises will embrace an all-Microsoft vision for IT, many will. Endpoint security vendors will need to crystallize their role in the resulting ecosystem, expanding and clarifying their unique value proposition. (Coincidentally, that’s what I’m doing at Minerva Labs, where I run product management.)

Retired Malware Samples: Everything Old is New Again

I’m always on the quest for real-world malware samples that help educate professionals how to analyze malicious software. As techniques and technologies change, I introduce new specimens and retire old ones from the reverse-engineering course I teach at SANS Institute.  Here are some of the legacy samples that were once present in FOR610 materials. Though these malicious programs might not appear relevant anymore, aspects of their functionality are present even in modern malware.

A Backdoor with a Backdoor

To learn fundamental aspects of code-based and behavioral malware analysis, the FOR610 course examined Slackbot at one point. It was an IRC-based backdoor, which its author “slim” distributed as a compiled Windows executable without source code.

Dated April 18, 2000, Slackbot came with a builder that allowed its user to customize the name of the IRC server and channel it would use for Command and Control (C2). Slackbot documentation explained how the remote attacker could interact with the infected system over their designated channel and included this taunting note:

“don’t bother me about this, if you can’t figure out how to use it, you probably shouldn’t be using a computer. have fun. –slim”

Those who reverse-engineered this sample discovered that it had undocumented functionality. In addition to connecting to the user-specified C2 server, the specimen also reached out to a hardcoded server, irc.slim.org.au, that “slim” controlled. The #penix channel gave “slim” the ability to take over all the botnets that his or her “customers” were building for themselves.
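
For readers who haven’t looked at IRC-based C2 before, here is a minimal, benign sketch of the connect, join, and listen loop that a bot like Slackbot implements. The server, port, channel, and nickname below are hypothetical placeholders (replace them with infrastructure you control if you want to run it), and the “command handling” is reduced to printing whatever the operator says.

    import socket

    # Hypothetical placeholders; not the actual Slackbot C2 details.
    SERVER, PORT = "irc.example.net", 6667
    CHANNEL, NICK = "#builder-configured-channel", "demo-bot"

    def send(sock: socket.socket, line: str) -> None:
        """IRC is a line-oriented text protocol; every command ends with CRLF."""
        sock.sendall((line + "\r\n").encode())

    with socket.create_connection((SERVER, PORT)) as irc:
        # Minimal registration handshake, then join the channel baked in by the builder.
        send(irc, f"NICK {NICK}")
        send(irc, f"USER {NICK} 0 * :{NICK}")
        send(irc, f"JOIN {CHANNEL}")

        buffer = ""
        while True:
            data = irc.recv(4096)
            if not data:
                break  # server closed the connection
            buffer += data.decode(errors="replace")
            while "\r\n" in buffer:
                line, buffer = buffer.split("\r\n", 1)
                if line.startswith("PING"):
                    # Answer keep-alives, as any IRC client must.
                    send(irc, "PONG " + line.split(" ", 1)[1])
                elif "PRIVMSG" in line:
                    # A real bot would parse and execute operator commands here;
                    # this benign sketch only prints them.
                    print("operator said:", line.split(":", 2)[-1])

The interesting part for an analyst is not the protocol plumbing but the hardcoded values: recovering the equivalents of SERVER and CHANNEL from the binary is what exposed the hidden irc.slim.org.au connection.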

Turned out this backdoor had a backdoor! Not surprisingly, backdoors continue to be present in today’s “hacking” tools. For example, I came across a DarkComet RAT builder that was surreptitiously bundled with a DarkComet backdoor of its own.

You Are an Idiot

The FOR610 course used an example of a simple malevolent web page to introduce the techniques for examining potentially-malicious websites. The page was a nuisance that insulted its visitors with the message “You are an idiot.”

When the visitor attempted to navigate away from the offending site, its JavaScript popped up new instances of the page, making it very difficult to leave. Moreover, each instance of the page played the following jingle on the victim’s speakers. “You are an idiot,” the song exclaimed. “Ahahahahaha-hahahaha!” The cacophony of multiple windows blasting this jingle was overwhelming.

 

A while later I came across a network worm that played this sound file on victims’ computers, though I cannot find that sample anymore. While writing this post, I was surprised to discover a version of this page, sans the multi-window JavaScript trap, residing on www.youareanidiot.org. Maybe it’s true what they say: a good joke never gets old.

Clipboard Manipulation

When Flash reigned supreme among banner ad technologies, the FOR610 course covered several examples of such forms of malware. One of the Flash programs we analyzed was a malicious version of the ad pictured below:

At one point, visitors to legitimate websites, such as MSNBC, were reporting that their clipboards appeared “hijacked” when the browser displayed this ad. The advertisement, implemented as a Flash program, was using the ActionScript setClipboard function to replace victims’ clipboard contents with a malicious URL.

The attacker must have expected the victims to blindly paste the URL into messages without looking at what they were sharing. I remembered this sample when reading about a more recent example of malware that replaced Bitcoin addresses stored in the clipboard with the attacker’s own Bitcoin address for payments.
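
To make the trick concrete, here is a small sketch of the pattern-matching logic such clipboard hijackers rely on, plus the complementary check a cautious user or monitoring tool could perform. The regular expression covers only legacy Base58 addresses, and the attacker address is a made-up placeholder.

    import re

    # Legacy (P2PKH/P2SH) Bitcoin addresses: Base58, start with 1 or 3, 26-35 chars total.
    # Newer bech32 addresses (bc1...) would need a separate pattern; omitted for brevity.
    BTC_ADDRESS = re.compile(r"\b[13][a-km-zA-HJ-NP-Z1-9]{25,34}\b")

    ATTACKER_ADDRESS = "1AttackerPlaceholderAddrXXXXXXXXXX"  # hypothetical

    def swap_addresses(clipboard_text: str) -> str:
        """What clipboard-hijacking malware does: replace any copied address with its own."""
        return BTC_ADDRESS.sub(ATTACKER_ADDRESS, clipboard_text)

    def address_changed(copied: str, pasted: str) -> bool:
        """What a wary user or monitoring tool checks: did the address change in transit?"""
        return BTC_ADDRESS.findall(copied) != BTC_ADDRESS.findall(pasted)

    original = "Please send 0.1 BTC to 1BoatSLRHtKNngkdXEeobR76b53LETtpyT"
    tampered = swap_addresses(original)
    print(tampered)
    print(address_changed(original, tampered))  # True

The takeaway for users is the same as in the Flash example: look at what you are about to paste before you send it.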

As malware evolves, so do our analysis approaches, and so do the exercises we use in the FOR610 malware analysis course.  It’s fun to reflect upon the samples that at some point were present in the materials. After all, I’ve been covering this topic at SANS Institute since 2001. It’s also interesting to notice that, despite the evolution of the threat landscape, many of the same objectives and tricks persist in today’s malware world.

Scammers Use Breached Personal Details to Persuade Victims

Scammers use a variety of social engineering tactics when persuading victims to follow the desired course of action. One example of this approach involves including in the fraudulent message personal details about the recipient to “prove” that the victim is in the miscreant’s grip. In reality, the sender probably obtained the data from one of the many breaches that provide swindlers with an almost unlimited supply of personal information.

Personalized Porn Extortion Scam

Consider the case of an extortion scam in which the sender claims to have evidence of the victim’s pornography-viewing habits. The scammer demands payment in exchange for suppressing the “compromising evidence.” A variation of this technique was documented by Stu Sjouwerman at KnowBe4 in 2017. In a modern twist, the scammer includes personal details about the recipient—beyond merely the person’s name—such as the password the victim used:

“****** is one of your password and now I will directly come to the point. You do not know anything about me but I know alot about you and you must be thinking why are you getting this e mail, correct?

I actually setup malware on porn video clips (adult porn) & guess what, you visited same adult website to experience fun (you get my drift). And when you got busy enjoying those videos, your web browser started out operating as a RDP (Remote Desktop Protocol) that has a backdoor which provided me with accessibility to your screen and your web camera controls.”

The email includes a demand for payment via cryptocurrency such as Bitcoin to ensure that “Your naughty secret remains your secret.” The sender calls this “privacy fees.” Variations on this scheme are documented in the Blackmail Email Scam thread on Reddit.

The inclusion of the password that the victim used at some point in the past lends credibility to the sender’s claim that the scammer knows a lot about the recipient. In reality, the miscreant likely obtained the password from one of many data dumps that include email addresses, passwords, and other personal information stolen from breached websites.

Data Breach Lawsuit Scam

In another scenario, the scammer uses the knowledge of the victim’s phone number to “prove” possession of sensitive data. The sender poses as an entity that’s preparing to sue the company that allegedly leaked the data:

“Your data is compromised. We are preparing a lawsuit against the company that allowed a big data leak. If you want to join and find out what data was lost, please contact us via this email. If all our clients win a case, we plan to get a large amount of compensation and all the data and photos that were stolen from the company. We have all information to win. For example, we write to your email and include part your number ****** from a large leak.”

The miscreant’s likely objective is to solicit additional personal information from the victim under the guise of preparing the lawsuit, possibly requesting a social security number, bank account details, etc. The sender might have obtained the victim’s name, email address and phone number from a breached data dump, and is phishing for other, more lucrative data.

What to Do?

If you receive a message that solicits payment or confidential data under the guise of knowing some of your personal information, be skeptical. This is probably a mass-mailed scam and your best approach is usually to ignore the message. In addition, keep an eye on the breaches that might have compromised your data using the free and trusted service Have I Been Pwned by Troy Hunt, change your passwords when this site tells you they’ve been breached, and don’t reuse passwords across websites or apps.
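
If you want to automate part of that hygiene, Troy Hunt’s companion Pwned Passwords service exposes a k-anonymity API: you send only the first five characters of your password’s SHA-1 hash and compare the returned suffixes locally, so the password itself never leaves your machine. Here is a minimal Python sketch against the api.pwnedpasswords.com range endpoint; treat it as an illustration rather than a vetted tool.

    import hashlib
    import urllib.request

    def pwned_count(password: str) -> int:
        """Return how many times a password appears in known breach dumps (0 if not found)."""
        sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        prefix, suffix = sha1[:5], sha1[5:]
        request = urllib.request.Request(
            f"https://api.pwnedpasswords.com/range/{prefix}",
            headers={"User-Agent": "pwned-passwords-example"},
        )
        with urllib.request.urlopen(request) as response:
            body = response.read().decode()
        # Each line is "HASH_SUFFIX:COUNT"; the suffix comparison happens locally.
        for line in body.splitlines():
            candidate, _, count = line.partition(":")
            if candidate.strip() == suffix:
                return int(count)
        return 0

    print(pwned_count("password123"))  # a large number; never reuse a breached password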

Sometimes an extortion note is real and warrants a closer look and potentially law enforcement involvement. Only you know your situation and can decide on the best course of action. Fortunately, every example that I’ve had a chance to examine turned out to be a social engineering trick that recipients were best off ignoring.

To better understand the persuasion tactics employed by online scammers, take a look at my earlier articles on this topic.

 

Cyber is Cyber is Cyber

If you’re in the business of safeguarding data and the systems that process it, what do you call your profession? Are you in cybersecurity? Information security? Computer security, perhaps? The words we use, and the way in which the meaning we assign to them evolves, reflects the reality behind our language. If we examine the factors that influence our desire to use one security title over the other, we’ll better understand the nature of the industry and its driving forces.

Until recently, I’ve had no doubts about describing my calling as an information security professional. Yet, the term cybersecurity is growing in popularity. This might be because the industry continues to embrace the lexicon used in government and military circles, where cyber reigns supreme. Moreover, non-experts are simply more familiar with the word cyber.

When I asked on Twitter about people’s opinions on these terms, I received several responses, including the following:

  • Danny Akacki was surprised to discover, after some research, that the origin of cyber goes deeper than the marketing buzzword that many industry professionals believe it to be.
  • Paul Melson and Loren Dealy Mahler viewed cybersecurity as a subset of information security. Loren suggested that cyber focuses on technology, while Paul considered cyber as a set of practices related to interfacing with adversaries.
  • Maggie O’Reilly mentioned Gartner’s model that, in contrast, used cybersecurity as the overarching discipline that encompasses information security and other components.
  • Rik Ferguson also advocated for cybersecurity over information security, viewing cyber as a term that encompasses multiple components: people, systems, as well as information.
  • Jessica Barker explained that “people outside of our industry relate more to cyber,” proposing that if we want them to engage with us, “we would benefit from embracing the term.”

In line with Danny’s initial negative reaction to the word cyber, I’ve perceived cybersecurity as a term associated with heavy-handed marketing practices. Also, like Paul, Loren, Maggie and Rik, I have a sense that cybersecurity and information security are interrelated and somehow overlap. Jessica’s point regarding laypersons relating to cyber piqued my interest and, ultimately, changed my opinion of this term.

There is a way to dig into cybersecurity and information security to define them as distinct terms. For instance, NIST defines cybersecurity as:

“Prevention of damage to, protection of, and restoration of computers, electronic communications systems, electronic communications services, wire communication, and electronic communication, including information contained therein, to ensure its availability, integrity, authentication, confidentiality, and nonrepudiation.”

Compare that description to NIST’s definition of information security:

“The protection of information and information systems from unauthorized access, use, disclosure, disruption, modification, or destruction in order to provide confidentiality, integrity, and availability.”

From NIST’s perspective, cybersecurity is about safeguarding electronic communications, while information security is about protecting information in all forms. This implies that, at least according to NIST, information security is a subset of cybersecurity. While this nuance might be important in some contexts, such as regulations, the distinction probably won’t remain relevant for long, because of the points Jessica Barker raised.

Jessica’s insightful post on the topic highlights the need for security professionals to use language that our non-specialist stakeholders and people at large understand. She outlines a brief history that lends credence to the word cyber. She also explains that while most practitioners seem to prefer information security, this term is least understood by the public, where cybersecurity is much more popular. She explains that:

“The media have embraced cyber. The board has embraced cyber. The public have embraced cyber. Far from being meaningless, it resonates far more effectively than ‘information’ or ‘data’. So, for me, the use of cyber comes down to one question: what is our goal? If our goal is to engage with and educate as broad a range of people as possible, using ‘cyber’ will help us do that. A bridge has been built, and I suggest we use it.”

Technology and the role it plays in our lives continues to change. Our language evolves with it. I’m convinced that the distinction between cybersecurity and information security will soon become purely academic and ultimately irrelevant even among industry insiders. If the world has embraced cyber, security professionals will end up doing so as well. While I’m unlikely to wean myself off information security right away, I’m starting to gradually transition toward cybersecurity.

Communicating About Cybersecurity in Plain English

When cybersecurity professionals communicate with regular, non-technical people about IT and security, they often use language that virtually guarantees that the message will be ignored or misunderstood. This is often a problem for information security and privacy policies, which are written by subject-matter experts for people who lack the expertise. If you’re creating security documents, take extra care to avoid jargon, wordiness and other issues that plague technical texts.

To strengthen your ability to communicate geeky concepts in plain English, consider the following exercise: Take a boring paragraph from a security assessment report or an information security policy and translate it into a sentence that’s no longer than 15 words without using industry terminology. I’m not suggesting that the resulting statement should replace the original text; instead, I suspect this exercise will train you to write more plainly and succinctly.
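
If it helps to make the exercise mechanical, a few lines of code can point out where a draft breaks the 15-word budget or leans on jargon. The word limit mirrors the exercise above; the jargon list is an assumed, illustrative sample rather than any standard vocabulary.

    import re

    WORD_LIMIT = 15
    # Illustrative jargon to avoid in reader-facing text (an assumed sample list).
    JARGON = {"leverage", "utilize", "synergy", "paradigm", "holistic", "robust"}

    def review(text: str) -> None:
        """Print each sentence that exceeds the word budget or contains jargon."""
        sentences = re.split(r"(?<=[.!?])\s+", text.strip())
        for sentence in sentences:
            words = re.findall(r"[A-Za-z']+", sentence)
            problems = []
            if len(words) > WORD_LIMIT:
                problems.append(f"{len(words)} words")
            hits = JARGON.intersection(word.lower() for word in words)
            if hits:
                problems.append("jargon: " + ", ".join(sorted(hits)))
            if problems:
                print(f"- {sentence} ({'; '.join(problems)})")

    review("We will leverage a holistic, robust framework to operationalize "
           "information-classification controls across all organizational entities "
           "acting on behalf of the Company.")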

For example, I extracted and slightly modified a few paragraphs from the Princeton University Information Security Policy, just so that I could experiment with some public document written in legalese. I then attempted to relay the idea behind each paragraph in the form of a 3-line haiku (5-7-5 syllables per line):

This Policy applies to all Company employees, contractors and other entities acting on behalf of Company. This policy also applies to other individuals and entities granted use of Company information, including, but not limited to, contractors, temporary employees, and volunteers.

If you can read this,
you must follow the rules that
are explained below.

When disclosing Confidential information, the proposed recipient must agree (i) to take appropriate measures to safeguard the confidentiality of the information; (ii) not to disclose the information to any other party for any purpose absent the Company’s prior written consent.

Don’t share without a
contract any information
that’s confidential.

All entities granted use of Company Information are expected to: (i) understand the information classification levels defined in the Information Security Policy; (ii) access information only as needed to meet legitimate business needs.

Know your duties for
safeguarding company info.
Use it properly.

By challenging yourself to shorten a complex concept into a single sentence, you motivate yourself to determine the most important aspect of the text, so you can better communicate it to others. This approach might be especially useful for fine-tuning executive summaries, which often warrant careful attention and wordsmithing. This is just one of the ways in which you can improve your writing skills with deliberate practice.

Security Product Management at Large Companies vs. Startups

Is it better to perform product management of information security solutions at a large company or at a startup? Picking the setting that’s right for you isn’t as simple as craving the exuberant energy of a young firm or coveting the resources and brand of an organization that’s been around for a while. Each environment has its challenges and advantages for product managers. The type of innovation, nature of collaboration, sales dynamics, and cultural nuances are among the factors to consider when deciding which setting is best for you.

The perspective below is based on my product management experiences in the field of information security, though I suspect it’s applicable to product managers in other hi-tech environments.

Product Management at a Large Firm

In the world of information security, industry incumbents are usually large organizations. This is in part because growing in a way that satisfies investors generally requires the financial might, brand and customer access that’s hard for small cyber-security companies to achieve. Moreover, customers who are not early adopters often find it easier to focus their purchasing on a single provider of unified infosec solutions. These dynamics set the context for the product manager’s role at large firms.

Access to Customers

Though the specifics differ across organizations, product management often involves defining capabilities and driving adoption. The product manager’s most significant advantage at a large company is probably access to customers. This is due to the size of the firm’s sales and marketing organization, as well as the large number of companies that have already purchased some of the company’s products.

Such access helps with understanding requirements for new products, improving existing technologies, and finding new customers. For example, you could bring your product to a new geography by using the sales force present in that area without having to hire a dedicated team. Also, it’s easier to upsell a complementary solution than build a new customer relationship from scratch.

Access to Expertise

Another benefit of a large organization is access to funds and expertise that’s sometimes hard to obtain in a young, small company. Instead of hiring a full-time specialist for a particular task, you might be able to draw upon the skills and experience of someone who supports multiple products and teams. In addition, assuming your efforts receive the necessary funding, you might find it easier to pursue product objectives and enter new markets in a way that could be hard for a startup to accomplish. This isn’t always easy, because budgetary planning in large companies can be more onerous than Venture Capitalist fund raising.

Organizational Structure

Working in any capacity at an established firm requires that you understand and follow the often-changing bureaucratic processes inherent to any large entity. Depending on the organization’s structure, product managers in such environments might lack the direct control over the teams vital to the success of their product. Therefore, the product manager needs to excel at forming cross-functional relationships and influencing indirectly. (Coincidentally, this is also a key skill-set for many Chief Information Security Officers.)

Sometimes even understanding all of your own objectives and success criteria in such environments can be challenging. It can be even harder to stay abreast of the responsibilities of others in the corporate structure. On the other hand, one of the upsides of a large organization is the room to grow one’s responsibilities vertically and horizontally without switching organizations. This is often impractical in small companies.

What It’s Like at a Large Firm

In a nutshell, these are the characteristics inherent to product management roles at large companies:

  • An established sales organization, which provides access to customers
  • Potentially-conflicting priorities and incentives with groups and individuals within the organization
  • Rigid organizational structure and bureaucracy
  • Potentially-easier access to funding for sophisticated projects and complex products
  • Possibly-easier access to the needed expertise
  • Well-defined career development roadmap

I loved working as a security product manager at a large company. I was able to oversee a range of in-house software products and managed services that focused on data security. One of my solutions involved custom-developed hardware with integrated home-grown and third-party software, serviced by a team of help desk and in-the-field technicians. A fun challenge!

I also appreciated the chance to develop expertise in the industries that my employer serviced, so I could position infosec benefits in the context relevant to those customers. I enjoyed staying abreast of the social dynamics and politics of a siloed, matrixed organization. After a while I decided to leave because I was starting to feel a bit too comfortable. I also developed an appetite for risk and began craving the energy inherent to startups.

Product Management in a Startup

One of the most liberating, yet scary aspects of product management at a startup is that you’re starting the product from a clean slate. On the other hand, while product managers at established companies often need to account for legacy requirements and internal dependencies, a young firm is generally free of such entanglements, at least at the onset of its journey.

What markets are we targeting? How will we reach customers? What comprises the minimum viable product? Though product managers ask such questions in all types of companies, startups are less likely to survive erroneous answers in the long term. Fortunately, short-term experiments are easier to perform to validate ideas before making strategic commitments.

Experimenting With Capabilities

Working in a small, nimble company allows the product manager to quickly experiment with ideas, get them implemented, introduce them into the field, and gather feedback. In the world of infosec, rapidly iterating through defensive capabilities of the product is useful for multiple reasons, including the ability to assess—based on real-world feedback—whether the approach works against threats.

Have an idea that is so crazy, it just might work? In a startup, you’re more likely to have a chance to try some aspect of your approach, so you can rapidly determine whether it’s worth pursuing further. Moreover, given the mindshare that the industry’s incumbents have with customers, fast iterations help the startup understand which of its product capabilities customers will truly value.

Fluid Responsibilities

In all companies, almost every individual has a certain role for which they’ve been hired. Yet, the specific responsibilities assigned to that role in a young firm often benefit from the person’s interpretation, and are based on the person’s strengths and the company’s needs at a given moment. A security product manager working at a startup might need to assist with pre-sales activities, take part in marketing projects, perform threat research, and potentially develop proof-of-concept code, depending on what expertise the person possesses and what the company requires.

People in a small company are less likely to have the “it’s not my job” attitude than those in highly-structured, large organizations. A startup generally has fewer silos, making it easier to engage in activities that interest the person even if they are outside his or her direct responsibilities. This can be stressful and draining at times. On the other hand, it makes it difficult to get bored, and it also gives the product manager an opportunity to acquire skills in areas tangential to product management. (For additional details regarding this, see my article What’s It Like to Join a Startup’s Executive Team?)

Customer Reach

Product manager’s access to customers and prospects at a startup tends to be more immediate and direct than at a large corporation. This is in part because of the many hats that the product manager needs to wear, sometimes acting as a sales engineer and at times helping with support duties. These tasks give the person the opportunity to hear unfiltered feedback from current and potential users of the product.

However, a young company simply lacks a sales force with the scale to reach many customers until the firm builds up steam. (See Access to Customers above.) This means that the product manager might need to help identify prospects, which can be outside the comfort zone of individuals who haven’t participated in sales efforts in this capacity.

What It’s Like at a Startup

Here are the key aspects of performing product management at a startup:

  • Ability and need to iterate faster to get feedback
  • Willingness and need to take higher risks
  • Lower bureaucratic burden and red tape
  • Much harder to reach customers
  • Often fewer resources to deliver on the roadmap
  • Fluid designation of responsibilities

I’m presently responsible for product management at Minerva Labs, a young endpoint security company. I’m loving the make-or-break feeling of the startup. For the first time, I’m overseeing the direction of a core product that’s built in-house, rather than managing a solution built upon third-party technology. It’s gratifying to be involved in the creation of new technology in such a direct way.

There are lots of challenges, of course, but every day feels like an adventure, as we fight for a seat at the big kids’ table, grow the customer base and break new ground with innovative anti-malware approaches. It’s a risky environment with high highs and low lows, but it feels like the right place for me right now.

Which Setting is Best for You?

Numerous differences between startups and large companies affect the experience of working in these firms. The distinction is highly pronounced for product managers, who oversee the creation of the solutions sold by these companies. You need to understand these differences prior to deciding which of the environments is best for you, but that’s just a start. Next, understand what is best for you, given where you are in life and your professional development. Sometimes the capabilities that you as a product manager will have in an established firm will be just right; at others, you will thrive in a startup. Work in the environment that appeals to you, but also know when (or whether) it’s time to make a change.

Information Security and the Zero-Sum Game

A zero-sum game is a mathematical representation of a situation in which each participant’s gain or loss is exactly balanced by the losses or gains of the other participant. In Information Security a zero-sum game usually references the trade-off between being secure and having privacy. However, there is another zero-sum game often played with Information […]

Everything that is happening now has happened before

While looking through old notebooks, I found this piece that I wrote in 2014 for a book that never got published. Reading it through, it surprised me how much we are still facing the same challenges today as we did four years ago. Security awareness and security training are no different… So, you have just … Read More

Ground Control to Major Thom

I recently finished a book called “Into the Black” by Roland White, charting the birth of the space shuttle from the beginnings of the space race through to its untimely retirement. It is a fascinating account of why “space is hard” and exemplifies the need for compromise and balance of risks in even the harshest … Read More

New World, New Rules: Securing the Future State

I published an article today on the Oracle Cloud Security blog that takes a look at how approaches to information security must adapt to address the needs of the future state (of IT). For some organizations, it's really the current state. But, I like the term future state because it's inclusive of more than just cloud or hybrid cloud. It's the universe of Information Technology the way it will be in 5-10 years. It includes the changes in user behavior, infrastructure, IT buying, regulations, business evolution, consumerization, and many other factors that are all evolving simultaneously.

As we move toward that new world, our approach to security must adapt. Humans chasing down anomalies by searching through logs is an approach that will not scale and will not suffice. I included a reference in the article to a book called Afterlife. In it, the protagonist, FBI Agent Will Brody says "If you never change tactics, you lose the moment the enemy changes theirs." It's a fitting quote. Not only must we adapt to survive, we need to deploy IT on a platform that's designed for constant change, for massive scale, for deep analytics, and for autonomous security. New World, New Rules.

Here are a few excerpts:
Our environment is transforming rapidly. The assets we're protecting today look very different than they did just a few years ago. In addition to owned data centers, our workloads are being spread across multiple cloud platforms and services. Users are more mobile than ever. And we don’t have control over the networks, devices, or applications where our data is being accessed. It’s a vastly distributed environment where there’s no single, connected, and controlled network. Line-of-Business managers purchase compute power and SaaS applications with minimal initial investment and no oversight. And end-users access company data via consumer-oriented services from their personal devices. It's grown increasingly difficult to tell where company data resides, who is using it, and ultimately where new risks are emerging. This transformation is on-going and the threats we’re facing are morphing and evolving to take advantage of the inherent lack of visibility.
Here's the good news: The technologies that have exacerbated the problem can also be used to address it. On-premises SIEM solutions based on appliance technology may not have the reach required to address today's IT landscape. But, an integrated SIEM+UEBA designed from the ground up to run as a cloud service and to address the massively distributed hybrid cloud environment can leverage technologies like machine learning and threat intelligence to provide the visibility and intelligence that is so urgently needed.
Machine Learning (ML) mitigates the complexity of understanding what's actually happening and of sifting through massive amounts of activity that may otherwise appear to humans as normal. Modern attacks leverage distributed compute power and ML-based intelligence. So, countering those attacks requires a security solution with equal amounts of intelligence and compute power. As Larry Ellison recently said, "It can't be our people versus their computers. We're going to lose that war. It's got to be our computers versus their computers."
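As a rough, generic illustration of the kind of machine-learning assistance described above (not a description of any particular SIEM or UEBA product), an unsupervised model can surface the handful of events that don't look like everything else. The login features and values below are assumptions made up for the example.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Assumed per-login features: hour of day, failed attempts before success, MB downloaded.
    rng = np.random.default_rng(0)
    normal = np.column_stack([
        rng.normal(10, 2, 500),   # business-hours logins
        rng.poisson(0.2, 500),    # rare failed attempts
        rng.normal(50, 15, 500),  # typical data volume
    ])
    suspicious = np.array([[3, 9, 900],   # 3 a.m., many failures, huge download
                           [2, 7, 650]])
    events = np.vstack([normal, suspicious])

    # Train an unsupervised outlier detector and ask it which events look abnormal.
    model = IsolationForest(contamination=0.01, random_state=0).fit(events)
    flags = model.predict(events)      # -1 marks outliers
    print(np.where(flags == -1)[0])    # indices an analyst would triage first

Real products add far more context (identity, peer groups, threat intelligence), but the principle is the same: let the machine narrow millions of events to a short list a human can review.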
Click to read the full article: New World, New Rules: Securing the Future State.

Practical Tips for Creating and Managing New Information Technology Products

This cheat sheet offers advice for product managers of new IT solutions at startups and enterprises. To print it, use the one-page PDF version; you can also edit the Word version to customize it for your own needs.

Responsibilities of a Product Manager

  • Determine what to build, not how to build it.
  • Envision the future pertaining to product domain.
  • Align product roadmap to business strategy.
  • Define specifications for solution capabilities.
  • Prioritize feature requirements, defect correction, technical debt work and other development efforts.
  • Help drive product adoption by communicating with customers, partners, peers and internal colleagues.
  • Participate in the handling of issue escalations.
  • Sometimes take on revenue or P&L responsibilities.

Defining Product Capabilities

  • Understand gaps in the existing products within the domain and how customers address them today.
  • Understand your firm’s strengths and weaknesses.
  • Research the strengths and weaknesses of your current and potential competitors.
  • Define the smallest set of requirements for the initial (or next) release (minimum viable product).
  • When defining product requirements, balance long-term strategic needs with short-term tactical ones.
  • Understand your solution’s key benefits and unique value proposition.

Strategic Market Segmentation

  • Market segmentation often accounts for geography, customer size or industry verticals.
  • Devise a way of grouping customers based on the similarities and differences of their needs.
  • Also account for the similarities in your capabilities, such as channel reach or support abilities.
  • Determine which market segments you’re targeting.
  • Understand similarities and differences between the segments in terms of needs and business dynamics.
  • Consider how you’ll reach prospective customers in each market segment.

Engagement with the Sales Team

  • Understand the nature and size of the sales force aligned with your product.
  • Explore the applicability and nature of a reseller channel or OEM partnerships for product growth.
  • Understand sales incentives pertaining to your product and, if applicable, attempt to adjust them.
  • Look for misalignments, such as recurring SaaS product pricing vs. traditional quarterly sales goals.
  • Assess what other products are “competing” for the sales team’s attention, if applicable.
  • Determine the nature of support you can offer the sales team to train or otherwise support their efforts.
  • Gather sales’ negative and positive feedback regarding the product.
  • Understand which market segments and use-cases have gained the most traction in the product’s sales.

The Pricing Model

  • Understand the value that customers in various segments place on your product.
  • Determine your initial costs (software, hardware, personnel, etc.) related to deploying the product.
  • Compute your ongoing costs related to maintaining the product and supporting its users.
  • Decide whether you will charge customers recurring or one-time (plus maintenance) fees for the product.
  • Understand the nature of customers’ budgets, including any CapEx vs. OpEx preferences.
  • Define the approach to offering volume pricing discounts, if applicable.
  • Define the model for compensating the sales team, including resellers, if applicable.
  • Establish the pricing schedule, setting the price based on perceived value.
  • Account for the minimum desired profit margin.

Product Delivery and Operations

  • Understand the intricacies of deploying the solution.
  • Determine the effort required to operate, maintain and support the product on an ongoing basis.
  • Determine the technical steps, personnel, tools, support requirements and the associated costs.
  • Document the expectations and channels of communication between you and the customer.
  • Establish the necessary vendor relationship for product delivery, if necessary.
  • Clarify which party in the relationship has which responsibilities for monitoring, upgrades, etc.
  • Allocate the necessary support, R&D, QA, security and other staff to maintain and evolve the product.
  • Obtain the appropriate audits and certifications.

Product Management at Startups

  • Ability and need to iterate faster to get feedback
  • Willingness and need to take higher risks
  • Lower bureaucratic burden and red tape
  • Much harder to reach customers
  • Often fewer resources to deliver on the roadmap
  • Fluid designation of responsibilities

Product Management at Large Firms

  • An established sales organization, which provides access to customers
  • Potentially-conflicting priorities and incentives with groups and individuals within the organization
  • Rigid organizational structure and bureaucracy
  • Potentially-easier access to funding for sophisticated projects and complex products
  • Possibly-easier access to the needed expertise
  • Well-defined career development roadmap

Post-Scriptum

Authored by Lenny Zeltser, who has been responsible for product management of information security solutions at companies large and small. This cheat sheet, version 1.0, is released under the Creative Commons v3 “Attribution” License.

Hybrid Analysis Grows Up – Acquired by CrowdStrike

CrowdStrike acquired Payload Security, the company behind the automated malware analysis sandbox technology Hybrid Analysis, in November 2017. Jan Miller founded Payload Security approximately 3 years earlier. The interview I conducted with Jan in early 2015 captured his mindset at the onset of the journey that led to this milestone. I briefly spoke with Jan again, a few days after the acquisition. He reflected upon his progress over the three years of leading Payload Security so far and his plans for Hybrid Analysis as part of CrowdStrike.

Jan, why did you and your team decide to join CrowdStrike?

Developing a malware analysis product requires a constant stream of improvements to the technology, not only to keep up with the pace of malware authors’ attempts to evade automated analysis but also innovate and embrace the community. The team has accomplished a lot thus far, but joining CrowdStrike gives us the ability to access a lot more resources and grow the team to rapidly improve Hybrid Analysis in the competitive space that we live in. We will have the ability to bring more people into the team and also enhance and grow the infrastructure and integrations behind the free Hybrid Analysis community platform.

What role did the free version of your product, available at hybrid-analysis.com, play in the company’s evolution?

A lot of people in the community have been using the free version of Hybrid Analysis to analyze their own malware samples, share them with friends or to look up existing analysis reports and extract intelligence. Today, the site has approximately 44,000 active users and around 1 million sessions per month. One of the reasons the site took off is the simplicity and quality of the reports, focusing on what matters and enabling effective incident response.

The success of Hybrid Analysis was, to a large extent, due to the engagement from the community. The samples we have been receiving allowed us to constantly field-test the system against the latest malware, stay on top of the game and also to embrace feedback from security professionals. This allowed us to keep improving successfully, at a rapid pace, in a competitive space.

What will happen to the free version of Hybrid Analysis? I saw on Twitter that your team pinky-promised to continue making it available for free to the community, but I was hoping you could comment further on this.

I’m personally committed to ensuring that the community platform will stay not only free, but grow even more useful and offer new capabilities shortly. Hybrid Analysis deserves to be the place for professionals to get a reasoned opinion about any binary they’ve encountered. We plan to open up the API, add more integrations and other free capabilities in the near future.

What stands out in your mind as you reflect upon your Hybrid Analysis journey so far? What’s motivating you to move forward?

Starting out without any noteworthy funding, co-founders or advisors, in a saturated high-tech market that is extremely fast paced and full of money, it seemed impossible to succeed on paper. But the reality is: if you are offering a product or service that is solving a real-world problem considerably better than the market leaders, you always have a chance. My hope is that people who are considering becoming entrepreneurs will be encouraged to pursue their ideas. Be prepared to work 80 hours a week; with the right technology, feedback from the community, amazing team members and insightful advisors to lean on, you can make it happen.

In fact, it’s because of the value Hybrid Analysis has been adding to the community that I was able to attract the highly talented individuals that are currently on the team. It has always been important for me to make a difference, to contribute something and have a true impact on people’s lives. It all boils down to bringing more light than darkness into the world, as cheesy as that might sound.

Hyperbole in Breach Reporting

While reading the news this morning about yet another successful data breach, I couldn't help but wonder if the hyperbole used in reporting about data breaches is stifling our ability to educate key stakeholders on what they really need to know.

Today's example is about a firm that many rely on for security strategy, planning, and execution. The article I read stated that they were "targeted by a sophisticated hack" but later explains that the attacker compromised a privileged account that provided unrestricted "access to all areas". And, according to sources, the account only required a basic password with no two-step or multi-factor authentication. That doesn't sound too sophisticated, does it? Maybe they brute-forced it, or maybe they just guessed the password (or found it written down in an office?)

It reminded me of an attack on a security vendor back in 2011. As I recall, there was a lot of talk of the sophistication and complexity of the attack. It was called an Advanced Persistent Threat (and maybe some aspects of it were advanced). But, when the facts came out, an employee simply opened an email attachment that introduced malware into the environment - again, not overly sophisticated in terms of what we think a hack to be.

The quantity, availability, and effectiveness of attack techniques are enough to make anyone uncomfortable with their security posture. I previously wrote about a German company who, in a breach response, wrote that it is "virtually impossible to provide viable protection against organized, highly professional hacking attacks." CISOs are being told that they should expect to be breached. The only questions are about when and how to respond. It makes you feel like there's no hope; like there's no point in trying.

However, if you look at the two examples above that were described as highly sophisticated, they may have been avoided with simple techniques such as employee education, malware detection, and multi-factor authentication. I don't mean to over-simplify. I'm not saying it's all easy or that these companies are at fault or negligent. I'm just calling for less hyperbole in the reporting. Call out the techniques that help companies avoid similar attacks. Don't describe an attack as overly sophisticated if it's not. It makes people feel even more helpless when, perhaps, there are some simple steps that can be taken to reduce the attack surface.

I'd also advocate for more transparency from those who are attacked. Companies shouldn't feel like they have to make things sound more complicated or sophisticated than they are. There's now a growing history of reputable companies (including in the security industry) who have been breached. If you're breached, you're in good company. Let's talk in simple terms about the attacks that happen in the real world. An "open kimono" approach will be more effective at educating others in prevention. And again, less hyperbole - we don't need to overplay to emotion here. Everyone is scared enough. We know the harsh reality of what we (as security professionals) are facing. So, let's strive to better understand the real attack surface and how to prioritize our efforts to reduce the likelihood of a breach.

Encryption would NOT have saved Equifax

I read a few articles this week suggesting that the big question for Equifax is whether or not their data was encrypted. The State of Massachusetts, speaking about the lawsuit it filed, said that Equifax "didn't put in safeguards like encryption that would have protected the data." Unfortunately, encryption, as it's most often used in these scenarios, would not have actually prevented the exposure of this data. This breach will have an enormous impact, so we should be careful to get the facts right and provide as much education as possible to law makers and really to anyone else affected.

We know that the attack took advantage of a flaw in Apache Struts (that should have been patched). Struts is a framework for building applications. It lives at the application tier. The data, obviously, resides at the data tier. Once the application was compromised, it really doesn't matter if the data was encrypted because the application is allowed to access (and therefore to decrypt) the data.

I won't get into all the various encryption techniques that are possible but there are two common types of data encryption for these types of applications. There's encryption of data in motion so that nobody can eavesdrop on the conversation as data moves between tiers or travels to the end users. And there's encryption of data at rest that protects data as it's stored on disk so that nobody can pick up the physical disk (or the data file, depending on how the encryption is applied) and access the data. Once the application is authenticated against the database and runs a query against the data, it is able to access, view, and act upon the data even if the data was encrypted while at rest.
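
To make that point concrete, here is a minimal, purely illustrative sketch (using the Python cryptography library; the key handling is deliberately simplified and hypothetical): the data written to disk is ciphertext, but because the application tier must hold the key to serve queries, any code running as the application - including an attacker's - gets the plaintext back.

from cryptography.fernet import Fernet  # pip install cryptography

# "At rest" protection: what is written to disk is ciphertext.
key = Fernet.generate_key()            # in practice, held by the database/app tier
f = Fernet(key)
stored_on_disk = f.encrypt(b"123-45-6789")

# The application decrypts transparently whenever it runs a query, so a
# compromised application sees plaintext despite the at-rest encryption.
print(f.decrypt(stored_on_disk))       # b'123-45-6789'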

Note that there is a commonly-applied technique that applies at-rest encryption at the application tier. I don't want to confuse the conversation with too much detail, but it usually involves inserting some code into the application to encrypt/decrypt. I suspect that if the application is compromised then app-tier encryption would have been equally unhelpful.

The bottom line here is that information security requires a broad, layered defense strategy. There are numerous types of attacks. A strong security program addresses as many potential attack vectors as possible within reason. (My use of "within reason" is a whole other conversation. Security strategies should evaluate risk in terms of likelihood of an attack and the damage that could be caused.) I already wrote about a layered approach to data protection within the database tier. But that same approach of layering security applies to application security (and information security in general). You have to govern the access controls, ensure strong enough authentication, understand user context, identify anomalous behavior, encrypt data, and, of course, patch your software and maintain your infrastructure. This isn't a scientific analysis. I'm just saying that encryption isn't a panacea and probably wouldn't have helped at all in this case.

Equifax says that their "security organization was aware of this vulnerability at that time, and took efforts to identify and to patch any vulnerable systems in the company's IT infrastructure." Clearly, humans need to rely on technology to help identify what systems exist in the environment, what software is installed, which versions, etc. I have no idea what tools Equifax might have used to scan their environment. Maybe the tool failed to find this install. But their use of "at that time" bothers me too. We can't rely on point-in-time assessments. We need continuous evaluations on a never ending cycle. We need better intelligence around our IT infrastructures. And as more workloads move to cloud, we need a unified approach to IT configuration compliance that works across company data centers and multi-cloud environments.

100% protection may be impossible. The best we can do is weigh the risks and apply as much security as possible to mitigate those risks. We should also all be moving to a continuous compliance model where we are actively assessing and reassessing security in real time. And again... layer, layer, layer.

Tips for Reverse-Engineering Malicious Code

This cheat sheet outlines tips for reversing malicious Windows executables via static and dynamic code analysis with the help of a debugger and a disassembler. To print it, use the one-page PDF version; you can also edit the Word version to customize it for your own needs.

Overview of the Code Analysis Process

  1. Examine static properties of the Windows executable for initial assessment and triage (see the sketch after this list).
  2. Identify strings and API calls that highlight the program’s suspicious or malicious capabilities.
  3. Perform automated and manual behavioral analysis to gather additional details.
  4. If relevant, supplement your understanding by using memory forensics techniques.
  5. Use a disassembler for static analysis to examine code that references risky strings and API calls.
  6. Use a debugger for dynamic analysis to examine how risky strings and API calls are used.
  7. If appropriate, unpack the code and its artifacts.
  8. As your understanding of the code increases, add comments, labels; rename functions, variables.
  9. Progress to examine the code that references or depends upon the code you’ve already analyzed.
  10. Repeat steps 5-9 above as necessary (the order may vary) until analysis objectives are met.
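
As a hedged illustration of steps 1 and 2 above, the sketch below uses the pefile library to dump a few static properties (compile timestamp, section names, sizes, and entropy) and to pull printable strings from a sample. The file path is a placeholder; treat this as a minimal triage sketch rather than a complete static-analysis workflow.

import re
import pefile  # pip install pefile

def triage(path):
    pe = pefile.PE(path)
    print("Compile timestamp:", pe.FILE_HEADER.TimeDateStamp)
    for section in pe.sections:
        name = section.Name.rstrip(b"\x00").decode(errors="replace")
        # Entropy close to 8.0 often indicates packed or encrypted content.
        print(f"{name:10} size={section.SizeOfRawData:<8} entropy={section.get_entropy():.2f}")
    # Printable ASCII strings of 6+ characters give a quick hint of capabilities.
    data = open(path, "rb").read()
    for s in re.findall(rb"[ -~]{6,}", data)[:25]:
        print(s.decode())

triage("sample.exe")  # placeholder path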

Common 32-Bit Registers and Uses

EAX Addition, multiplication, function results
ECX Counter; used by LOOP and others
EBP Baseline/frame pointer for referencing function arguments (EBP+value) and local variables (EBP-value)
ESP Points to the current “top” of the stack; changes via PUSH, POP, and others
EIP Instruction pointer; points to the next instruction; shellcode gets it via call/pop
EFLAGS Contains flags that store outcomes of computations (e.g., Zero and Carry flags)
FS Segment register; FS:[0] points to the SEH chain, FS:[0x30] points to the PEB.

Common x86 Assembly Instructions

mov EAX,0xB8 Put the value 0xB8 in EAX.
push EAX Put EAX contents on the stack.
pop EAX Remove contents from top of the stack and put them in EAX.
lea EAX,[EBP-4] Put the address of variable EBP-4 in EAX.
call EAX Call the function whose address resides in the EAX register.
add esp,8 Increase ESP by 8 to shrink the stack by two 4-byte arguments.
sub esp,0x54 Shift ESP by 0x54 to make room on the stack for local variable(s).
xor EAX,EAX Set EAX contents to zero.
test EAX,EAX Check whether EAX contains zero, set the appropriate EFLAGS bits.
cmp EAX,0xB8 Compare EAX to 0xB8, set the appropriate EFLAGS bits.

Understanding 64-Bit Registers

  • EAX→RAX, ECX→RCX, EBX→RBX, ESP→RSP, EIP→RIP
  • Additional 64-bit registers are R8-R15.
  • RSP is often used to access stack arguments and local variables, instead of EBP.
  • R8 is the full 64-bit register; R8D refers to its lower 32 bits, R8W to the lower 16 bits, and R8B to the lowest 8 bits.

Passing Parameters to Functions

arg0 [EBP+8] on 32-bit, RCX on 64-bit
arg1 [EBP+0xC] on 32-bit, RDX on 64-bit
arg2 [EBP+0x10] on 32-bit, R8 on 64-bit
arg3 [EBP+0x14] on 32-bit, R9 on 64-bit

Decoding Conditional Jumps

JA / JG Jump if above/jump if greater.
JB / JL Jump if below/jump if less.
JE / JZ Jump if equal; same as jump if zero.
JNE / JNZ Jump if not equal; same as jump if not zero.
JGE / JNL Jump if greater or equal; same as jump if not less.

Some Risky Windows API Calls

  • Code injection: CreateRemoteThread, OpenProcess, VirtualAllocEx, WriteProcessMemory, EnumProcesses
  • Dynamic DLL loading: LoadLibrary, GetProcAddress
  • Memory scraping: CreateToolhelp32Snapshot, OpenProcess, ReadProcessMemory, EnumProcesses
  • Data stealing: GetClipboardData, GetWindowText
  • Keylogging: GetAsyncKeyState, SetWindowsHookEx
  • Embedded resources: FindResource, LockResource
  • Unpacking/self-injection: VirtualAlloc, VirtualProtect
  • Query artifacts: CreateMutex, CreateFile, FindWindow, GetModuleHandle, RegOpenKeyEx
  • Execute a program: WinExec, ShellExecute, CreateProcess
  • Web interactions: InternetOpen, HttpOpenRequest, HttpSendRequest, InternetReadFile
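
To make the list above actionable, here is a minimal sketch - again assuming the pefile library - that flags a subset of these APIs in a sample's import table. The API set below is illustrative rather than exhaustive, and keep in mind that many Windows APIs appear with A/W suffixes.

import pefile  # pip install pefile

RISKY_APIS = {
    b"CreateRemoteThread", b"WriteProcessMemory", b"VirtualAllocEx",
    b"SetWindowsHookExA", b"SetWindowsHookExW", b"GetAsyncKeyState",
    b"InternetOpenA", b"HttpSendRequestA", b"VirtualProtect",
}

def flag_risky_imports(path):
    pe = pefile.PE(path)
    hits = []
    # DIRECTORY_ENTRY_IMPORT is only present when the sample has an import table.
    for entry in getattr(pe, "DIRECTORY_ENTRY_IMPORT", []):
        for imp in entry.imports:
            if imp.name and imp.name in RISKY_APIS:
                hits.append((entry.dll.decode(), imp.name.decode()))
    return hits

print(flag_risky_imports("sample.exe"))  # placeholder path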

Additional Code Analysis Tips

  • Be patient but persistent; focus on small, manageable code areas and expand from there.
  • Use dynamic code analysis (debugging) for code that’s too difficult to understand statically.
  • Look at jumps and calls to assess how the specimen flows from “interesting” code block to the other.
  • If code analysis is taking too long, consider whether behavioral or memory analysis will achieve the goals.
  • When looking for API calls, know the official API names and the associated native APIs (Nt, Zw, Rtl).

Post-Scriptum

Authored by Lenny Zeltser with feedback from Anuj Soni. Malicious code analysis and related topics are covered in the SANS Institute course FOR610: Reverse-Engineering Malware, which they’ve co-authored. This cheat sheet, version 1.0, is released under the Creative Commons v3 “Attribution” License.

How to Deploy Your Own Algo VPN Server in the DigitalOcean Cloud

When performing security research or connecting over untrusted networks, it’s often useful to tunnel connections through a VPN in a public cloud. This approach helps conceal your origin and safeguard your traffic, contributing to OPSEC when interacting with malicious infrastructure or traversing hostile environments. Moreover, by using VPN exit nodes in different cities and even countries, the researcher can explore the target from multiple geographic vantage points, which sometimes yields additional findings.

One way to accomplish this is to set up your own VPN server in a public cloud, as an alternative to relying on a commercial VPN service. The following tutorial explains how to deploy the Algo VPN software bundle on DigitalOcean (the link includes my referral code). I like using DigitalOcean for this purpose because it offers virtual private server instances for as little as $5 per month; also, I find it easier to use than AWS.

Algo VPN Overview

Algo VPN is an open source software bundle designed for self-hosted IPSec VPN services. It was designed by the folks at Trail of Bits to be easy to deploy, rely only on modern protocols and ciphers and provide reasonable security defaults. Also, it doesn’t require dedicated VPN client software for connecting from most systems and devices, because of native IPSec support.

To understand why its creators believe Algo VPN is a better alternative to commercial VPNs, the Streisand VPN bundle and OpenVPN, read the blog post that announced Algo’s initial release.  As outlined in the post, Algo VPN is meant “to be easy to set up. That way, you start it when you need it, and tear it down before anyone can figure out the service you’re routing your traffic through.”

Creating a DigitalOcean Virtual Private Server

To obtain an Internet-accessible system where you’ll install Algo VPN server software, you can create a “droplet” on DigitalOcean running Ubuntu with a few clicks. To do that, click the dropdown button below the Ubuntu icon on the DigitalOcean “Create Droplets” page, then select the 18.04 option, as shown below. (Don’t use 16.04 due to a possible DNS issue.)

Accepting default options for the droplet should be OK in most cases. If you’re not planning to tunnel a lot of traffic through the system, selecting the least expensive size will probably suffice. Select the geographic region where the Virtual Private Server will run based on your requirements. Assign a hostname that appeals to you.

Once the new host is active, make a note of the public IP address that DigitalOcean assigns to it and log into it using SSH. Then run the following commands inside the new virtual private server to update its OS and install Algo VPN core prerequisites:

apt-add-repository -y ppa:ansible/ansible
apt-get update -y
apt-get upgrade -y
apt-get install -y build-essential \
  libssl-dev \
  libffi-dev \
  python-dev \
  python-pip \
  python-setuptools \
  python-virtualenv

At this point you could harden the configuration of the virtual private server, but these steps are outside the scope of this guide.

Installing Algo VPN Server Software

Next, obtain the latest Algo VPN server software on the newly-setup droplet and prepare for the installation by executing the following commands:

git clone https://github.com/trailofbits/algo
cd algo
python -m virtualenv env
source env/bin/activate
python -m pip install -U pip
python -m pip install -r requirements.txt

Set up the usernames for the people who will be using the VPN. To accomplish this, use your favorite text editor, such as Nano or Vim, to edit the config.cfg file in the ~/algo directory:

vim config.cfg

Remove the lines that represent the default users “dan” and “jack” and add your own (e.g., “john”), so that the corresponding section of the file looks like this:

users:
 - john

After saving the file and exiting the text editor, execute the following command in the ~/algo directory to install Algo software:

./algo

When prompted by the installer, select the option to install “to existing Ubuntu 16.04 server”. Note that you’ll be selecting “16.04” even though you’re actually installing Algo VPN on Ubuntu version 18.04.

When proceeding with the installer, you should be OK  in most cases by accepting default answers with a few exceptions:

  • When asked about the public IP address of the server, enter the IP address assigned to the virtual private server by DigitalOcean when you created the droplet.
  • If planning to VPN from Windows 10 or Linux desktop client systems, answer “Y” to the corresponding question.

After providing the answers, give the installer a few minutes to complete its tasks. (Be patient.) Once it finishes, you’ll see the “Congratulations!” message, stating that your Algo VPN server is running.

Be sure to capture the “p12 and SSH keys password for new users” that the installer will display at the end as part of the congratulatory message, because you will need to use it later. Store it in a safe place, such as your password vault.

Configuring VPN Clients

Once you’ve set up the Algo VPN service, follow the instructions on the Algo VPN website to configure your VPN client. The steps are different for each OS. Fortunately, the Algo setup process generates files that allow you to accomplish this with relative ease. It stores the files under ~/algo/configs in a subdirectory whose name matches your server’s IP address.

For instance, to configure your iOS device, transfer your user’s Apple Profile file that has the .mobileconfig extension (e.g., john.mobileconfig) to the device, then open the file to install it. Once this is done, you can go to Settings > VPN on your iOS device to enable the VPN when you wish to use it. If at some point you wish to delete this VPN profile, go to General > Profile.

If setting up the VPN client on Windows 10, retrieve from the Algo VPN server your user’s file with the .ps1 extension (e.g., windows_john.ps1). Then, open the Administrator shell on the Windows system and execute the following command from the folder where you’ve placed these files, adjusting the file name to match your name:

powershell -ExecutionPolicy ByPass -File windows_john.ps1 -Add

When prompted, supply the  p12 password that the Algo VPN installer displayed at the end of the installation. This will import the appropriate certificate information and create the VPN connection entry. To connect to the VPN server, go to Settings > Network & Internet > VPN. If you wish to remove the VPN entry, use the PowerShell command above, replacing “-Add” with “-Remove”.

Additional Considerations for Algo VPN

Before relying on the VPN to safeguard your interactions with malicious infrastructure, be sure to confirm that it’s concealing the necessary aspects of your origin. If it’s working properly, the remote host should see the IP address of your VPN server instead of the IP address of your VPN client. Similarly, your DNS traffic should be directed through the VPN tunnel, concealing your client’s locally-configured DNS server. One way to validate this is to use whoer.net, comparing what information the site reveals before and after you activate your VPN connection. Also, confirm that you’re not leaking your origin over IPv6; one way to do that is by connecting to ipv6leak.com.
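
One minimal way to script the before/after check of your public IP (the DNS and IPv6 checks still require the sites mentioned above) is to query a public IP echo service such as api.ipify.org. The sketch below is illustrative rather than a complete leak test.

import urllib.request

def public_ip():
    # api.ipify.org simply echoes back the public IP address it sees.
    with urllib.request.urlopen("https://api.ipify.org", timeout=10) as resp:
        return resp.read().decode().strip()

# Run once before connecting to the VPN and once after; the address should
# change from your real public IP to the droplet's IP when the tunnel is up.
print("Current public IP:", public_ip())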

You can turn off the virtual private server when you don’t need it. When you boot it up again, Algo VPN software will automatically launch in the background. If running the server for longer periods of time, you should implement security measures appropriate for Internet-connected infrastructure.

As you use your Algo VPN server, adversaries might begin tracking the server’s IP address and eventually blacklist it. Therefore, it’s a good idea to periodically destroy this DigitalOcean droplet and create a new one from scratch. This will not only change the server’s IP address, but also ensure that you’re running the latest version of VPN software and its dependencies. Unfortunately, after you do this, you’ll need to reimport VPN client profiles to match the new server’s IP address and certificate details.

Layered Database Security in the age of Data Breaches

We live in a time of daily breach notifications. One recently affected organization in Germany put out a statement which said: "The incident is not attributable to security deficiencies." and "Human error can also be ruled out." They went on to say that it is "virtually impossible to provide viable protection against organized, highly professional hacking attacks." It's a tough climate we find ourselves in. It just feels too hard or impossible at times. And there's some truth to that. There are way too many potential attack vectors for comfort.

Many breaches occur in ways that make it difficult to pinpoint exactly what might have prevented it. Or, the companies involved hide details about what actually happened or how. In some cases, they lie. They might claim there was some Advanced Persistent Threat on the network when in reality, it was a simple phishing attack where credentials were simply handed over.

In one recent case, a third party vendor apparently uploaded a database file to an unsecured Amazon AWS server. A media outlet covering the story called out that it was not hacking because the data was made so easily available. Numerous checkpoints come to mind that each could have prevented or lessened the damage in this scenario. I’d like to paint a picture of the numerous layers of defense that should be in place to help prevent this type of exposure.

Layer 1: Removing Production Data
The data should have been long removed from the database.
Assuming this is a non-production database (and I sure hope it is), it should have been fully masked before it was even saved as a file. Masking data means completely removing the original sensitive data and replacing it with fake data that looks and acts real. This enables safe use of the database for app development, QA, and testing. Data can be masked as it’s exported from the production database (most secure) or in a secure staging environment after the initial export. Had this step been done, the database could safely be placed on an insecure AWS server with limited security concerns because there’s no real data. An attacker could perhaps use the DB schema or other details to better formulate an attack on the production data, so I’m not recommending posting masked databases publicly, but the risk of data loss is severely limited once the data is masked.
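
As a rough illustration of what masking means in practice, the sketch below replaces sensitive fields with realistic-looking fakes. Real masking tools preserve referential integrity and data distributions; this is only a minimal, hypothetical example.

import random

def mask_row(row):
    """Return a copy of the record with sensitive values replaced by fakes."""
    masked = dict(row)
    masked["name"] = f"Test User {random.randint(1000, 9999)}"
    masked["email"] = f"user{random.randint(100000, 999999)}@example.com"
    masked["ssn"] = "%03d-%02d-%04d" % (random.randint(1, 899),
                                        random.randint(1, 99),
                                        random.randint(1, 9999))
    return masked

print(mask_row({"name": "Jane Doe", "email": "jane@corp.example", "ssn": "123-45-6789"}))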

Layer 2: Secure Cloud Server Configuration
The researcher should never have been able to get to the file.
A security researcher poking around the web should never have been able to access this database file. Proper server configuration and access controls should prevent unauthorized access to any files (including databases). In addition to documenting proper security configuration, certain Cloud Security Access Brokers can be used to continuously monitor AWS instances to ensure that server configurations match the corporate guidelines. Any instances of configuration drift can be auto-remediated with these solutions to ensure that humans don’t accidentally misconfigure servers or miss security settings in the course of daily administration.

Layer 3: Apply Database Encryption
Even with access to the database file, the researcher should not have been able to access the data.
At-rest data encryption that is built into the database protects sensitive data against this type of scenario. Even if someone has the database file, if it were encrypted, the file would essentially be useless. An attacker would have to implement an advanced crypto attack which would take enormous resources and time to conduct and is, for all intents and purposes, impractical. Encryption is a no-brainer. Some organizations use disk-layer encryption, which is OK in the event of lost or stolen disk. However, if a database file is moved to an unencrypted volume, it is no longer protected. In-database encryption improves security because the security stays with the file regardless of where it’s moved or exported. The data remains encrypted and inaccessible without the proper encryption keys regardless of where the database file is moved.

Layer 4: Apply Database Administrative Controls
Even with administrative permissions to the database, the researcher should not have been able to access the sensitive data.
I’m not aware of similar capabilities outside of Oracle database, but Oracle Database Vault would have also prevented this breach by implementing access controls within the database. Database Vault effectively segregates roles (enforces Separation of Duties) so that even an attacker with DBA permissions and access to the database file and encryption keys cannot run queries against the sensitive application data within the database because their role does not allow it. This role-based access, enforced within the database, is an extremely effective control to avoid accidental access that may occur throughout the course of daily database administration.

Layer 5: Protect Data Within the Database
Even with full authorization to application data, highly sensitive fields should be protected within the database.
Assuming all of the other layers break down and you have full access to the unencrypted database file and credentials that are authorized to access the sensitive application data, certain highly sensitive fields should be protected via application-tier encryption. Social Security Numbers and Passwords, for example, shouldn’t be stored in plain text. By applying protection for these fields at the app layer, even fully authorized users wouldn’t have access. We all know that passwords should be hashed so that the password field is only useful to the individual user who enters their correct password. But other fields, like SSN, can be encrypted at the app layer to protect against accidental exposure (human error), intentional insider attack, or exposed credentials (perhaps via phishing attack).
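
As a minimal sketch of the app-tier protection described above (using only the Python standard library; the parameters are illustrative), a password gets a salted, slow hash so that only the user who supplies the correct password can match it. A field like an SSN would instead be encrypted with a key kept outside the database, so that database access alone never yields the plaintext.

import hashlib, hmac, os

def hash_password(password, salt=None):
    """Salted PBKDF2 hash; store the salt and digest, never the password itself."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, expected_digest):
    return hmac.compare_digest(hash_password(password, salt)[1], expected_digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False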

Maybe the vendor didn’t follow the proper protocols instituted by the organization. Maybe they made a human error; we all make mistakes. But, that’s why a layered approach to database security is critical on any database instances where sensitive production data resides. Security protocols shouldn’t require humans to make the right decisions. They should apply security best practices by default and without option.

Assuming this was a non-production database, any sensitive data should have been fully masked/replaced before it was even made available. And, if it was a production DB, database encryption and access control protections that stay with the database during export or if the database file is moved away from an encrypted volume should have been applied. The data should have been protected before the vendor's analyst ever got his/her hands on it. Oracle Database Vault would have prevented even a DBA-type user from being able to access the sensitive user data that was exposed here. These are not new technologies; they’ve been around for many years with plentiful documentation and industry awareness.

Unfortunately, a few of the early comments I read on this particular event were declarations or warnings about how this proves that cloud is less secure than on-premises deployments. I don’t agree. Many cloud services are configured with security by default and offer far more protection than company-owned data centers. Companies should seek cloud services that enable security by default and that offer layered security controls; more security than their own data centers. It’s more than selecting the right Cloud Service Provider. You also need to choose the right service; one that matches the specific needs (including security needs) of your current project. The top CSPs offer multiple IaaS and/or PaaS options that may meet the basic project requirements. While cloud computing grew popular because it’s easy and low cost, ease-of-use and cost are not always the most important factors when choosing the right cloud service. When sensitive data is involved, security needs to be weighed heavily when making service decisions.

I'll leave you with this. Today's computing landscape is extremely complex and constantly changing. But security controls are evolving to address what has been called the extended enterprise (which includes cloud computing and user mobility among other characteristics). Don't leave security in the hands of humans. And apply security in layers to cover as many potential attack vectors as possible. Enable security by default and apply automated checks to ensure that security configuration guidelines are being followed.

Note: Some of the content above is based on my understanding of Oracle security products (encryption, masking, CASB, etc.) Specific techniques or advantages mentioned may not apply to other vendors’ similar solutions.

Security is Not, and Should not be Treated as, a Special Flower

My normal Wednesday lunch yesterday was rudely interrupted by my adequate friend and reasonable security advocate Javvad calling me to ask my opinion on something. This in itself was surprising enough, but the fact that I immediately gave a strong and impassioned response told me this might be something I needed to explore further… The UK … Read More

A Glimpse at Petya Ransomware

Ransomware has become an increasingly serious threat. CryptoWall, TeslaCrypt and Locky are just some of the ransomware variants that infected large numbers of victims. Petya is the newest strain and the most devious among them.

Petya will not only encrypt files but will render the system completely useless, leaving the victim no choice but to pay the ransom: it encrypts the filesystem’s Master File Table (MFT), which leaves the operating system unable to load. The MFT is an essential file in the NTFS file system. It contains a record for every file and directory on the NTFS logical volume, and each record contains all the particulars the operating system needs to boot properly.

Like much other malware, Petya is widely distributed via a job-application spear-phishing email that comes with a Dropbox link, luring the victim by claiming the link contains a self-extracting CV; in fact, it contains a self-extracting executable that later unleashes its malicious behavior.

Petya’s dropper

Petya’s infection behavior

Petya ransomware has two infection stages. The first stage is MBR infection and encryption key generation, including the decryption code used in the ransom messages. The second stage is MFT encryption.

First Stage of Encryption

First infection stage behavior

The MBR infection is performed through straightforward \\.\PhysicalDrive0 manipulation with the help of the DeviceIoControl API. The dropper first retrieves the physical location of the root drive \\.\C: by sending the IOCTL_VOLUME_GET_VOLUME_DISK_EXTENTS control code to the device driver. Then it retrieves the extended disk partition info of \\.\PhysicalDrive0 via the IOCTL_VOLUME_GET_VOLUME_DISK_EXTENTS control code.


The dropper encrypts the original MBR by XORing it with 0x37 and saves it for later use. It also creates 34 disk sectors filled with 0x37. Right after those 34 sectors is Petya’s MFT-infecting code, and the original, now-encrypted MBR is located at sector 56.
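
A single-byte XOR like this is trivially reversible once the key is known. The snippet below is a conceptual re-creation of the operation described above, not Petya’s actual code:

XOR_KEY = 0x37
SECTOR_SIZE = 512

def xor_sector(sector):
    """XOR every byte with 0x37; applying the same operation twice restores the original."""
    return bytes(b ^ XOR_KEY for b in sector)

original_mbr = bytes(SECTOR_SIZE)          # stand-in for the real MBR contents
encrypted_mbr = xor_sector(original_mbr)
assert xor_sector(encrypted_mbr) == original_mbr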

Infected disk view

Original encrypted MBR

After the MBR infection, it intentionally crashes the system by triggering NtRaiseHardError. This causes a BSOD and a restart, which makes the machine boot using the infected MBR.

Code snippet triggering the BSOD

BSOD

When we inspected the dumped image of the disk, we discovered the fake CHKDSK screen it displays, along with the ransom message and ASCII skull art.

Dumped disk image

Second Infection Stage

The stage 2 infection code is written as 16-bit code and uses BIOS interrupt calls.

Upon system boot-up, Petya’s malicious code, located at sector 34, is loaded into memory. It first determines whether the system is already infected by checking whether the first byte of the sector is 0x0. If the system is not yet infected, it displays the fake CHKDSK screen.

Fake CHKDSK

The screen shown in Figure 8 means that the MFT has already been encrypted using the Salsa20 algorithm (a conceptual sketch of Salsa20 encryption follows the figures below).

Figure 8

The victim will see this screen upon boot.

Ransom message and instructions
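
For reference, the sketch below shows what Salsa20 encryption of a data block looks like using the PyCryptodome library. It is a conceptual illustration only; Petya ships its own 16-bit implementation and key handling, which this does not reproduce.

from Crypto.Cipher import Salsa20          # pip install pycryptodome
from Crypto.Random import get_random_bytes

key = get_random_bytes(32)                 # placeholder 256-bit key
nonce = get_random_bytes(8)                # placeholder 64-bit nonce
cipher = Salsa20.new(key=key, nonce=nonce)

mft_record = bytes(1024)                   # stand-in for a 1 KB MFT record
encrypted_record = cipher.encrypt(mft_record)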

Petya Ransomware Page

The webpage where the victim accesses their personal decryption key is protected against bots and contains information about when the Petya ransomware project was launched, warnings on what not to do when recovering files, and an FAQ page. The page is surprisingly user-friendly and shows the days left before the ransom price is doubled.

Ransom page captcha

Petya’s homepage

It also contains news feeds, including different blogs and news from AV companies warning about Petya.

News feed screenshots

They also provide a step-by-step process for paying the ransom, including instructions on how to purchase Bitcoin. Web-based support is included too, in case the victim encounters problems with the transaction. Petya’s ransom is also a lot cheaper compared to other ransomware.

Petya payment web pages (steps 1–4)

On step 4 of the payment procedure, the “next” button is disabled until the operators have confirmed that they received the payment.

Petya’s support page

Below is a shot of ThreatTrack’s ThreatSecure Network dashboard catching Petya. Tools like ThreatSecure can detect and disrupt attacks in real time.

ThreatSecure Network catching Petya ransomware

The post A Glimpse at Petya Ransomware appeared first on ThreatTrack Security Labs Blog.

“And the winner is… Compliance!”

Disclaimer: My comments below are based upon quotes from both Twitter and The Times of London on the UK’s TalkTalk breach; as a result the subsequent investigation and analysis may find that some of the assertions are in fact incorrect. I will post clarifying statements should this happen to be the case. I am not … Read More

To Reform and Institutionalize Research for Public Safety (and Security)

On October 3rd, 2014 a petition appeared on the Petitions.WhiteHouse.gov website titled "Unlocking public access to research on software safety through DMCA and CFAA reform". I encourage you to go read the text of the petition yourself.

While I believe that on the whole the CFAA and, more urgently, the DMCA need dramatic reforms if not to be flat-out dumped, I'm just not sure I'm completely on board with where this idea is going. I've discussed my displeasure with the CFAA on a few of our recent podcasts if you follow our Down the Security Rabbithole Podcast series, and I would likely throw a party if the DMCA were repealed tomorrow - but unlocking "research" broadly is dangerous.

There is no doubt in my mind that security research is critical in exposing safety and security issues in matters that affect the greater public good. However, let's not confuse legitimate research with thinly veiled extortion or a license to hack at will. We can all remember the incident Apple had where a hacker purportedly had exposed a flaw in their online forums, then to prove his point he exploited the vulnerability and extracted data of real users. All in the name of "research" right? I don't think so.

You see, what a recent conversation with Shawn Tuma has taught me is that under the CFAA we have one of these "I'll know it when I see it" conditions where a prosecuting attorney can choose to either go after someone, or look the other way if they believe they were acting in good faith and for the public good... or some such. This type of power makes me uncomfortable as it gives that prosecuting attorney way too much room. Room for what you ask? How about room to be swayed by a big corporation... I'm looking at you AT&T.

Let me lay out a scenario for you. Say you are a security professional interested in home automation and alarm systems. You purchase one, and begin to conduct research into the types of vulnerabilities one of these things is open to - since you'll be installing it in your home and all. You uncover some major design flaws, and maybe even a way to remotely disable the home alarm feature on thousands of units across the country. You want to notify the company, get them to fix the issue, and maybe get a little by-line credit for it. Only the company slaps a DMCA lawsuit on you for reverse engineering their product and you're in hot water. Clearly they have more money and attorneys than you do. Your choices are few - drop the research or face criminal prosecution. Odds are you're not even getting a choice.

In that scenario - it's clear that reforms are needed. Crystal clear, in fact.

The issue is we need to protect legitimate research from prosecutorial malfeasance while still allowing for laws to protect intellectual property and a company's security. So you see, the issue isn't as simple as opening up research, but much more subtle and deliberate.

How do we limit the law and protect legitimate research, while allowing for the protections companies still deserve? I think the answer lies in how we define a researcher. I propose that we require researchers to declare their research and its intent and draft ethical guidelines which can be agreed upon (and enforced on both ends) between the researcher and the organization being researched. There must be rules of engagement, and rules for "responsible and coordinated disclosure". The laws must be tweaked to shield the researcher who has declared intent and is following the rules of engagement from being prosecuted by a company which is simply trying to skirt responsibility for safety, privacy and security. Furthermore, there must be provisions for matters that affect the greater good - which companies simply cannot opt out of.

Now, if you ask me if I believe this will happen any time soon, that's another matter entirely. Big companies will use their lobbying power to make sure this type of reform never happens, because it simply doesn't serve their self-interest. Having seen first-hand the inner workings of a large enterprise technology company - I know exactly how much profit is valued over security (or anything else, really). Profit now, and maybe no one will notice the big gaping holes later. That's just how it is in real life. But when public safety comes into play I think we will see a few major incidents where we have loss of life directly attributed to security flaws before we see any sort of reform. Of course when we do have serious incidents, they'll simply go after the hackers and shed any responsibility. That's just how these things work.

So in closing - I think there is a lot of work to be done here. First we need to more closely define and create formal understanding of security research. Once we're comfortable with that, we need to refine the CFAA and maybe get rid of the DMCA - to legitimize security research into the areas that affect public safety, privacy and security.

The Evolution of Mobile Security

Today, I posted a blog entry to the Oracle Identity Management blog titled Analyzing How MDM and MAM Stack Up Against Your Mobile Security Requirements. In the post, I walk through a quick history of mobile security starting with MDM, evolving into MAM, and providing a glimpse into the next generation of mobile security where access is managed and governed along with everything else in the enterprise. It should be no surprise that's where we're heading but as always I welcome your feedback if you disagree.

Here's a brief excerpt:
Mobile is the new black. Every major analyst group seems to have a different phrase for it but we all know that workforces are increasingly mobile and BYOD (Bring Your Own Device) is quickly spreading as the new standard. As the mobile access landscape changes and organizations continue to lose more and more control over how and where information is used, there is also a seismic shift taking place in the underlying mobile security models.
Mobile Device Management (MDM) was a great first response by an Information Security industry caught on its heels by the overwhelming speed of mobile device adoption. Emerging at a time when organizations were purchasing and distributing devices to employees, MDM provided a mechanism to manage those devices, ensure that rogue devices weren’t being introduced onto the network, and enforce security policies on those devices. But MDM was as intrusive to end-users as it was effective for enterprises.
Continue Reading

RSA Conference 2014

I'm at the RSA Conference this week. I considered the point of view that perhaps there's something to be said for abstaining this year but ultimately my decision to maintain course was based on two premises: (1) RSA didn't know the NSA had a backdoor when they made the arrangement and (2) The conference division doesn't have much to do with RSA's software group.

Anyway, my plan is to take notes and blog or tweet about what I see. Of course, I'll primarily be looking at Identity and Access technologies, which is only a subset of Information Security. And I'll be looking for two things: Innovation and Uniqueness. If your company has a claim on either of those in IAM solutions, please try to catch my attention.

IAM for the Third Platform

As more people are using the phrase "third platform", I'll assume it needs no introduction or explanation. The mobile workforce has been mobile for a few years now. And most organizations have moved critical services to cloud-based offerings. It's not a prediction, it's here.

The two big components of the third platform are mobile and cloud. I'll talk about both.

Mobile

A few months back, I posed the question "Is MAM Identity and Access Management's next big thing?" and since I did, it's become clear to me that the answer is a resounding YES!

Today, I came across a blog entry explaining why Android devices are a security nightmare for companies. The pain is easy to see. OS Updates and Security Patches are slow to arrive and user behavior is, well... questionable. So organizations should be concerned about how their data and applications are being accessed across this sea of devices and applications. As we know, locking down the data is not an option. In the extended enterprise, people need access to data from wherever they are on whatever device they're using. So, the challenge is to control the flow of information and restrict it to proper use.

So, here's a question: is MDM the right approach to controlling access for mobile users? Do you really want to stand up a new technology silo that manages end-user devices? Is that even practical? I think certain technologies live a short life because they quickly get passed over by something new and better (think electric typewriters). MDM is one of those. Although it's still fairly new and good at what it does, I would make the claim that MDM is antiquated technology. In a BYOD world, people don't want to turn control of their devices over to their employers. The age of enterprises controlling devices went out the window with Blackberry's market share.

Containerization is where it's at. With App Containerization, organizations create a secure virtual workspace on mobile devices that enables corporate-approved apps to access, use, edit, and share corporate data while protecting that data from escape to unapproved apps, personal email, OS malware, and other on-device leakage points. For enterprise use-case scenarios, this just makes more sense than MDM. And many of the top MDM vendors have validated the approach by announcing MAM offerings. Still, these solutions maintain a technology silo specific to remote access which doesn't make much sense to me.

As an alternate approach, let's build MAM capabilities directly into the existing Access Management platform. Access Management for the third platform must accommodate for mobile device use-cases. There's no reason to have to manage mobile device access differently than desktop access. It's the same applications, the same data, and the same business policies. User provisioning workflows should accommodate for provisioning mobile apps and data rights just like they've been extended to provision Privileged Account rights. You don't want or need separate silos.

Cloud

The same can be said, for cloud-hosted apps. Cloud apps are simply part of the extended enterprise and should also be managed via the enterprise Access Management platform.

There's been a lot of buzz in the IAM industry about managing access (and providing SSO) to cloud services. There have even been a number of niche vendors pop up that provide that as their primary value proposition. But, the core technologies for these stand-alone solutions are nothing new. In most cases, it's basic federation. In some cases, it's ESSO-style form-fill. But there's no magic to delivering SSO to SaaS apps. In fact, it's typically easier than SSO to enterprise apps because SaaS infrastructures are newer and support newer standards and protocols (SAML, REST, etc.)

My Point

I guess if I had to boil this down, I'm really just trying to dispel the myths about mobile and cloud solutions. When you get past the marketing jargon, we're still talking about Access Management and Identity Governance. Some of the new technologies are pretty cool (containerization solves some interesting, complex problems related to BYOD). But in the end, I'd want to manage enterprise access in one place with one platform. One Identity, One Platform. I wouldn't stand up an IDaaS solution just to have SSO to cloud apps. And I wouldn't want to introduce an MDM vendor to control access from mobile devices.

The third platform simply extends the enterprise beyond the firewall. The concept isn't new and the technologies are mostly the same. As more and newer services adopt common protocols, it gets even easier to support increasingly complex use-cases. An API Gateway, for example, allows a mobile app to access legacy mainframe data over REST protocols. And modern Web Access Management (WAM) solutions perform device fingerprinting to increase assurance and reduce risk while delivering an SSO experience. Mobile Security SDKs enable organizations to build their own apps with native security that's integrated with the enterprise WAM solution (this is especially valuable for consumer-facing apps).

And all of this should be delivered on a single platform for Enterprise Access Management. That's third-platform IAM.

Virtual Directory as Database Security

I've written plenty of posts about the various use-cases for virtual directory technology over the years. But, I came across another today that I thought was pretty interesting.

Think about enterprise security from the viewpoint of the CISO. There are numerous layers of overlapping security technologies that work together to reduce risk to a point that's comfortable. Network security, endpoint security, identity management, encryption, DLP, SIEM, etc. But even when these solutions are implemented according to plan, I still see two common gaps that need to be taken more seriously.

One is control over unstructured data (file systems, SharePoint, etc.). The other is back door access to application databases. There is a ton of sensitive information exposed through those two avenues that isn't protected by the likes of SIEM solutions or IAM suites. Even DLP solutions tend to focus on perimeter defense rather than who has access. STEALTHbits has solutions to fill the gaps for unstructured data and for Microsoft SQL Server so I spend a fair amount of time talking to CISOs and their teams about these issues.

While reading through some IAM industry materials today, I found an interesting write-up on how Oracle is using its virtual directory technology to solve the problem for Oracle database customers. Oracle's IAM suite leverages Oracle Virtual Directory (OVD) as an integration point with an Oracle database feature called Enterprise User Security (EUS). EUS enables database access management through an enterprise LDAP directory (as opposed to managing a spaghetti mapping of users to database accounts and the associated permissions.)

By placing OVD in front of EUS, you get instant LDAP-style management (and IAM integration) without a long, complicated migration process. Pretty compelling use-case. If you can't control direct database permissions, your application-side access controls seem less important. Essentially, you've locked the front door but left the back window wide open. Something to think about.

Performing Clean Active Directory Migrations and Consolidations


Active Directory Migration Challenges

Over the past decade, Active Directory (AD) has grown out of control. It may be due to organizational mergers or disparate Active Directory domains that sprouted up over time, but many AD administrators are now looking at dozens of Active Directory forests and even hundreds of AD domains wondering how it happened and wishing it was easier to manage on a daily basis.

One of the top drivers for AD Migrations is enablement of new technologies such as unified communications or identity and access management. Without a shared and clearly articulated security model across Active Directory domains, it’s extremely difficult to leverage AD for authentication to new business applications or to establish the related business rules that may be based on AD attributes or security group memberships.

Domain consolidation is not a simple task. Whether you're moving from one platform to another, doing some AD security remodeling, or just consolidating domains for improved management and reduced cost, there are numerous steps, lots of unknowns and an overwhelming feeling that you might be missing something. Sound familiar?

One of the biggest fears in Active Directory migration projects is that business users will lose access to their critical resources during the migration. To reduce the likelihood of that occurring, many project leaders choose to enable a dirty migration; they enable historical SIDs which carry old credentials and group memberships from the source domain and apply them to the new domain. Unfortunately, enabling historical SIDs proliferates one of the main challenges that initially drove the migration project. The dirty migration approach maintains the various security models that have been implemented over the years making AD difficult to manage and near impossible to understand who has what rights across the environment.

Clean Active Directory Migrations

The alternative to a dirty migration is to disallow historical SIDs and thereby enable a clean migration where rights are applied as-needed in an easy-to-manage and well articulated security model. Security groups are applied on resources according to an intentional model that is defined up-front and permissions are limited to a least-privilege model where only those who require rights actually get them.

All consolidation or migration projects aren't the same. The motivations differ, the technologies differ, and the Active Directory organizational structure and assets differ wildly. Most solutions on the market provide point A to point B migrations of Active Directory assets. This type of migration often contributes to making the problem worse over time. There's nothing wrong with using an Active Directory tool to help you perform an AD forest or domain migration, but knowing which assets to move and how to structure or even restructure them in the target domain is critical.

Enabling a clean migration and transforming the Active Directory security model requires a few steps to be followed. It starts with assessment and cleanup of the source Active Directory environments. You should assess what objects are out there, how they’re being used, and how they’re currently organized. Are there dormant user accounts or unused computer objects? Are there groups with overlapping membership? Are there permissions that are unused or inappropriate? Are there toxic or high-risk conditions in the environment? This type of intelligence enables visibility into which objects you need to move, how they're structured, how the current domain compares to the target domain, and where differences exist in GPO policies, schema, and naming conventions. The dormant and unused objects as well as any toxic or high-risk conditions can be remediated so that those conditions aren’t propagated to the target environment.
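
As an example of the kind of assessment query involved, the sketch below uses the ldap3 library to list user accounts that haven't logged on in roughly 180 days. The server name, credentials, and search base are placeholders, and lastLogonTimestamp is only replicated periodically, so treat the results as approximate.

from datetime import datetime, timedelta, timezone
from ldap3 import Server, Connection, ALL   # pip install ldap3

# Placeholder connection details -- substitute your own DC and service account.
conn = Connection(Server("dc01.example.com", get_info=ALL),
                  user="EXAMPLE\\svc_audit", password="********", auto_bind=True)

# lastLogonTimestamp is a Windows FILETIME: 100-ns intervals since 1601-01-01.
cutoff = datetime.now(timezone.utc) - timedelta(days=180)
filetime = int((cutoff - datetime(1601, 1, 1, tzinfo=timezone.utc)).total_seconds() * 10**7)

conn.search("DC=example,DC=com",
            f"(&(objectCategory=person)(objectClass=user)(lastLogonTimestamp<={filetime}))",
            attributes=["sAMAccountName", "lastLogonTimestamp"])
for entry in conn.entries:
    print(entry.sAMAccountName, entry.lastLogonTimestamp)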

Once the initial assessment and cleanup is complete, a gap-analysis should be performed to understand where the current state differs from the intended model. Where possible, the transformation should be automated. Security groups can be created, for example, based on historical user activity so that group membership is determined by actual need. This is a key requirement for numerous legal regulations.

The next step is to perform a deep scan into the Active Directory forests and domains that will be consolidated and look at server-level permissions and infrastructure across Active Directory, File Systems, Security Policies, SharePoint, SQL Server, and more. This enables the creation of business rules that will transform existing effective permissions into the target model while adhering to new naming conventions and group utilization. Much of this transformation should be automated to avoid human error and reduce effort.

Maintaining a Clean Active Directory

Once the migration or consolidation project is complete and adherence to the intended security model has been enforced, it’s vital that a program is in place to maintain Active Directory in its current state. There are a few capabilities that can help achieve this goal.

First, a mandatory periodic audit should be enforced. Security Group owners should confirm that groups are being used as intended. Resource owners should confirm that the right people have the right level of access to their resources. Business managers should confirm that their people have access to the right resources. These reviews should be automated and tracked to ensure that they are completed thoroughly and on time.

Second, tools should be implemented that provide visibility into the environment answering questions as they come up. When a security administrator needs to see how a user is being granted rights to something they should perhaps not have, they’ll need tools that provide answers in a timely fashion.

Third, a system-wide scan should be conducted regularly to identify any toxic or high-risk conditions that occur over time. For example, if a user account becomes dormant, notification should be sent out according to business rules. Or if a group is nested within itself perhaps ten layers deep, you want an automated solution to discover that condition and provide related reporting.
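
Detecting self-nesting is essentially cycle detection over the group-membership graph. The sketch below is a small, self-contained illustration on hypothetical group data rather than a production AD query:

def find_circular_nesting(group_members):
    """group_members maps each group to the groups nested inside it.
    Returns one cycle as a list of group names, or None if no cycle exists."""
    def visit(group, path, seen):
        if group in path:
            return path[path.index(group):] + [group]
        if group in seen:
            return None
        seen.add(group)
        for child in group_members.get(group, ()):
            cycle = visit(child, path + [group], seen)
            if cycle:
                return cycle
        return None

    seen = set()
    for group in group_members:
        cycle = visit(group, [], seen)
        if cycle:
            return cycle
    return None

nested = {"HR-All": ["HR-East"], "HR-East": ["HR-Managers"], "HR-Managers": ["HR-All"]}
print(find_circular_nesting(nested))  # ['HR-All', 'HR-East', 'HR-Managers', 'HR-All']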

Finally, to ensure adherence to Active Directory security policies, a real-time monitoring solution should be put in place to enforce rules, prevent unwanted changes via event blocking, and to maintain an audit trail of critical administrative activity.

Complete visibility across the entire Active Directory infrastructure enables a clean AD domain consolidation while making life easier for administrators, improving security, and enabling adoption of new technologies.

About the Author

Matt Flynn has been in the Identity & Access Management space for more than a decade. He’s currently a Product Manager at STEALTHbits Technologies where he focuses on Data & Access Governance solutions for many of the world’s largest, most prestigious organizations. Prior to STEALTHbits, Matt held numerous positions at NetVision, RSA, MaXware, and Unisys where he was involved in virtually every aspect of identity-related projects from hands-on technical to strategic planning. In 2011, SYS-CON Media added Matt to their list of the most powerful voices in Information Security.

Reduce Risk by Monitoring Active Directory

Active Directory (AD) plays a central role in securing networked resources. It typically serves as the front gate allowing access to the network environment only when presented with valid credentials. But Active Directory credentials also serve to grant access to numerous resources within the environment. For example, AD group memberships are commonly used to manage access to unstructured data resources such as file systems and SharePoint sites. And a growing number of enterprise applications leverage AD credentials to grant access to their resources as well.

Active Directory Event Monitoring Challenges

Monitoring and reporting on Active Directory accounts, security groups, access rights, administrative changes, and user behavior can feel like a monumental task. Event monitoring requires an understanding of which events are critical, where those events occur, what factors might indicate increased risk, and what technologies are available to capture those events.

Understanding which events to ignore is as important as knowing which are critical to capture. You don't need immediate alerts on every AD user or group change that takes place, but you do want visibility into critical high-risk changes: Who is adding AD user accounts? ...adding a user to an administrative AD group? ...making Group Policy (GPO) changes?
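
The filtering logic itself can be quite simple once events are normalized. The sketch below works on hypothetical, already-parsed change events (it is not tied to any particular monitoring product or event log format) and alerts only on the high-risk cases called out above:

HIGH_RISK_GROUPS = {"Domain Admins", "Enterprise Admins", "Schema Admins"}

def is_high_risk(event):
    """Decide whether a simplified, illustrative AD change event deserves an alert."""
    if event["type"] == "user_created":
        return True
    if event["type"] == "group_member_added" and event.get("group") in HIGH_RISK_GROUPS:
        return True
    if event["type"] == "gpo_modified":
        return True
    return False

events = [
    {"type": "group_member_added", "group": "Marketing", "actor": "jsmith"},
    {"type": "group_member_added", "group": "Domain Admins", "actor": "jsmith"},
    {"type": "gpo_modified", "gpo": "Default Domain Policy", "actor": "jsmith"},
]
alerts = [e for e in events if is_high_risk(e)]
print(alerts)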

Active Directory administrators face a complex challenge that requires visibility into events as well as infrastructure to ensure proper system functionality. A complete AD monitoring solution doesn't stop at user and group changes. It also looks at Domain Controller status: which services are running, disk space issues, patch levels, and similar operational and infrastructure needs. There are numerous technical requirements to get that level of detail.

AD administrators require full access in the environment which presents another set of challenges. How do you enable administrators to do their job while controlling certain high-risk activity such as snooping on sensitive data or accidentally making GPO changes to important security policies? Monitoring Active Directory effectively includes either preventing unintended activities through change blocking or deterring activities through visible monitoring and alerting.

Monitoring Active Directory Effectively

Effective audit and monitoring solutions for Active Directory address the numerous challenges discussed above by providing a flexible platform that covers typical scenarios out-of-the-box without customization but also allows extensibility to accommodate the unique requirements of the environment.

Data collection is the cornerstone of any Active Directory monitoring and audit solution. Collection must be automated, reliable, and non-intrusive on the target environment. Data that can be collected remotely without agents should be. But, when requirements call for at-the-source monitoring, for example when you want to see WHO did it, what machine they came from, capture before-and-after values, or block certain activities, a real-time agent should be available to accommodate those needs. The data collection also needs to scale to the environment’s size and performance requirements.

Once data has been collected, both batch and real-time per-event analysis are required to meet common requirements. For example, you may want an alert on changes to administrative groups but you don’t want alerts on all group changes. Or you may want a report that highlights all empty groups or groups with improper nesting conditions. This analysis should provide intelligence out-of-the-box based on industry expertise and commonly requested reporting. But it should also enable unique business questions to be answered. Every organization uses Active Directory in unique ways and custom reporting is an extremely common requirement.
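As a sketch of what the batch side of that analysis might look like, here's a small example that flags empty groups and improper (circular) nesting over a collected group-membership map. The sample data and function names are illustrative, not any product's actual rule set.

```python
# Sketch: batch analysis over collected group data.
# group_members maps group name -> set of direct members (users or other groups);
# the sample data is illustrative.
group_members = {
    "Domain Admins": {"jdoe"},
    "Legacy-App-Admins": set(),                 # empty group: candidate for cleanup
    "Finance-All": {"Finance-US", "Finance-EU"},
    "Finance-US": {"Finance-All"},              # circular nesting: improper condition
    "Finance-EU": {"asmith"},
}

def empty_groups(groups):
    """Groups with no members at all."""
    return [name for name, members in groups.items() if not members]

def circular_nesting(groups):
    """Groups that (transitively) contain themselves."""
    flagged = []
    for start in groups:
        seen, stack = set(), list(groups[start])
        while stack:
            member = stack.pop()
            if member == start:
                flagged.append(start)
                break
            if member in groups and member not in seen:
                seen.add(member)
                stack.extend(groups[member])
    return flagged

print(empty_groups(group_members))      # ['Legacy-App-Admins']
print(circular_nesting(group_members))  # ['Finance-All', 'Finance-US']
```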

Finally, once data collection and analysis phases have been completed, AD monitoring solutions should provide a flexible reporting interface that provides access to the intelligence that has been cultivated. As with collection and analysis, the reporting functionality should include commonly requested reports with no customization but should also enable report customization and extensibility. Reporting should include web-accessible reports, search and filtering, access to the raw and post-analysis data, and email or other alerting.

An effective Active Directory monitoring solution provides deep insight on all things Active Directory. It should enable user, group and GPO change detection as well as reporting on anomalies and high-risk conditions. It should also provide deep analysis on users, groups, OUs, computer objects, and Active Directory infrastructure. Because the types of reports required by different teams (such as security and operations) may differ, it may be prudent to provide slightly different interfaces or report sets for the various intended audiences.

When real-time monitoring of Active Directory Users, Groups, OUs, and other changes (including activity blocking) is important, the solution should provide advanced filtering and response on nearly all Active Directory events, as well as an audit trail of changes and attempts with all relevant information.

Benefits of Active Directory Monitoring

The three most common business drivers for Active Directory monitoring are improved security, improved audit response, and simplified administration. Active Directory audit and monitoring solutions make life easier for administrators while improving security across the network environment. This is especially important as AD becomes increasingly integrated into enterprise applications.
Some common use-cases include:
  • Monitor Active Directory user accounts for create, modify, and delete events. Capture the user account making the change along with the affected account information, changed attributes, time stamp, and more. This monitoring capability acts independently of the Security Event log and provides a non-repudiable record.
  • Monitor Active Directory group memberships and provide reports and/or alerts in real time when memberships change on important groups such as the Domain Admins group.
  • Report on failed attempts in addition to successful attempts. Filter on specific types of events and ignore others.
  • Report on Active Directory dormant accounts, empty groups, unused groups, large groups, and other high-risk conditions to empower administrators with actionable information (see the sketch after this list).
  • Automate event response based on policy: send email alerts, trigger remediation processes, or record the event to a file or database.
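For the dormant-account report mentioned above, here's a rough sketch of the core calculation. lastLogonTimestamp is stored as a Windows FILETIME (100-nanosecond intervals since January 1, 1601 UTC); the sample accounts, the 90-day threshold, and the helper names are illustrative, and keep in mind that lastLogonTimestamp replicates lazily, so it's an approximation of the true last logon.

```python
# Sketch: flag dormant accounts from collected lastLogonTimestamp values.
# lastLogonTimestamp is a Windows FILETIME: 100-ns intervals since 1601-01-01 UTC.
# Sample accounts and the 90-day threshold are illustrative.
from datetime import datetime, timedelta, timezone

FILETIME_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

def filetime_to_datetime(filetime: int) -> datetime:
    return FILETIME_EPOCH + timedelta(microseconds=filetime // 10)

def datetime_to_filetime(dt: datetime) -> int:
    return int((dt - FILETIME_EPOCH).total_seconds() * 10_000_000)

def dormant_accounts(accounts, max_idle_days=90):
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_idle_days)
    return [
        acct["sAMAccountName"]
        for acct in accounts
        if filetime_to_datetime(acct["lastLogonTimestamp"]) < cutoff
    ]

now = datetime.now(timezone.utc)
accounts = [
    {"sAMAccountName": "jdoe",    "lastLogonTimestamp": datetime_to_filetime(now - timedelta(days=5))},
    {"sAMAccountName": "old_svc", "lastLogonTimestamp": datetime_to_filetime(now - timedelta(days=400))},
]
print(dormant_accounts(accounts))  # ['old_svc']
```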
Active Directory Monitoring and Reporting doesn't need to feel complicated or overwhelming. Solutions are available to simplify the process while providing increased security and reduced risk.

About the Author

Matt Flynn has been in the Identity & Access Management space for more than a decade. He’s currently a Product Manager at STEALTHbits Technologies where he focuses on Data & Access Governance solutions for many of the world’s largest, most prestigious organizations. Prior to STEALTHbits, Matt held numerous positions at NetVision, RSA, MaXware, and Unisys where he was involved in virtually every aspect of identity-related projects from hands-on technical to strategic planning. In 2011, SYS-CON Media added Matt to their list of the most powerful voices in Information Security.

Unstructured Data into Identity & Access Governance

I've written before about the gap in identity and access management solutions related to unstructured data.

When I define unstructured data for people in the Identity Management space, the key distinguishing characteristic I point to is that there is no entitlement store to which an IAM or IAG solution can connect to gather entitlement information.

On file systems, for example, entitlements are distributed across shares and folders, inherited through the file tree structure, and applied through group memberships that may be many levels deep; there's no common security model to make sense of it all.
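Because those group memberships can be nested many levels deep, figuring out who effectively holds a permission means expanding groups transitively. Here's a minimal sketch of that expansion, using a made-up group-to-members mapping rather than any product's actual model.

```python
# Sketch: expand nested group memberships to find the effective users behind a permission.
# groups maps a group name to its direct members (users or other groups); sample data is illustrative.
groups = {
    "FileShare-Finance-RW": {"Finance-Managers", "jdoe"},
    "Finance-Managers": {"Finance-Leads", "asmith"},
    "Finance-Leads": {"bnguyen"},
}

def effective_members(group, groups, _seen=None):
    """Return all users reachable from `group`, following nested groups and ignoring cycles."""
    seen = _seen if _seen is not None else set()
    users = set()
    for member in groups.get(group, set()):
        if member in groups:               # member is itself a group: recurse
            if member not in seen:
                seen.add(member)
                users |= effective_members(member, groups, seen)
        else:                              # member is a user
            users.add(member)
    return users

# The ACL on the share names a single group, but three levels down it grants these users:
print(effective_members("FileShare-Finance-RW", groups))  # {'jdoe', 'asmith', 'bnguyen'}
```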

STEALTHbits has the best scanner in the industry (I've seen it go head-to-head in POCs) for gathering users, groups, and permissions across unstructured data environments, along with the most flexible analysis capabilities: it (1) uncovers high-risk conditions (such as open file shares, unused permissions, admin snooping, and more), (2) identifies content owners, and (3) makes it very simple to consume entitlement information (by user, by group, or by resource).

It's a gap in the identity management landscape and it's beginning to show up on customer agendas. Let us know if we can help. Now, here's a pretty picture:

STEALTHbits adds unstructured data into IAM and IAG solutions.

Active Directory Unification

[This is a partial re-post of an entry on the STEALTHbits blog. I think it's relevant here and open for discussion on the concepts surrounding clean migrations and AD unification.]

It’s no secret that over the past decade, Active Directory has grown out of control across many organizations. Whether it’s due to organizational mergers or disparate Active Directory domains that sprouted up over time, you may find yourself looking at dozens or even hundreds of domains and realizing it’s time to consolidate. It probably feels overwhelming. But despite the effort in front of you, there’s an easy way and a right way.

Domain consolidation is not a simple task. Whether you're moving from one platform to another, trying to implement a new security model, or just consolidating domains for improved management and reduced cost, there are numerous steps, lots of unknowns, and an overwhelming feeling that you might be missing something. Sound familiar?

According to Gartner analyst Andrew Walls, “The allure of a single AD forest with a simple domain design is not fool’s gold. There are real benefits to be found in a consolidated AD environment. A shared AD infrastructure enables user mobility, common user provisioning processes, consolidated reporting, unified management of machines, etc.”

Walls goes on to discuss the politics, cost justification, and complexity of these projects, noting that “An AD consolidation has to unite and rationalize the ID formats, password policy objects, user groups, group policy objects, schema designs and application integration methods that have grown and spread through all of the existing AD environments. At times, this can feel like spring cleaning at the Augean stables. Of course, if you miss something, users will not be able to log in, or find their file shares, or access applications. No pressure.”

Walls offers advice on how to avoid some of the pain: “You fight proliferation of AD at every turn and realize that consolidation is not a onetime event. The optimal design for AD is a single domain within a single forest. Any deviation from this approach should be justified on the basis of operational requirements that a unified model cannot possibly support.”

What does this mean for you? Well, the most significant take-away from Walls’ advice is that it’s not a one-time event. AD Unification is an ongoing effort. You don’t simply move objects from point A to point B and then pack it in for the day. The easy way fails to meet the core objectives of an improved security model, simplified management, reduced cost, and a common provisioning process (think integration with Identity Management solutions).

If you take everything from three source domains and simply move it all to a target domain, you haven’t achieved any of the objectives other than now having a single Active Directory. There’s a good chance that your security model will remain fragmented, management will become more difficult, and your user provisioning processes will require additional logic to accommodate the new mess. On a positive note, if this model is your intent, there are numerous solutions on the market that will help.

STEALTHbits, of course, embraces the right way. “Control through Visibility” is about improving your security posture and your ability to manage IT by increasing your visibility into the critical infrastructure.


If you'd like to learn more about the solution, you can start by reading the rest of this blog entry or by contacting STEALTHbits.

Data Protection ROI

I came across a couple of interesting articles today related to ROI around data protection. I recently wrote a whitepaper for STEALTHbits on the Cost Justification of Data Access Governance. It's often top of mind for security practitioners who know they need help but have trouble justifying the acquisition and implementation costs of related solutions. Here are today's links:

KuppingerCole -
The value of information – the reason for information security

Verizon Business Security -
Ask the Data: Do “hacktivists” do it differently?

Visit the STEALTHbits site for information on Access Governance related to unstructured data and to track down the paper on cost justification.

Filling the Gap in Identity and Access Governance

Identity and Access Management: Filling the Gap in Identity and Access Governance

Traditional identity solutions focus on access to applications, but that misses as much as 80 percent of corporate data.

We’ve entered the age of access governance. Organizations need to know who has access to what data and how they were granted that access. Identity and Access Governance (IAG) solutions address these issues while managing enterprise access. They provide visibility into access, policy and role management, and risk assessment, and they facilitate periodic entitlement reviews of access across numerous systems. Most enterprise IAG solutions are missing a key piece of the puzzle, though: unstructured data.

[Read the full article in TechNet Magazine]