Category Archives: automation

Organizations need to understand risks and ethics related to AI

Despite highly publicized risks of data-sharing and AI, from facial recognition to political deepfakes, leadership at many organizations seems to be vastly underestimating the ethical challenges of the technology, NTT DATA Services reveals. Just 12% of executives and 15% of employees say they believe AI will collect consumer data in unethical ways, and only 13% of executives and 19% of employees say AI will discriminate against minority groups. Surveying 1,000 executive-level and non-executive employees across … More

The post Organizations need to understand risks and ethics related to AI appeared first on Help Net Security.

Defense in Diversity

Security has always claimed that “Defense in Depth” is the dominant strategy. As we enter the world of automated workloads at internet-scale, it has become clear that it is in fact “Defense in Diversity” that wins over depth. When dealing with large-scale automated attacks, iteration over the same defense a million times is cheap. However, attacking a million defenses that are slightly different is costly for the threat actor.

It then comes down to this: How can you raise the cost of your adversary's observations and actions without equally raising your own cost as the defender?

As human beings, we have a cognitive limit on things like recollection, working memory, dimensional space, etc. Operating outside of any one of these parameters can be viewed as beyond our peripheral cognition. Machines, however, have no problem operating outside these boundaries, being able to compute, analyze, and respond on a vastly greater scale. That being said, machines also need proper input and interaction from the human element in order to maximize efficiency and help determine things like known good and bad behavior on a network.

The first step to achieving “Defense in Diversity” is learning to identify what elements of your approach to security are human-scale problems and which are machine-scale problems.

Diversity is the countermeasure to determinism. Extreme forms of diversity are feasible for machines but infeasible for humans, so we need to be careful how we apply it in our systems. Keeping these human-level versus machine-level constraints and capabilities in mind, we need to design automation that has machine-scale diversity and operational capacity while remaining operable at human scale by the defenders.
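One way to realize that balance is moving-target style diversification: derive each host's externally visible configuration from a single defender-held secret, so the defender manages one seed (human scale) while every host presents a slightly different surface (machine scale). A minimal sketch, where the parameter names and ranges are purely hypothetical:

```python
import hmac
import hashlib

def diversified_config(secret: bytes, host: str) -> dict:
    """Derive a per-host configuration from one defender-held secret.

    An attacker can no longer iterate one cheap probe across the fleet:
    each host differs, but defenders only manage the single secret.
    """
    digest = hmac.new(secret, host.encode(), hashlib.sha256).digest()
    return {
        "ssh_port": 20000 + int.from_bytes(digest[0:2], "big") % 10000,
        "banner_variant": digest[2] % 5,        # one of 5 service banners
        "rotation_offset_min": digest[3] % 60,  # staggers credential rotation
    }

secret = b"defender-master-seed"
a = diversified_config(secret, "web-01")
b = diversified_config(secret, "web-02")
assert a != b                                     # hosts differ from each other
assert a == diversified_config(secret, "web-01")  # but stay reproducible
```

Because the derivation is deterministic, the diversity never becomes an operational burden: any defender holding the secret can recompute what any host should look like.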

In order to effectively combat an increasingly strategic and varied set of threats, security professionals need to take a more varied approach to defense. Repetitive, static use of an effective technique or tool might keep some adversaries at a disadvantage, or even force some of them to give up outright, but at some point your organization will come across an attacker who not only recognizes your defense patterns, but also knows how to counter or even circumvent them, leaving you defenseless and open to attack.

Take a moment to consider the following: What aspects of your processes or automation techniques could a threat actor use against you? Just because you can automate something for security does not mean you should. Our systems are becoming more and more automation-rich as we move from human-scale operations to machine-scale operations, so it is paramount that we understand how to automate safely and not to the advantage of our attackers. AI and machine learning are an invaluable part of our set of defensive techniques, but there are still scenarios where human-scale ingenuity and reasoning are vital to keeping our information secure.

I encourage you to take some time to assess your organization’s current approach to security and ask yourself some important questions:

  • How deterministic are your defense methods?
  • Are there any methods that you’re currently using that threat actors might be able to abuse or overcome? How would you know threat actors have taken control?
  • What set of processes are human-scale? (manually executed)
  • What set of processes are machine-scale? (automated by machines)

Recognizing how to efficiently balance the human and AI/ML components in your organization, and understanding the advantages each provides, will allow you to better defend against threats and seize victory against whatever foes come your way.

The post Defense in Diversity appeared first on Cisco Blogs.

This could be the key to your return to work strategy

Data analytics can play an important role in helping organizations come up with safe plans for their employees to return to the workplace, according to a data expert. The workplace is going to look very different, with some employees continuing to work from home, said Archana Ramamoorthy, Chief Technology Officer for Workday, at recent ITWC…

The post This could be the key to your return to work strategy first appeared on IT World Canada.

How tech trends and risks shape organizations’ data protection strategy

Trustwave released a report which depicts how technology trends, compromise risks and regulations are shaping how organizations’ data is stored and protected. Data protection strategy The report is based on a recent survey of 966 full-time IT professionals who are cybersecurity decision makers or security influencers within their organizations. Over 75% of respondents work in organizations with over 500 employees in key geographic regions including the U.S., U.K., Australia and Singapore. “Data drives the global … More

The post How tech trends and risks shape organizations’ data protection strategy appeared first on Help Net Security.

How Automation can help you in Managing Data Privacy

The global data privacy landscape is changing, and every day we see new regulations emerge.

These regulations are encouraging organizations to be better custodians of consumers' data and to create a healthier space for data privacy. In order to do so, organizations will need to rework their operations and revamp their processes to comply with these regulations.

According to a report by the International Association of Privacy Professionals, 33% of respondents have considered revamping their technology solutions around data privacy. This is where data privacy management software comes into play: organizations are looking for solutions that can fulfill their data privacy needs while keeping them compliant with data regulations and helping them avoid fines.

Tracking Personal Data

Data is stored in a plethora of internal and external systems, in structured or unstructured form, all across the organization. These systems can even be spread over a wide geographic area, depending on the size of the organization. Retrieving information manually is tedious and time-consuming, not to mention prone to human error.

According to Aoife Harney, Compliance Manager at AON, “One of the most important aspects of any data protection program is having an in-depth and documented knowledge of the what, the why, the where, the who, and the how.”

Different data privacy software products that incorporate data intelligence serve various purposes in an organization. Some deal with cookies and consent, while others focus on breach notification.

Nowadays, organizations need an all-in-one privacy management platform that can address all of these requirements and integrate data privacy into all of their operations:

Compliance Requirements

Data privacy regulations such as the CCPA and GDPR require organizations to take responsibility for their consumers' data. All data privacy regulations impose obligations on businesses to protect consumer privacy by restricting data capture mechanisms, granting consumers rights over their personal data, and introducing accountability into businesses' data policies. They also make data controllers who store and hold data responsible for protecting it from unauthorized disclosure and for informing consumers when their data is breached.

To meet these obligations and stay in compliance with global data privacy regulations, organizations need to revamp the following practices.

  • DSR Fulfillment: Organizations will be met with a plethora of data subject requests and will be required to fulfill them all within a specific time frame, based on the regulations they must comply with. To make this process swift and seamless, organizations will have to automate their DSR fulfillment process.
  • Data Mapping: Organizations have stored immense amounts of data across internal and external systems that may be spread across geographies. To quickly link this data to its owner and avoid delays, data mapping automation plays a quintessential part in complying with any data privacy regulation.
  • Vendor Assessment: Manually assessing your third-party vendors and your own organization is a tedious task that can create bottlenecks and stifle collaboration. Whether you are working with key stakeholders or third-party vendors, an automated system is needed to simplify the assessment process and enable that collaboration.
  • Consent Management: Regulations such as the CCPA and GDPR require organizations to obtain freely given consent from consumers before processing their data. Doing this manually leaves room for human error and consumes time and resources. Organizations need a universal consent capture system that makes the process faster while freeing up resources.
  • Breach Notification: Privacy regulations require organizations to send a notification in the event of a breach. Under the GDPR, for example, Article 33 mandates notifying the supervisory authority within 72 hours of becoming aware of unauthorized access to, use of, or distribution of data. Recognizing a breach and then sending notifications manually makes it virtually impossible to meet that time frame. Automating your breach notification system can save organizations thousands in fines.
  • Privacy Policy Management: A core part of any regulation is the need to revamp an organization's privacy policies so that they align with the regulation's requirements. Organizations will need to revisit their privacy policies and update them according to the guidelines these regulations provide.
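To illustrate why the breach-notification item in particular demands automation, here is a minimal sketch of tracking the GDPR Article 33 72-hour clock. The function names are hypothetical; only the 72-hour window comes from the regulation:

```python
from datetime import datetime, timedelta, timezone

# GDPR Article 33: notify the supervisory authority within 72 hours
# of becoming aware of a personal data breach.
GDPR_NOTIFICATION_WINDOW = timedelta(hours=72)

def notification_deadline(detected_at: datetime) -> datetime:
    """Latest moment at which the supervisory authority must be notified."""
    return detected_at + GDPR_NOTIFICATION_WINDOW

def hours_remaining(detected_at: datetime, now: datetime) -> float:
    """Hours left on the clock (negative means the deadline was missed)."""
    return (notification_deadline(detected_at) - now).total_seconds() / 3600

detected = datetime(2020, 6, 1, 9, 0, tzinfo=timezone.utc)
assert notification_deadline(detected) == datetime(2020, 6, 4, 9, 0, tzinfo=timezone.utc)
```

An automated pipeline would start this clock the moment detection tooling raises a breach event, rather than when a human first reads the ticket.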

Automation: the Future of Compliance

Automation is the future of compliance, and organizations will have to adopt it quickly if they hope to improve their chances of complying with global privacy regulations. Regardless of the current state of the globe, data regulations are still going into effect and being enforced. Organizations that hope to comply need a solution that automates their operations and manages all of the aforementioned privacy requirements.

As Aoife Harney puts it: "Being able to clearly see when a client's personal data was collected, what legal basis is relied upon for that activity, who accesses that information, and when it's appropriate to erase it is incredibly useful to any organization."

Organizations need to find a solution that helps them meet their compliance requirements. Ideally, that solution comes from a vendor that allows flexibility and customization, and that incorporates suggestions from early adopters.

Organizations can also consider SECURITI.ai, which is reputed as a privacy leader offering a one-stop data privacy solution to businesses.

Authors:

Ramiz Shah, Digital Content Producer at SECURITI.ai

Anas Baig, Team Lead at SECURITI.ai

Pierluigi Paganini

(SecurityAffairs – hacking, automation)

The post How Automation can help you in Managing Data Privacy appeared first on Security Affairs.

SecOps teams turn to next-gen automation tools to address security gaps

SOCs across the globe are most concerned with advanced threat detection and are increasingly looking to next-gen automation tools like AI and ML technologies to proactively safeguard the enterprise, Micro Focus reveals. Growing deployment of next-gen tools and capabilities The report’s findings show that over 93 percent of respondents employ AI and ML technologies with the leading goal of improving advanced threat detection capabilities, and that over 92 percent of respondents expect to use or … More

The post SecOps teams turn to next-gen automation tools to address security gaps appeared first on Help Net Security.

Global adoption of data and privacy programs still maturing

The importance of privacy and data protection is a critical issue for organizations, as it extends beyond legal departments to the forefront of an organization's strategic priorities. FairWarning research, based on survey results from more than 550 global privacy and data protection, IT, and compliance professionals, outlines the characteristics and behaviors of advanced privacy and data protection teams. By examining the trends of privacy adoption and maturity across industries, the research uncovers adjustments that … More

The post Global adoption of data and privacy programs still maturing appeared first on Help Net Security.

Most cybersecurity pros believe automation will make their jobs easier

Despite 88% of cybersecurity professionals believing automation will make their jobs easier, younger staffers are more concerned than their veteran counterparts that the technology will replace their roles, according to research by Exabeam. Overall, satisfaction levels continued a three-year positive trend, with 96% of respondents indicating they are happy with their roles and responsibilities and 87% reportedly pleased with salary and earnings. Additionally, there was improvement in gender diversity, with female respondents increasing from 9% … More

The post Most cybersecurity pros believe automation will make their jobs easier appeared first on Help Net Security.

Is the skills gap preventing you from executing your enterprise strategy?

As many business leaders look to close the skills gap and cultivate a sustainable workforce amid COVID-19, an IBM Institute for Business Value (IBV) study reveals less than 4 in 10 human resources (HR) executives surveyed report they have the skills needed to achieve their enterprise strategy. COVID-19 exacerbated the skills gap in the enterprise Pre-pandemic research in 2018 found as many as 120 million workers surveyed in the world’s 12 largest economies may need … More

The post Is the skills gap preventing you from executing your enterprise strategy? appeared first on Help Net Security.

Banks risk losing customers with anti-fraud practices

Many banks across the U.S. and Canada are failing to meet their customers’ online identity fraud and digital banking needs, according to a survey from FICO. Despite COVID-19 quickly turning online banking into an essential service, the survey found that financial institutions across North America are struggling to establish practices that combat online identity fraud and money laundering, without negatively impacting customer experience. For example, 51 percent of North American banks are still asking customers … More

The post Banks risk losing customers with anti-fraud practices appeared first on Help Net Security.

How Tripwire Custom Workflow Automation Can Enhance Your Network Visibility

Tripwire Enterprise is a powerful tool. It provides customers insight into nearly every aspect of their systems and devices. From change management to configuration and compliance, Tripwire can provide “eyes on” across the network. Gathering that vast amount of data for analysis does not come without challenges. Customers have asked for better integration with their […]… Read More

The post How Tripwire Custom Workflow Automation Can Enhance Your Network Visibility appeared first on The State of Security.

SCANdalous! (External Detection Using Network Scan Data and Automation)

Real Quick

In case you’re thrown by that fantastic title, our lawyers made us change the name of this project so we wouldn’t get sued. SCANdalous—a.k.a. Scannah Montana a.k.a. Scanny McScanface a.k.a. “Scan I Kick It? (Yes You Scan)”—had another name before today that, for legal reasons, we’re keeping to ourselves. A special thanks to our legal team who is always looking out for us, this blog post would be a lot less fun without them. Strap in folks.

Introduction

Advanced Practices is known for using primary source data obtained through Mandiant Incident Response, Managed Defense, and product telemetry across thousands of FireEye clients. Regular, first-hand observations of threat actors afford us opportunities to learn intimate details of their modus operandi. While our visibility from organic data is vast, we also derive value from third-party data sources. By looking outwards, we extend our visibility beyond our clients’ environments and shorten the time it takes to detect adversaries in the wild—often before they initiate intrusions against our clients.

In October 2019, Aaron Stephens gave his “Scan’t Touch This” talk at the annual FireEye Cyber Defense Summit (slides available on his Github). He discussed using network scan data for external detection and provided examples of how to profile command and control (C2) servers for various post-exploitation frameworks used by criminal and intelligence organizations alike. However, manual application of those techniques doesn’t scale. It may work if your role focuses on one or two groups, but Advanced Practices’ scope is much broader. We needed a solution that would enable us to track thousands of groups, malware families and profiles. In this blog post we’d like to talk about that journey, highlight some wins, and for the first time publicly, introduce the project behind it all: SCANdalous.

Pre-SCANdalous Case Studies

Prior to any sort of system or automation, our team used traditional profiling methodologies to manually identify servers of interest. The following are some examples. The success we found in these case studies served as the primary motivation for SCANdalous.

APT39 SSH Tunneling

After observing APT39 in a series of intrusions, we determined they frequently created Secure Shell (SSH) tunnels with PuTTY Link to forward Remote Desktop Protocol connections to internal hosts within the target environment. Additionally, they preferred using BitVise SSH servers listening on port 443. Finally, they were using servers hosted by WorldStream B.V.

Independent isolation of any one of these characteristics would produce a lot of unrelated servers; however, the aggregation of characteristics provided a strong signal for newly established infrastructure of interest. We used this established profile and others to illuminate dozens of servers we later attributed to APT39, often before they were used against a target.
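The aggregation idea can be sketched as a tiny helper that folds individually weak filters into one Shodan-style query. The function is hypothetical; the filter values are the APT39 profile described above:

```python
def build_profile_query(characteristics: dict) -> str:
    """Combine individually weak filters into one aggregate Shodan query.

    Any single filter matches many unrelated servers; their intersection
    is a much stronger signal for infrastructure of interest.
    """
    return " ".join(f'{key}:"{value}"' for key, value in characteristics.items())

# The APT39 profile: BitVise SSH servers on port 443 hosted at WorldStream B.V.
apt39 = {"product": "bitvise", "port": "443", "org": "WorldStream B.V."}
query = build_profile_query(apt39)
assert query == 'product:"bitvise" port:"443" org:"WorldStream B.V."'
```

Each additional characteristic narrows the result set, which is exactly what turns a noisy hunch into a trackable profile.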

APT34 QUADAGENT

In February 2018, an independent researcher shared a sample of what would later be named QUADAGENT. We had not observed it in an intrusion yet; however, by analyzing the characteristics of the C2, we were able to develop a strong profile of the servers to track over time. For example, our team identified the server 185.161.208\.37 and domain rdppath\.com within hours of it being established. A week later, we identified a QUADAGENT dropper with the previously identified C2. Additional examples of QUADAGENT are depicted in Figure 1.


Figure 1: QUADAGENT C2 servers in the Shodan user interface

Five days after the QUADAGENT dropper was identified, Mandiant was engaged by a victim that was targeted via the same C2. This activity was later attributed to APT34. During the investigation, Mandiant uncovered APT34 using RULER.HOMEPAGE. This was the first time our consultants observed the tool and technique used in the wild by a real threat actor. Our team developed a profile of servers hosting HOMEPAGE payloads and began tracking their deployment in the wild. Figure 2 shows a timeline of QUADAGENT C2 servers discovered between February and November of 2018.


Figure 2: Timeline of QUADAGENT C2 servers discovered throughout 2018

APT33 RULER.HOMEPAGE, POSHC2, and POWERTON

A month after that aforementioned intrusion, Managed Defense discovered a threat actor using RULER.HOMEPAGE to download and execute POSHC2. All the RULER.HOMEPAGE servers were previously identified due to our efforts. Our team developed a profile for POSHC2 and began tracking their deployment in the wild. The threat actor pivoted to a novel PowerShell backdoor, POWERTON. Our team repeated our workflow and began illuminating those C2 servers as well. This activity was later attributed to APT33 and was documented in our OVERRULED post.

SCANdalous

Scanner, Better, Faster, Stronger

Our use of scan data was proving wildly successful, and we wanted to use more of it, but we needed to innovate. How could we leverage this dataset and methodology to track not one or two, but dozens of active groups that we observe across our solutions and services? Even if every member of Advanced Practices was dedicated to external detection, we would still not have enough time or resources to keep up with the amount of manual work required. But that’s the key word: Manual. Our workflow consumed hours of individual analyst actions, and we had to change that. This was the beginning of SCANdalous: An automated system for external detection using third-party network scan data.

A couple of nice things about computers: They’re great at multitasking, and they don’t forget. The tasks that were taking us hours to do—if we had time, and if we remembered to do them every day—were now taking SCANdalous minutes if not seconds. This not only afforded us additional time for analysis, it gave us the capability to expand our scope. Now we not only look for specific groups, we also search for common malware, tools and frameworks in general. We deploy weak signals (or broad signatures) for software that isn’t inherently bad, but is often used by threat actors.

Our external detection was further improved by automating additional collection tasks, executed by SCANdalous upon a discovery—we call them follow-on actions. For example, if an interesting open directory is identified, acquire certain files. These actions ensure the team never misses an opportunity during “non-working hours.” If SCANdalous finds something interesting on a weekend or holiday, we know it will perform the time-sensitive tasks against the server and in defense of our clients.
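A follow-on action system like the one described could be sketched as a registry mapping a hit type to its collection task. This is a hypothetical illustration of the pattern, not SCANdalous' actual design:

```python
# Registry of follow-on actions, keyed by the type of signature that hit.
ACTIONS = {}

def follow_on(signature_type: str):
    """Decorator: register a collection task to run automatically for a hit type."""
    def register(fn):
        ACTIONS[signature_type] = fn
        return fn
    return register

@follow_on("open_directory")
def fetch_listing(hit):
    # In production this would download files of interest from the server;
    # here we only record the intent.
    return f"fetch files from {hit['ip']}"

@follow_on("c2_tls")
def grab_certificate(hit):
    return f"archive TLS certificate from {hit['ip']}"

def handle_hit(hit):
    """Dispatch a new hit to its registered follow-on action, if any."""
    action = ACTIONS.get(hit["type"])
    return action(hit) if action else None

assert handle_hit({"type": "open_directory", "ip": "203.0.113.7"}) == \
    "fetch files from 203.0.113.7"
```

Because the dispatcher runs unattended, time-sensitive collection happens the moment a hit lands, weekend or not.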

The data we collect not only helps us track things we aren’t seeing at our clients, it allows us to provide timely and historical context to our incident responders and security analysts. Taking observations from Mandiant Incident Response or Managed Defense and distilling them into knowledge we can carry forward has always been our bread and butter. Now, with SCANdalous in the mix, we can project that knowledge out onto the Internet as a whole.

Collection Metrics

Looking back on where we started with our manual efforts, we're pleased to see how far this project has come; the progress is perhaps best illustrated by the numbers. Today (and as we write, these numbers continue to grow), SCANdalous holds over five thousand signatures across multiple sources, covering dozens of named malware families and threat groups. Since its inception, SCANdalous has produced over two million hits. Every single one of those is a piece of contextualized data that helps our team make analytical decisions. Of course, raw volume isn't everything, so let's dive a little deeper.

When an analyst discovers that an IP address has been used by an adversary against a named organization, they denote that usage in our knowledge store. While the time at which this observation occurs does not always correlate with when it was used in an intrusion, knowing when we became aware of that use is still valuable. We can cross-reference these times with data from SCANdalous to help us understand the impact of our external detection.

Looking at the IP addresses marked by an analyst as observed at a client in the last year, we find that 21.7% (more than one in five) were also found by SCANdalous. Of that fifth, SCANdalous has an average lead time of 47 days. If we only consider the IP addresses that SCANdalous found first, the average lead time jumps to 106 days. Going even deeper and examining this data month-to-month, we find a steady upward trend in the percentage of IP addresses identified by SCANdalous before being observed at a client (Figure 3).
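The lead-time figures above can be computed by cross-referencing two first-seen timelines per IP address. A hedged sketch with made-up data (the real analysis draws on FireEye's internal knowledge store):

```python
from datetime import date

def lead_time_stats(observed_at_client: dict, found_by_scanner: dict):
    """Cross-reference analyst observations with scanner first-seen dates.

    Returns (share of client-observed IPs the scanner also found,
    average lead time in days for that overlap; positive means the
    scanner saw the IP first).
    """
    overlap = observed_at_client.keys() & found_by_scanner.keys()
    if not overlap:
        return 0.0, 0.0
    leads = [(observed_at_client[ip] - found_by_scanner[ip]).days for ip in overlap]
    return len(overlap) / len(observed_at_client), sum(leads) / len(leads)

client = {"198.51.100.1": date(2020, 5, 1),
          "198.51.100.2": date(2020, 5, 10),
          "198.51.100.3": date(2020, 5, 20),
          "198.51.100.4": date(2020, 6, 1),
          "198.51.100.5": date(2020, 6, 5)}
scanner = {"198.51.100.1": date(2020, 3, 1)}  # scanner saw this one 61 days earlier

share, avg_lead = lead_time_stats(client, scanner)
assert share == 0.2       # one in five, echoing the ratio described above
assert avg_lead == 61.0
```

As noted in the text, the observation date is only a proxy for intrusion date, so these numbers measure awareness rather than attack timing.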


Figure 3: Percentage of IP addresses found by SCANdalous before being marked as observed at a client by a FireEye analyst

A similar pattern can be seen for SCANdalous’ average lead time over the same data (Figure 4).


Figure 4: Average lead time in days for SCANdalous over the same data shown in Figure 3

As we continue to create signatures and increase our external detection efforts, we can see from these numbers that the effectiveness and value of the resulting data grow as well.

SCANdalous Case Studies

Today in Advanced Practices, SCANdalous is a core element of our external detection work. It has provided us with a new lens through which we can observe threat activity on a scale and scope beyond our organic data, and enriches our workflows in support of Mandiant. Here are a few of our favorite examples:

FIN6

In early 2019, SCANdalous identified a Cobalt Strike C2 server that we were able to associate with FIN6. Four hours later, the server was used to target a Managed Defense client, as discussed in our blog post, Pick-Six: Intercepting a FIN6 Intrusion, an Actor Recently Tied to Ryuk and LockerGoga Ransomware.

FIN7

In late 2019, SCANdalous identified a BOOSTWRITE C2 server and automatically acquired keying material that was later used to decrypt files found in a FIN7 intrusion worked by Mandiant consultants, as discussed in our blog post, Mahalo FIN7: Responding to the Criminal Operators’ New Tools and Techniques.

UNC1878 (financially motivated)

Some of you may also remember our recent blog post on UNC1878. It serves as a great case study for how we grow an initial observation into a larger set of data, and then use that knowledge to find more activity across our offerings. Much of the early work that went into tracking that activity (see the section titled "Expansion") happened via SCANdalous. The quick response from Managed Defense gave us just enough information to build a profile of the C2 and let our automated system take it from there. Over the next couple of months, SCANdalous identified numerous servers matching UNC1878's profile. This allowed us not only to analyze and attribute new network infrastructure, but also to observe when and how they were changing their operations over time.

Conclusion

There are hundreds more stories to tell, but the point is the same. When we find value in an analytical workflow, we ask ourselves how we can do it better and faster. The automation we build into our tools allows us to not only accomplish more of the work we were doing manually, it enables us to work on things we never could before. Of course, the conversion doesn’t happen all at once. Like all good things, we made a lot of incremental improvements over time to get where we are today, and we’re still finding ways to make more. Continuing to innovate is how we keep moving forward – as Advanced Practices, as FireEye, and as an industry.

Example Signatures

The following are example Shodan queries; however, any source of scan data can be used.

Used to Identify APT39 C2 Servers

  • product:"bitvise" port:"443" org:"WorldStream B.V."

Used to Identify QUADAGENT C2 Servers

  • “PHP/7.2.0beta2”

RULER.HOMEPAGE Payloads

  • html:“clsid:0006F063-0000-0000-C000-000000000046”
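For readers who want to experiment, queries like those above can be run with the official `shodan` Python library. This is a hypothetical sketch; API key handling and hit processing are placeholders:

```python
SIGNATURES = {
    "APT39 C2": 'product:"bitvise" port:"443" org:"WorldStream B.V."',
    "QUADAGENT C2": '"PHP/7.2.0beta2"',
    "RULER.HOMEPAGE": 'html:"clsid:0006F063-0000-0000-C000-000000000046"',
}

def extract_hits(results: dict, label: str) -> list:
    """Flatten a Shodan search response into (label, ip, port) tuples."""
    return [(label, m["ip_str"], m["port"]) for m in results.get("matches", [])]

def sweep(api_key: str) -> list:
    """Run every signature through the Shodan search API and collect hits."""
    # Requires the third-party `shodan` package (pip install shodan).
    import shodan
    api = shodan.Shodan(api_key)
    hits = []
    for label, query in SIGNATURES.items():
        hits.extend(extract_hits(api.search(query), label))
    return hits

# The parsing step can be exercised offline with a sample response:
sample = {"matches": [{"ip_str": "192.0.2.10", "port": 443}]}
assert extract_hits(sample, "APT39 C2") == [("APT39 C2", "192.0.2.10", 443)]
```

A production system would additionally deduplicate hits, persist first-seen timestamps, and trigger follow-on actions, but the core loop is this simple.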