Author Archives: Todd VanderArk

Best practices for adding layered security to Azure security with Check Point’s CloudGuard IaaS

The cloud is changing the way we build and deploy applications. Most enterprises will benefit from the cloud’s many advantages through hybrid, multi, or standalone cloud architectures. A recent report showed that 42 percent of companies have a multi-cloud deployment strategy.

The advantages of the cloud include flexibility, converting large upfront infrastructure investments into smaller monthly bills (that is, the CAPEX-to-OPEX shift), agility, scalability, the ability to run applications and workloads at high speed, and high levels of reliability and availability.

However, cloud security is often an afterthought in this process. Some worry that security may slow the momentum of organizations migrating workloads into the cloud. Traditional IT security teams may be hesitant to implement new cloud security processes because, to them, the cloud may be daunting or confusing, or simply new and unknown.

Although the concepts may seem similar, cloud security is different from traditional enterprise security. Additionally, there may be industry-specific compliance and security standards to meet.

Public cloud vendors have defined the Shared Responsibility Model where the vendor is responsible for the security “of” their cloud, while their customers are responsible for the security “in” the cloud.


The Shared Responsibility Model (Source: Microsoft Azure).

Cloud deployments include multi-layered components, and the security requirements are often different per layer and per component. Often, the ownership of security is blurred when it comes to the application, infrastructure, and sometimes even the cloud platform—especially in multi-cloud deployments.

Cloud vendors, including Microsoft, offer fundamental network-layer, data-layer, and other security tools for use by their customers. Security analysts, managed security service providers, and advanced cloud customers recommend layering on advanced threat prevention and network-layer security solutions to protect against modern-day attacks. These specialized tools evolve at the pace of industry threats to secure the organization’s cloud perimeters and connection points.

Check Point is a leader in cloud security and the trusted security advisor to customers migrating workloads into the cloud.

Check Point’s CloudGuard IaaS helps protect assets in the cloud with dynamic scalability, intelligent provisioning, and consistent control across public, private, and hybrid cloud deployments. CloudGuard IaaS supports Azure and Azure Stack. Customers using CloudGuard IaaS can securely migrate sensitive workloads, applications, and data into Azure and thereby improve their security.

But how well does CloudGuard IaaS conform to Microsoft’s best practices?

Earlier this year, Dr. Reshmi Yandapalli (DAOM), Principal Program Manager for Azure Networking, published a blog post titled Best practices to consider before deploying a network virtual appliance, which outlined considerations for building or choosing Azure security and networking services. Dr. Yandapalli defined four best practices for networking and security ISVs, like Check Point, to improve the cloud experience for Azure customers.

I discussed Dr. Yandapalli’s four best practices with Amir Kaushansky, Check Point’s Head of Cloud Network Security Product Management. Amir’s responsibilities include the CloudGuard IaaS roadmap and coordination with the R&D/development team.

1. Azure accelerated networking support

Dr. Yandapalli’s first best practice is that an ISV’s Azure security solution be available on one or more of the Azure virtual machine (VM) types that support Azure’s accelerated networking capability, which improves networking performance. Dr. Yandapalli recommends that you “consider a virtual appliance that is available on one of the supported VM types with Azure’s accelerated networking capability.”

The diagram below shows communication between VMs, with and without Azure’s accelerated networking:


Accelerated networking to improve performance of Azure security (Source: Microsoft Azure).

Kaushansky says, “Check Point was the first certified compliant vendor with Azure accelerated networking. Accelerated networking can improve performance and reduce jitter, latency, and CPU utilization.”

According to Kaushansky—and depending on workload and VM size—Check Point and customers have observed at least a 2-3 times increase in throughput due to Azure accelerated networking.

2. Multi-Network Interface Controller (NIC) support

Dr. Yandapalli’s next best practice is to use VMs with multiple NICs to improve network traffic management through traffic isolation. For example, you can use one NIC for data plane traffic and another for management plane traffic. Dr. Yandapalli states, “With multiple NICs you can better manage your network traffic by isolating various types of traffic across the different NICs.”

The diagram below shows the Azure Dv2-series with maximum NICs per VM size:


Azure Dv2-series VMs with the number of NICs per size.

CloudGuard IaaS supports multi-NIC VMs and imposes no limit of its own on the number of NICs. Check Point recommends using VMs with at least two NICs; VMs with a single NIC are supported but not recommended.

Depending on the customer’s deployment architecture, the customer may use one NIC for internal East-West traffic and the second for outbound/inbound North-South traffic.

3. High Availability (HA) port with Azure load balancer

Dr. Yandapalli’s third best practice is that Azure security and networking services should be reliable and highly available.

Dr. Yandapalli suggests the use of a High Availability (HA) port load balancing rule. “You would want your NVA to be reliable and highly available, to achieve these goals simply by adding network virtual appliance instances to the backend pool of your internal load balancer and configuring a HA ports load-balancer rule,” says Dr. Yandapalli.

The diagram below shows an example usage of an HA port:

Flowchart example of an HA port with an Azure load balancer.

Kaushansky says, “CloudGuard IaaS supports this functionality with a standard load balancer via Azure Resource Manager deployment templates, which customers can use to deploy CloudGuard IaaS easily in HA mode.”

4. Support for Virtual Machine Scale Sets (VMSS)

Dr. Yandapalli’s last best practice is to use Azure VMSS to provide HA. Scale sets also provide the management and automation layers for Azure security, networking, and other applications. This cloud-native functionality provisions the right amount of IaaS resources at any given time, depending on application needs. Dr. Yandapalli points out that “scale sets provide high availability to your applications, and allow you to centrally manage, configure, and update a large number of VMs.”

In a similar way to the previous best practice, customers can use an Azure Resource Manager deployment template to deploy CloudGuard in VMSS mode. Check Point recommends the use of VMSS for traffic inspection of North-South (inbound/outbound) and East-West (lateral movement) traffic.

Learn more and get a free trial

As you can see from the above, CloudGuard IaaS is compliant with all four of Microsoft’s common best practices for how to build and deploy Azure network security solutions.

Visit Check Point to understand how CloudGuard IaaS can help protect your data and infrastructure in Microsoft Azure and hybrid clouds and improve Azure network security. If you’re evaluating Azure security solutions, you can get a free 30-day evaluation license of CloudGuard IaaS on Azure Marketplace!

(Based on a blog published on June 4, 2019 in the Check Point Cloud Security blog.)

The post Best practices for adding layered security to Azure security with Check Point’s CloudGuard IaaS appeared first on Microsoft Security.

Guarding against supply chain attacks—Part 1: The big picture

Every day, somewhere in the world, governments, businesses, educational organizations, and individuals are hacked. Precious data is stolen or held for ransom, and the wheels of “business-as-usual” grind to a halt. These criminal acts are expected to cost more than $2 trillion in 2019, a four-fold increase in just four years. The seeds that bloom into these business disasters are often planted in both hardware and software systems created in various steps of your supply chain, propagated by bad actors and out-of-date business practices.

These compromises in the safety and integrity of your supply chain can threaten the success of your business, no matter the size of your operation. But typically, the longer your supply chain, the higher the risk for attack, because of all the supply sources in play.

In this blog series, “Guarding against supply chain attacks,” we examine various components of the supply chain, the vulnerabilities they present, and how to protect yourself from them.

Defining the problem

Supply chain attacks are not new. The National Institute of Standards and Technology (NIST) has been focused on driving awareness in this space since 2008. And this problem is not going away. In 2017 and 2018, according to Symantec, supply chain attacks rose 78 percent. Mitigating this type of third-party risk has become a major board issue as executives now understand that partner and supplier relationships pose fundamental challenges to businesses of all sizes and verticals.

Moreover, for compliance reasons, third-party risk also continues to be a focus. In New York State, Nebraska, and elsewhere in the U.S., third-party risk has emerged as a significant compliance issue.

Throughout the supply chain, hackers look for weaknesses that they can exploit. Hardware, software, people, processes, vendors—all of it is fair game. At its core, attackers are looking to break trust mechanisms, including the trust that businesses naturally have for their suppliers. Hackers hide their bad intentions behind the shield of trust a supplier has built with their customers over time and look for the weakest, most vulnerable place to gain entry, so they can do their worst.

According to NIST, cyber supply chain risks include:

  • Insertion of counterfeits.
  • Unauthorized production of components.
  • Tampering with production parts and processes.
  • Theft of components.
  • Insertion of malicious hardware and software.
  • Poor manufacturing and development practices that compromise quality.

Cyber Supply Chain Risk Management (C-SCRM) identifies what the risks are and where they come from, assesses past damage and ongoing and future risk, and mitigates these risks across the entire lifetime of every system.

This process examines:

  • Product design and development.
  • How parts of the supply chain are distributed and deployed.
  • Where and how they are acquired.
  • How they are maintained.
  • How, at end-of-life, they are destroyed.

The NIST approach to C-SCRM considers how foundational practices and risk are managed across the whole organization.

Examples of past supply chain attacks

The following are examples of sources of recent supply chain attacks:

Hardware component attacks—When you think about it, OEMs are among the most logical places in a supply chain where an adversary is likely to try to insert vulnerabilities. For example, in 2018, an unidentified major telecommunications company in the U.S. uncovered hardware manufactured by a subcontractor in China for Super Micro Computer Inc., a California-based company. These parts, manufactured in China, were assumed to have been tampered with by the Chinese intelligence service.

Software component attacks—In 2016, Chinese hackers purportedly attacked TeamViewer software, a potential virtual invitation to view and access information on the computers of the millions of people around the world who use the program.

People-perpetrated attacks—People are a common connector between the various steps and entities in any supply chain and are subject to the influence of corrupting forces. Nation-states and other “cause-related” organizations prey on people susceptible to bribery and blackmail. In 2016, three employees of the Indian tech giant Wipro were arrested in a suspected security breach of customer records for the U.K. company TalkTalk.

Business processes—Business practices (including services), both upstream and downstream, are also vulnerable sources of infiltration. For example, one company exposed a database when one of its customers did not adequately protect a web server storing resumes, which contained email and physical addresses along with other personal information, including immigration records. This and other issues can be avoided if typical business practices, such as risk profiling and assessment services, are in place and regularly reviewed to make sure they comply with changing security and privacy requirements. This includes policies for “bring your own” IoT devices, which are another fast-growing vulnerability.

Big picture practical advice

Here’s some practical advice to take into consideration:

Watch out for copycat attacks—If a data heist worked with one corporate victim, it’s likely to work with another. This means once a new weapon is introduced into the supply chain, it is likely to be re-used—in some cases, for years.

To prove the point, here are some of the many examples of cybercrimes that reuse code stolen from legitimate hackers and deployed by criminals.

  • The Conficker botnet, which exploits the Windows vulnerability patched in MS08-067, is over a decade old and is still found on millions of PCs every month.
  • The criminal group known as the Shadow Brokers obtained the EternalBlue exploit code developed by the U.S. National Security Agency. When the code was leaked illegally and sold to North Korea, it was used to execute WannaCry in 2017, which spread to 150 countries and infected over 200,000 computers.
  • Turla, a purportedly Russian group, has been active since 2008, infecting computers in the U.S. and Europe with spyware that establishes a hidden foothold in infected networks that searches for and steals data.

Crafting a successful cyberattack from scratch is not a simple undertaking. It requires technical know-how, resources to create or acquire new working exploits, and the technique to then deliver the exploit, to ensure that it operates as intended, and then to successfully remove information or data from a target.

It’s much easier to take a successful exploit and simply recycle it—saving development and testing costs, as well as the costs that come from targeting known soft targets (e.g., avoiding known defenses that may detect it). We advise you to stay in the know about past attacks, as any one of them may come your way. Just ask yourself: Would your company survive a similar attack? If the answer is no—or even maybe—then fix your vulnerabilities or at the very least make sure you have mitigation in place.

Know your supply chain—Like many information and operational technology businesses, you probably depend on a global system of suppliers. But do you know where the various technology components of your business come from? Who makes the hardware you use—and where do the parts to make that hardware come from? Your software? Have you examined how your business practices and those of your suppliers keep you safe from bad actors with a financial interest in undermining the most basic components of your business? Take some time to look at these questions and see how you’d score yourself and your suppliers.

Looking ahead

Hopefully, the above information will encourage (if not convince) you to take a big picture look at who and what your supply chain consists of and make sure that you have defenses in place that will protect you from all the known attacks that play out in cyberspace each day.

In the remainder of the “Guarding against supply chain attacks” series, we’ll drill down into supply chain components to help make you aware of potential vulnerabilities and supply advice to help you protect your company from attack.

Stay tuned for these upcoming posts:

  • Part 2—Explores the risks of hardware attacks.
  • Part 3—Examines ways in which software can become compromised.
  • Part 4—Looks at how people and processes can expose companies to risk.
  • Part 5—Summarizes our advice with a look to the future.

In the meantime, bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

To learn more about how you can protect your time and empower your team, check out the cybersecurity awareness page this month.

The post Guarding against supply chain attacks—Part 1: The big picture appeared first on Microsoft Security.

Microsoft’s 4 principles for an effective security operations center

The Microsoft Cyber Defense Operations Center (CDOC) fields trillions of security signals every day. How do we identify and respond to the right threats? One thing that won’t surprise you: we leverage artificial intelligence (AI), machine learning, and automation to narrow the focus. But technology is not enough. Our people, culture, and process are just as critical.

You may not have trillions of signals to manage, but I bet you will still get a lot of value from a behind-the-scenes look at the CDOC. Even the small companies that I’ve worked with have improved the effectiveness of their security operations centers (SOCs) based on learnings from Microsoft.

Watch the operations episode of the CISO Spotlight Series—The people behind the cloud to get my take and a sneak peek at our team in action. In the video, I walk you through four principles:

  1. It starts with assessment.
  2. Invest in the right technology.
  3. Hire a diverse group of people.
  4. Foster an innovative culture.

It starts with assessment

Before you make any changes, it helps to identify the gaps in your current security system. Take a look at your most recent attacks to see if you have the right detections in place. Offense should drive your defenses. For example:

  • Has your organization been victim to password spray attacks?
  • Have there been brute force attacks against endpoints exposed to the internet?
  • Have you uncovered advanced persistent threats?

Understanding where your organization is vulnerable will help you determine what technology you need. If you need further help, I would suggest using the MITRE ATT&CK Framework.
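The password spray question above lends itself to a simple detection heuristic: one source failing sign-ins against many distinct accounts, with only a few attempts per account (unlike brute force, which hammers one account). The sketch below is illustrative only; the function name, thresholds, and log format are assumptions, not any product's detection logic.

```python
from collections import defaultdict

def detect_password_spray(failed_logins, min_accounts=5, max_tries_per_account=2):
    """Flag source IPs that fail against many distinct accounts with
    only a few attempts per account: the classic spray pattern."""
    attempts = defaultdict(lambda: defaultdict(int))  # ip -> account -> count
    for ip, account in failed_logins:
        attempts[ip][account] += 1
    suspicious = []
    for ip, per_account in attempts.items():
        if (len(per_account) >= min_accounts
                and max(per_account.values()) <= max_tries_per_account):
            suspicious.append(ip)
    return suspicious

# One IP spraying eight accounts once each is flagged; an IP hammering
# a single account (brute force) is a different detection.
events = [("10.0.0.9", "user%d" % i) for i in range(8)] + [("10.0.0.7", "alice")] * 6
print(detect_password_spray(events))
```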

Invest in the right technology

As you evaluate technology solutions, think of your security operations as a funnel. At the very top are countless threat signals. There is no way your team can address all of them. This leads to employee burnout and puts the organization at risk. Aim for automation to handle 20-25 percent of incoming events. AI and machine learning can correlate signals, enrich them with other data, and resolve known incidents.

Invest in good endpoint detection, network telemetry, a flexible security information and event management (SIEM) system like Azure Sentinel, and cloud workload protection solutions. The right technology will reduce the volume of signals that filter down to your people, empowering them to focus on the problems that machines can’t solve.
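As a rough illustration of the funnel idea, the sketch below collapses duplicate signals and auto-resolves those covered by a known playbook, so only novel incidents reach analysts. The function, field names, and signal format are hypothetical assumptions for this example, not how any SIEM represents data.

```python
def triage_signals(signals, known_resolutions):
    """Collapse duplicate signals and auto-resolve the known ones,
    so only novel, correlated incidents reach human analysts."""
    # Correlate: group raw signals by (host, rule) so repeats collapse
    # into one incident with a count.
    seen = {}
    for s in signals:
        key = (s["host"], s["rule"])
        if key not in seen:
            seen[key] = {**s, "count": 0}
        seen[key]["count"] += 1
    auto_resolved, for_analysts = [], []
    for incident in seen.values():
        if incident["rule"] in known_resolutions:
            auto_resolved.append(incident)   # an automated playbook handles it
        else:
            for_analysts.append(incident)    # escalate to the SOC
    return auto_resolved, for_analysts
```

Here three raw signals become two incidents, and only the unfamiliar one lands in an analyst's queue.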

Hire a diverse group of people

The people you hire matter. I attribute much of our success to the fact that we hire people who love to solve problems. You can model this approach in your SOC. Look for computer scientists, security professionals, and data scientists—but also try to find people with nontraditional backgrounds like military intelligence, law enforcement, and liberal arts. People with a different perspective can introduce creative ways of looking at a problem. For example, Microsoft has had a lot of success with veterans from the military.

I also recommend organizing your SOC into specialized, tiered teams. It gives employees a growth path and allows them to focus on areas of expertise. Microsoft uses a three-tiered approach:

  • Tier 1 analysts—These analysts are the front line. They manage the alerts generated by our SIEM and focus on high-speed remediation over a large number of events.
  • Tier 2 analysts—This team tackles alerts that require a deeper level of analysis. Many of these events have been escalated up from Tier 1, but Tier 2 analysts also monitor alerts to identify and triage the complex cases.
  • Tier 3 analysts—These are the threat hunters. They use sophisticated tools to proactively uncover advanced threats and hidden adversaries.
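The three tiers above can be read as a routing policy for incoming work. A minimal sketch, with entirely hypothetical alert fields (no real SIEM schema is implied):

```python
def route_alert(alert):
    """Route an alert to a SOC tier: escalations and ambiguous cases
    go to Tier 2, playbook matches to Tier 1, hunts to Tier 3."""
    if alert.get("escalated"):
        return "tier2"              # escalated up from the front line
    if alert.get("matched_playbook"):
        return "tier1"              # front line: high-speed remediation
    if alert.get("hypothesis_driven"):
        return "tier3"              # proactive threat hunting
    return "tier2"                  # deeper analysis by default
```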

For a more detailed look at how Microsoft has structured our team, read Lessons learned from the Microsoft SOC—Part 2a: Organizing people.

Foster an innovative culture

Culture influences SOC performance by guiding how people treat each other and approach their work. Well-defined career paths and roles are one way to influence your culture. People want to know how their work matters and contributes to the organization. As you build your processes and team, consider how you can encourage innovation, diversity, and teamwork.

Read how the CDOC creates culture in Lessons learned from the Microsoft SOC—Part 1.

Learn more

To learn more about how to run an effective SOC, watch the operations episode of the CISO Spotlight Series and read the Lessons learned from the Microsoft SOC series.

The post Microsoft’s 4 principles for an effective security operations center appeared first on Microsoft Security.

Patching as a social responsibility

In the wake of the devastating (Not)Petya attack, Microsoft set out to understand why some customers weren’t applying basic cybersecurity hygiene, such as security patches, which would have helped mitigate this threat. We were particularly concerned with why patches hadn’t been applied, as they had been available for months and had already been exploited by the WannaCrypt worm—which clearly established a “real and present danger.”

We learned a lot from this journey, including how important it is to build clearer industry guidance and standards on enterprise patch management. To help make it easier for organizations to plan, implement, and improve an enterprise patch management strategy, Microsoft is partnering with the U.S. National Institute of Standards and Technology (NIST) National Cybersecurity Center of Excellence (NCCoE).

NIST and Microsoft are extending an invitation for you to join this effort if you’re a:

  • Vendor—Any vendor who has technology offerings to help with patch management (scan, report, deploy, measure risk, etc.).
  • Organization or individual—All those who have tips and lessons learned from a successful enterprise management program (or lessons learned from failures, challenges, or any other situations).

If you have pertinent learnings that you can share, please reach out to

During this journey, we also worked closely with additional partners and learned from their experience in this space, including the:

  • Center for Internet Security (CIS)
  • U.S. Department of Homeland Security (DHS) Cybersecurity and Infrastructure Security Agency (CISA) (formerly US-CERT / DHS NCCIC)

A key part of this learning journey was to sit down and listen directly to our customers’ challenges. Microsoft visited a significant number of customers in person (several of which I personally joined) to share what we learned—which became part of the jointly endorsed mitigation roadmap—and to have some really frank and open discussions to learn why organizations really aren’t applying security patches.

While the discussions mostly went in expected directions, we were surprised at how many challenges organizations had on processes and standards, including:

  • “What sort of testing should we actually be doing for patch testing?”
  • “How fast should I be patching my systems?”

This articulated need for good reference processes was further validated by observing that a common practice for “testing” a patch before a deployment often consisted solely of asking whether anyone else had any issues with the patch in an online forum.

This realization guided the discussions with our partners towards creating an initiative in the NIST NCCoE in collaboration with other industry vendors. This project—kicking off soon—will build common enterprise patch management reference architectures and processes, have relevant vendors build and validate implementation instructions in the NCCoE lab, and share the results in the NIST Special Publication 1800 practice guide for all to benefit.
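One widely used answer to "how fast should I be patching?" is staggered deployment rings, where the early rings act as the test population before broad rollout. The sketch below is a generic illustration under assumed ring names and day offsets; it is not NIST or Microsoft guidance.

```python
from datetime import date, timedelta

def rollout_schedule(patch_release, rings=None):
    """Stagger a patch across deployment rings: canary machines first,
    then a pilot group, then broad deployment, with the most change-
    sensitive systems last."""
    if rings is None:
        # (ring name, days after release) -- illustrative values only.
        rings = [("canary", 0), ("pilot", 3), ("broad", 10),
                 ("critical-systems", 14)]
    return {name: patch_release + timedelta(days=offset)
            for name, offset in rings}

schedule = rollout_schedule(date(2019, 10, 8))
print(schedule)
```

Issues found in the canary or pilot rings halt the later stages, which replaces "asking an online forum whether the patch broke anything" with observed evidence from your own fleet.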

Applying patches is a critical part of protecting your system, and we learned that while it isn’t as easy as security departments think, it isn’t as hard as IT organizations think.

In many ways, patching is a social responsibility because of how much society has come to depend on technology systems that businesses and other organizations provide. This situation is exacerbated today as almost all organizations undergo digital transformations, placing even more social responsibility on technology.

Ultimately, we want to make it easier for everyone to do the right thing, so we’re issuing this call to action. If you’re a vendor that can help, or if you have relevant learnings that may help other organizations, please reach out now!

The post Patching as a social responsibility appeared first on Microsoft Security.

How to avoid getting caught in a “Groundhog Day” loop of security issues

It’s Cybersecurity Awareness Month, and it made me think about one of my favorite movies, Groundhog Day. Have you ever seen it? Bill Murray plays the cynical weatherman Phil Connors, who gets stuck in an endless loop, repeating the same day over and over again until he “participates in his own rescue” by becoming a better person.

Sometimes it can feel like we’re caught in our own repetitious loops in cybersecurity—I even did a keynote at RSA APJ on this very topic a few years ago. The good news is that we can get out of the loop. By learning lessons from the past and bringing them forward and applying them to today’s technologies, outcomes can be changed—with “change” being the operative word.

If companies continue to do things the same way—in insecure ways—attackers will come along and BOOM you’re in trouble. You may resolve that breach, but that won’t help in the long run. Unless the source of the problem is determined and changed, just like Phil Connors, you’ll wake up one day and BOOM—you’re attacked again.

How security experts can help organizations protect against cybercrime

We can learn from past mistakes. And to prove it, I’d like to cite a heartening statistic: ransomware encounters decreased by 60 percent between March 2017 and December 2018. While attackers don’t share the specifics of their choice of approach, when one approach isn’t working, they move to another. After all, it’s a business—in fact it’s a successful (and criminal) business—bringing in nearly $200 billion in profits each year.1 We do know that ransomware has less of a chance of spreading on fully patched and well-segmented networks, and that companies are less likely to pay ransoms when they have up-to-date, clean backups to restore from. In other words, it’s very likely that robust cybersecurity hygiene is an important contributor to the decrease in ransomware encounters. (See Lesson 1: Practice good cybersecurity hygiene below.)

The bad news of course is that attackers began to shift their efforts to crimes like cryptocurrency mining, which hijacks victims’ computing resources to make digital money for the attackers.1 But that’s because cybercriminals are opportunists and they’re always searching for the weakest link.

One of the best ways to thwart cybercrime is to involve security experts before deploying new products and/or services. A decade ago, this wasn’t typically done in many organizations. But with the rise of security awareness as part of the overall corporate risk posture, we’re seeing security involved early on in deployments of modern architectures, container deployments, digital transformations, and DevOps.

When security experts connect the wisdom of the past—such as the importance of protecting data in transit with encryption—to the technology rollouts of today, they can help organizations anticipate what could go wrong. This helps you bake controls and processes into your products and services before deployment. The people who have already learned the lessons you need to know can help so you don’t wake up to the same problems every (well, almost) day. When security experts carry those lessons forward, they can help end your Groundhog Day.

In addition, involving security experts early on doesn’t have to slow things down. They can actually help speed things up and prevent backtracking later in the product development cycle to fix problems missed the first time around.

Security can help anticipate problems and produce solutions before they occur. When Wi-Fi networking was first deployed in the late 1990s, communications were protected with Wired Equivalent Privacy (WEP). But WEP suffered from significant design problems, such as the initialization vector (IV) being part of the RC4 encryption key, that were already known issues in the cryptographic community. The result was a lot of WEP crackers and the rapid development of the stronger Wi-Fi Protected Access (WPA) set of protocols. If the designers had worked with crypto experts, who had already designed solutions free of these known issues, time, money, and privacy could have been saved.
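The IV flaw is easy to demonstrate with a toy stream cipher. Because WEP derived the RC4 keystream from a short 24-bit IV plus the shared key, IVs repeated in practice, and a repeated IV means a repeated keystream across packets. XORing two ciphertexts that share a keystream cancels the keystream entirely. In the sketch below, the hex constant is an arbitrary stand-in for RC4 output, not real RC4:

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Toy stream cipher: ciphertext = plaintext XOR keystream. In WEP the
# keystream comes from (IV, shared key), so a repeated IV repeats it.
keystream = bytes.fromhex("9f3a7c11d24e80b6c5")  # stand-in for RC4 output
p1 = b"ATTACK AT"
p2 = b"DEFEND AT"
c1 = xor_bytes(p1, keystream)
c2 = xor_bytes(p2, keystream)

# The keystream cancels out: an eavesdropper recovers p1 XOR p2
# without ever knowing the key, and known plaintext reveals the rest.
leaked = xor_bytes(c1, c2)
assert leaked == xor_bytes(p1, p2)
```

This is exactly the class of issue cryptographers had already documented before WEP shipped, which is the point of bringing security experts in early.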

Traditional technology thinks about “use” cases. Security thinks about “misuse” cases. Product people focus on the business and social benefits of a solution. Security people think about the risks and vulnerabilities by asking these questions:

  • What happens if the solutions are attacked or used improperly?
  • How is this product or workload going to behave in a non-perfect environment?
  • Where is your system vulnerable and what happens when it comes under attack?

Security also remembers lessons learned while creating threat models to head off common mistakes of the past.

Rita: I didn’t know you could play like that.

Phil: I’m versatile.

Groundhog Day (1993) starring Bill Murray as Phil and Andie McDowell as Rita. Sony Pictures©

Example: Think about designing a car. Cars are cool because they can go fast—really fast. But if you had some security folks on the team, they’d be thinking about the fact that while going fast can be thrilling—you’re going to have to stop at some point.

Security people are the kind of thinkers who would probably suggest brakes. And they would make sure that those brakes worked in the rain, snow, and on ice just as well as they worked on dry pavement. Furthermore, because security is obsessed (in a good way) with safety, they would be the ones to plan for contingencies, like having a spare tire and jack in the car in case you get a flat tire.

Learning from and planning for known past issues, like the network equivalent of flat tires, is a very important part of secure cyber design. Machine learning can provide intelligence to help avoid repeats of major attacks. For example, machine learning is very useful in detecting and dismantling fileless malware that lives “off the land” like the recent Astaroth campaign.

Top practices inspired by lessons learned by helping organizations be more secure

Thinking about and modeling for the types of problems that have occurred in the past helps keep systems more secure in the future. For example, we take off our shoes in the airport because someone once smuggled explosives onto a plane by hiding them in their footwear.

How DO you stop someone who wants to steal, manipulate, or damage the integrity of your data? What can you do to stop them from trying to monetize it and put your company and customers in jeopardy of losing their privacy? I’m glad you asked—here are four lessons that can help your organization be more secure:

Lesson 1: Practice good cybersecurity hygiene—It may not be shiny and new, but cybersecurity hygiene really matters. This is perhaps the most important lesson we can learn from the past—taking steps to ensure the basics are covered can go a very long way for security. That 60 percent decrease in ransomware encounters globally mentioned earlier is most likely due to better cybersecurity hygiene.

Lesson 2: Schedule regular backups—With regular backups (especially cold backups, held offline), you always have an uncompromised version of your data.

Lesson 3: Use licensed software—Licensed software decreases the likelihood that bugs, worms, and other bad things will infiltrate your infrastructure. Deploying necessary patches that make systems less vulnerable to exploits is part of keeping the integrity of your licensed software intact.

Lesson 4: Lean into humans “being human” while leveraging technological advances—For example, acknowledge that humans aren’t great at remembering strong passwords, especially when they change frequently. Rather than berating people for their very human brains, focus on developing solutions, such as password wallets and passwordless solutions, which acknowledge how hard strong passwords are to remember without sacrificing security.
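To make lesson 2 concrete, a retention policy often keeps a short window of recent backups plus longer-term copies that can be held offline as cold backups. A small sketch of one hypothetical policy (keep the newest seven dailies plus the first backup of each month):

```python
# Hypothetical backup-retention sketch for lesson 2: keep the newest N daily
# backups, plus the first backup of each month as a long-term (cold) copy.
from datetime import date

def select_backups_to_keep(backup_dates, keep_daily=7):
    ordered = sorted(backup_dates, reverse=True)
    keep = set(ordered[:keep_daily])               # most recent dailies
    seen_months = set()
    for d in sorted(backup_dates):                 # oldest first
        if (d.year, d.month) not in seen_months:   # first backup of each month
            seen_months.add((d.year, d.month))
            keep.add(d)
    return keep

dates = [date(2019, 9, day) for day in range(1, 31)]
kept = select_backups_to_keep(dates)
print(sorted(kept))
```

The monthly copies are the ones worth moving offline, so an uncompromised version of your data survives even if an attacker reaches the live backup store.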

Rita: Do you ever have déjà vu?

Phil: Didn’t you just ask me that?

Groundhog Day (1993) Sony Pictures©

Admittedly, we can’t promise there won’t be some share of Groundhog Day repeats. But the point is progress, not perfection. And we are making significant progress in our approach to cybersecurity and resilience. Above are just a couple of examples.

I’d love to hear more from you about examples you may have to share, too! Reach out to me on LinkedIn or Twitter, @DianaKelley14. Also, bookmark the Security blog to keep up with our expert coverage on security matters.

1Cybercrime Profits Total nearly $200 Billion Each Year, Study Reveals

The post How to avoid getting caught in a “Groundhog Day” loop of security issues appeared first on Microsoft Security.

CISO series: Lessons learned from the Microsoft SOC—Part 3a: Choosing SOC tools

The Lessons learned from the Microsoft SOC blog series is designed to share our approach and experience with security operations center (SOC) operations. Our learnings in the series come primarily from Microsoft’s corporate IT security operation team, one of several specialized teams in the Microsoft Cyber Defense Operations Center (CDOC).

Over the course of the series, we’ve discussed how we operate our SOC at Microsoft. In the last two posts, Part 2a, Organizing people, and Part 2b: Career paths and readiness, we discussed how to support our most valuable resources—people—based on successful job performance.

We’ve also included lessons learned from the Microsoft Detection and Response Team (DART) to help our customers respond to major incidents, as well as insights from the other internal SOC teams.

For a visual depiction of our SOC philosophy, download our Minutes Matter poster. To learn more about our Security operations, watch CISO Spotlight Series: The people behind the cloud.

As part of Cybersecurity Awareness month, today’s installment focuses on the technology that enables our people to accomplish their mission by sharing our current approach to technology, how our tooling evolved over time, and what we learned along the way. We hope you can use what we learned to improve your own security operations.

Our strategic approach to technology

Ultimately, the role of technology in a SOC is to help empower people to better contain risk from adversary attacks. Our design for the modern enterprise SOC has moved away from the classic model of relying primarily on alerts generated by static queries in an on-premises security information and event management (SIEM) system. The volume and sophistication of today’s threats have outpaced the ability of this model to detect and respond to threats effectively.

We also found that augmenting this model with disconnected point solutions led to additional complexity and didn’t necessarily speed up analysis, prioritization, orchestration, and execution of response actions.

Selecting the right technology

Every tool we use must enable the SOC to better achieve its mission and provide meaningful improvement before we invest in purchasing and integrating it. Each tool must also meet rigorous requirements for the sheer scale and global footprint of our environment and the top-shelf skill level of the adversaries we face, as well as efficiently enable our analysts to provide high quality outcomes. The tools we selected support a range of scenarios.

In addition to enabling firstline responders to rapidly remediate threats, we must also enable deep subject matter experts in security and data science to reason over immense volumes of data as they hunt for highly skilled and well-funded nation state level adversaries.

Making the unexpected choice

Even though many of the tools we currently use are made by Microsoft, they still must meet our stringent requirements. All SOC tools—no matter who makes them—are strictly vetted and we don’t hesitate to reject tools that don’t work for our purposes. For example, our SOC rejected Microsoft’s Advanced Threat Analytics tool because of the infrastructure required to scale it up (despite some promising detection results in a pilot). Its successor, Azure Advanced Threat Protection (Azure ATP), solved this infrastructure challenge by shifting to a SaaS architecture and is now in active use daily.

Our SOC analysts work with Microsoft engineering and third-party tool providers to drive their requirements and provide feedback. As an example, our SOC team has a weekly meeting with the Windows Defender ATP team to review learnings, findings, request features or changes, share engineering progress on requested features, and share attacker research from both teams. Even today, as we roll out Azure Sentinel, our SOC is actively working with the engineering team to ensure key requirements are met, so we can fully retire our legacy SIEM (more details below). Additionally, we regularly invite engineers from our product groups to join us in the SOC to learn how the technology is applied by our experts.

History and evolution to broad and deep tooling

Microsoft’s Corporate IT SOC protects a cross platform environment with a significant population of Windows, Linux, and Macs running a variety of Microsoft and non-Microsoft software. This environment is approximately 95 percent hosted on the cloud today. The tooling used in this SOC has evolved significantly over the years starting from the classic model centered around an on-premises SIEM.

Phase 1—Classic on-premises SIEM-centric model

This is the common model where all event data is fed into an on-premises SIEM where analytics are performed on the data (primarily static queries that were refined over time).

We experienced a set of challenges that we now view as natural limitations of this model. These challenges included:

  • Overwhelming event volume—High volume and growth (on the scale of 20+ billion events a day currently) exceeded the capacity of the on-premises SIEM to handle it.
  • Analyst overload and fatigue—The static rulesets generated excessive volumes of false positive alerts that led to alert fatigue.
  • Poor investigation workflow—Investigation of events using the SIEM was clunky and required manual queries and manual steps when switching between tools.

Phase 2—Bolster on-premises SIEM weaknesses with cloud analytics and deep tools

We introduced several changes designed to address shortcomings of the classic model.

Three strategic shifts were introduced and included:

1. Cloud based log analytics—To address the SIEM scalability challenges discussed previously, we introduced cloud data lake and machine learning technology to more efficiently store and analyze events. This took pressure off our legacy SIEM and allowed our hunters to embrace the scale of cloud computing to apply advanced techniques like machine learning to reason over the data. We were early adopters of this technology before many current commercial offerings had matured, so we ended up with several “generations” of custom technology that we had to later reconcile and consolidate (into the Log Analytics technology that now powers Azure Sentinel).

Lesson learned: “Good enough” and “supported” is better than “custom.”

Adopt commercial products if they meet at least the “Pareto 80 percent” of your needs because the support of these custom implementations (and later rationalization effort) takes resources and effort away from hunting and other core mission priorities.

2. Specialized high-quality tooling—To address analyst overload and poor workflow challenges, we tested and adopted specialized tooling designed to:

  • Produce high quality alerts (versus high quantity of detailed data).
  • Enable analysts to rapidly investigate and remediate compromised assets.

It is hard to overstate the benefits of this incredibly successful integration of technology. These tools had a powerful positive impact on our analyst morale and productivity, driving significant improvements of our SOC’s mean time to acknowledge (MTTA) and remediate (MTTR).
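MTTA and MTTR are straightforward to compute once alert lifecycle timestamps are captured. A minimal sketch (the field names and timestamp units are hypothetical, not an actual tool schema):

```python
# Minimal MTTA/MTTR calculation over alert lifecycle timestamps (hypothetical
# schema): created -> acknowledged -> remediated, all in minutes.
def mean_times(alerts):
    n = len(alerts)
    mtta = sum(a["acknowledged"] - a["created"] for a in alerts) / n
    mttr = sum(a["remediated"] - a["created"] for a in alerts) / n
    return mtta, mttr

alerts = [
    {"created": 0, "acknowledged": 5, "remediated": 45},
    {"created": 10, "acknowledged": 25, "remediated": 70},
]
mtta, mttr = mean_times(alerts)
print(mtta, mttr)  # 10.0 52.5
```

Tracking these two numbers over time is one simple way to quantify whether a new tool actually improved analyst workflow.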

We attribute much of the success of these tools to the direct real-world input that was used to design them.

  • SOC—The engineering group spent approximately 18-24 months with our SOC team focused on learning about SOC analyst needs, thought processes, pain points, and more while designing and building the first release of Windows Defender ATP. These teams still stay in touch weekly.
  • DART team—The engineering group directly integrated analysis and hunting techniques that DART developed to rapidly find and evict advanced adversaries from customers.

Here’s a quick summary of the key tools. We’ll share more details on how we use them in our next blog:

  • Endpoint—Microsoft Defender ATP is the default starting point for analysts for almost any investigation (regardless of the source of the alert) because of its powerful visibility and investigation capabilities.
  • Email—Office 365 ATP’s integration with Office 365 Exchange Online helps analysts rapidly find and remove phishing emails from mailboxes. The integration with Microsoft Defender ATP and Azure ATP enables analysts to handle common cases extremely quickly, which led to growth in our analyst caseload (in a good way ☺).
  • Identity—Integrating Azure ATP helped complete the triad of the most attacked/utilized resources (Endpoint-Email-Identity) and enabled analysts to smoothly pivot across them (and added some useful detections too).
  • We also added Microsoft Cloud App Security and Azure Security Center to provide high quality detections and improve investigation experience as well.

Even before adding the Automated Investigations technology (originally acquired from Hexadite), we found that Microsoft Defender ATP’s Endpoint Detection and Response (EDR) solution increased the SOC’s efficiency to the point where investigation team analysts could start doing more proactive hunting part-time (often by sifting through lower priority alerts from Microsoft Defender ATP).

Lesson learned: Enable rapid end-to-end workflow for common Email-Endpoint-Identity attacks.

Ensure your technology investments optimize the analyst workflow to detect, investigate, and remediate common attacks. Microsoft Defender ATP and the connected tools (Office 365 ATP, Azure ATP) were a game changer in our SOC and enabled us to consistently remediate these attacks within minutes. This is our number one recommendation to SOCs, as it helped with:

  • Commodity attacks—Efficiently dispatch (a high volume of) commodity attacks in the environment.
  • Targeted attacks—Mitigate the impact of advanced attacks by severely limiting the attack operator’s time to laterally traverse and explore, hide, set up command and control (C2), etc.

3. Mature case management—To further improve analyst workflow challenges, we transitioned the analyst’s primary queue to our case management service hosted by a commercial SaaS provider. This further reduced our dependency on our legacy SIEM (primarily hosting legacy static analytics that had been refined over time).

Lesson learned: Single queue

Regardless of the size and tooling of your SOC, it’s important to have a single queue and to govern its quality.

This can be implemented as a case management solution, the alert queue in a SIEM, or as simple as the alert list in the Microsoft Threat Protection tool for smaller organizations. Having a single place to go for reactive analysis and ensuring that place produces high quality alerts are key enablers of SOC effectiveness and responsiveness. As a complement to the quality piece, you should also have a proactive hunting activity to ensure that attacker activities are not lost in high-noise detections.
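The single-queue idea can be as simple as normalizing alerts from each tool into one shape and ordering them consistently. A hypothetical sketch (the source names, fields, and severity scheme are illustrative):

```python
# Hypothetical single-queue sketch: normalize alerts from multiple tools into
# one schema, then present one severity-ordered queue to analysts.
SEVERITY_RANK = {"high": 0, "medium": 1, "low": 2}

def build_queue(sources):
    queue = []
    for source_name, alerts in sources.items():
        for a in alerts:
            queue.append({
                "source": source_name,
                "id": a["id"],
                "severity": a["severity"],
                "created": a["created"],
            })
    # Highest severity first; ties broken by oldest alert first.
    queue.sort(key=lambda a: (SEVERITY_RANK[a["severity"]], a["created"]))
    return queue

sources = {
    "endpoint": [{"id": "e1", "severity": "low", "created": 1}],
    "email":    [{"id": "m1", "severity": "high", "created": 5}],
}
print([a["id"] for a in build_queue(sources)])  # ['m1', 'e1']
```

Whatever implements this normalization (case management tool, SIEM, or a product alert list), the point is that analysts triage from one governed place.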

Phase 3—Modernize SIEM to cloud native

Our current focus is the transition of the remaining SIEM functions from our legacy capability to Azure Sentinel.

We’re now focused on refining our tool strategy and architecture into a model designed to optimize both breadth (unified view of all events) and depth capabilities. The specialized high-quality tooling (depth tooling) works great for monitoring the “front door” and some hunting but isn’t the only tooling we need.

We’re now in the early stages of operating Microsoft’s Azure Sentinel technology in our SOC to completely replace our legacy on-premises SIEM. This task is a bit simpler for us than most, as we have years of experience using the underlying event log analysis technology that powers Azure Sentinel (Azure Monitor technology, which was previously known as Azure Log Analytics and Operations Management Suite (OMS)).

Our SOC analysts have also been contributing heavily to Azure Sentinel and its community (queries, dashboards, etc.) to share what we have learned about adversaries with our customers.

Learn more details about this SOC and download slides from the CISO Workshop.

Lesson learned: Side-by-side transition state

Based on our experience and conversations with customers, we expect transitioning to cloud analytics like Azure Sentinel will often include a side-by-side configuration with an existing legacy SIEM. This could include a:

  • Short-term transition state—For organizations that are committed to rapidly retiring a legacy SIEM in favor of Azure Sentinel (often to reduce cost/complexity) and need operational continuity during this short bridge period.
  • Medium-term coexistence—For organizations with significant investment into an on-premises SIEM and/or a longer-term plan for cloud migration. These organizations recognize the power of data gravity—placing analytics closer to the cloud data avoids the costs and challenges of transferring logs to and from the cloud.

Managing the SOC investigations across the SIEM platforms can be accomplished with reasonable efficiency using either a case management tool or the Microsoft Graph Security API (synchronizing alerts between the two SIEM platforms).
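Synchronizing alerts between two SIEMs boils down to listing alerts from one side and upserting them on the other. The sketch below models only the reconciliation step; in practice the alerts would come from the Microsoft Graph Security API (the `security/alerts` resource) with proper authentication, and all field names here are hypothetical:

```python
# Hypothetical sketch of the merge step when syncing alerts between two SIEMs.
# Real alerts would be fetched from the Microsoft Graph Security API
# (GET https://graph.microsoft.com/v1.0/security/alerts) with auth; here we
# model only the upsert/reconciliation logic.
def merge_alerts(primary, incoming):
    """Upsert incoming alerts into primary, keyed by id; newer updates win."""
    merged = {a["id"]: a for a in primary}
    for alert in incoming:
        existing = merged.get(alert["id"])
        if existing is None or alert["lastUpdated"] > existing["lastUpdated"]:
            merged[alert["id"]] = alert
    return sorted(merged.values(), key=lambda a: a["id"])

primary = [{"id": "a1", "status": "new", "lastUpdated": 1}]
incoming = [
    {"id": "a1", "status": "resolved", "lastUpdated": 2},  # updated remotely
    {"id": "a2", "status": "new", "lastUpdated": 2},       # new remote alert
]
result = merge_alerts(primary, incoming)
print([(a["id"], a["status"]) for a in result])  # [('a1', 'resolved'), ('a2', 'new')]
```

Keying on a stable alert id and letting the most recently updated copy win keeps status changes (like "resolved") consistent across both queues during the transition period.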

Microsoft is continuing to invest in building more detailed guidance and capabilities to document learnings on this process and continue to refine technology to support it.

Learn more

To learn more, read the previous posts in the “Lessons learned from the Microsoft SOC” series.

Also, see our full CISO series.

Watch the CISO Spotlight Series: The people behind the cloud.

For a visual depiction of our SOC philosophy, download our Minutes Matter poster.

Stay tuned for the next segment in “Lessons learned from the Microsoft SOC” where we dive into more of the analyst experience of using these tools to rapidly investigate and remediate attacks. In the meantime, bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post CISO series: Lessons learned from the Microsoft SOC—Part 3a: Choosing SOC tools appeared first on Microsoft Security.

Your password doesn’t matter—but MFA does!

Your pa$$word doesn’t matter—Multi-Factor Authentication (MFA) is the best step you can take to protect your accounts. Using anything beyond passwords significantly increases the costs for attackers, which is why the rate of compromise of accounts using any MFA is less than 0.1 percent of the general population.

All authenticators are vulnerable

There is a broad range of mechanisms to break authenticators. That doesn’t make all authenticators equally vulnerable. Costs vary massively by attack type, and attacks that preserve anonymity and don’t require proximity to the target are much easier to achieve. Channel-Jacking and Real-Time Phishing are the most dominant ways we see non-password authenticators compromised.

Channel independent, verifier impersonation-resistant authenticator types—such as smartcards, Windows Hello, and FIDO—are incredibly hard to crack. Given an overall strong authentication rate of only about 10 percent, doing any form of MFA takes you out of reach of most attacks. Turn on MFA now and start building a long-term authenticator strategy that relies on “phish proof” authenticators, such as Windows Hello and FIDO.

To learn more, read All your creds are belong to us!

The post Your password doesn’t matter—but MFA does! appeared first on Microsoft Security.

Rethinking how we learn security

A couple of years ago, I wrote an article on the relative lack of investor and startup interest in addressing a crucial CISO priority—the preparedness of employees on the security team. Considering what seems to be a steady stream of news about breaches, what can be done to encourage more people to get into cybersecurity and how we can better prepare cyber pros to succeed?

In my own experience, I’ve read white papers and manuals, taken bootcamps and practice tests, and slogged through hours of recorded content. It’s a lot to process, and the value depends largely on the quality of the instructor or delivery format. In this evolving threat environment, content is also outdated as soon as it’s published. And training for security professionals is focused on certifications, not necessarily practical outcomes.

There’s also an organizational problem: Who in an enterprise owns cyber readiness? HR? A Chief Learning Officer? The CISO? If we’re going to find, hire, and retain tomorrow’s cyber workforce, we must rethink how we reach and prepare people for their careers, so they can continuously learn and stay current on the threats and the tools in front of them. With up to 2 million unfilled cyber roles, this is really a societal challenge.

One innovator that is addressing this is Boulder, Colorado-based Circadence Corporation. I met their CEO, Mike Moniz, at a cyber conference in DC. After one conversation, and upon seeing their “Project Ares®” cyber learning platform, I knew they were on to something. Since then, Circadence and Microsoft have built a very promising partnership to help Circadence scale globally, using Azure infrastructure and platform services, to reach and train more of tomorrow’s cyber workforce—and we enjoy the partnership and the chance to help.

Circadence focuses on cybersecurity learning and readiness. They build and run immersive, gamified cyber ranges that create a real-time cyber learning environment. Chief among these is Project Ares, which supports all security proficiency levels of an individual or team—from early career starters to seasoned cyber professionals—for enterprise, government, and academic organizations. Artificial intelligence (AI) powers the delivery of gamified training exercises in battle room and mission virtual machine environments based on actual cyberattack scenarios happening today—such as ransomware, advanced persistent threats, and attacks against industrial control systems.

I signed up for a Circadence account and gave it a shot. I’m not a gamer, but I was really impressed with the UI. Was Circadence actually trying to make learning fun? Project Ares is rooted in proven learning theories and cognitive research. They used resources like Bloom’s Taxonomy of Learning and educational concepts like “reinforcement learning” and “cognitive disfluency” (interrupting the flow of learning with the inclusion of testing, questionnaires, and polls) to match accepted learning concepts with gamified experiences. This isn’t just about making a video game for cyber. And it isn’t just “fun” but informative, educational, practical, and equally innovative without being intimidating.

The learning scenarios are immersive and address varied learning styles, which are two critical design points for maintaining player engagement and lengthening attention span. The platform draws learners across the stages of Bloom’s Taxonomy by:

  • Starting with explanations of techniques, skills, or adversary tactics.
  • Progressing through application of those skills in controlled battle rooms.
  • Arriving at the synthesis of skills and critical thinking to analyze, evaluate, and take actions in an emulated, high-fidelity network against actual malware and emulated threat actors.

Project Ares provides multiple scenarios along a work-role learning path, where you’re required to not only read about cybersecurity, but also must evaluate events in a true network and generate options to achieve objectives. The current catalog contains over 30 cyber games, battle rooms, and missions that provide exposure and experience across many of NIST’s National Initiative for Cybersecurity Education (NICE) work roles in a modern, engaging way.

To learn more about security team training on gamified cyber security ranges in Azure, I sat down with Keenan Skelly, Vice President of Global Partnerships and Security Evangelist. You can watch my interview with Keenan.

This was a great overview of a partner thinking ahead in a creative way to address a major problem in cyber. I encourage anyone interested in improving their own cyber skills, or their team’s skills, to look at gamified learning. Given how younger people interact with IT, it’ll be increasingly important in how we attract them to the industry.

In my next post, I’ll dive deeper into practical learning and defender exercises. In the meantime, bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Rethinking how we learn security appeared first on Microsoft Security.

TLS version enforcement capabilities now available per certificate binding on Windows Server 2019

At Microsoft, we often develop new security features to meet the specific needs of our own products and online services. This is a story about how we solved a very important problem and are sharing the solution with customers. As engineers worldwide work to eliminate their own dependencies on TLS 1.0, they run into the complex challenge of balancing their own security needs with the migration readiness of their customers. Microsoft faced this as well.

To date, we’ve helped customers address these issues by adding TLS 1.2 support to older operating systems, by shipping new logging formats in IIS for detecting weak TLS usage by clients, as well as providing the latest technical guidance for eliminating TLS 1.0 dependencies.

Now Microsoft is pleased to announce a powerful new feature in Windows to make your transition to a TLS 1.2+ world easier. Beginning with KB4490481, Windows Server 2019 now allows you to block weak TLS versions from being used with individual certificates you designate. We call this feature “Disable Legacy TLS” and it effectively enforces a TLS version and cipher suite floor on any certificate you select.

Disable Legacy TLS also allows an online or on-premises web service to offer two distinct groupings of endpoints on the same hardware: one that allows only TLS 1.2+ traffic, and another that accommodates legacy TLS 1.0 traffic. The changes are implemented in HTTP.sys and, in conjunction with the issuance of additional certificates, allow traffic to be routed to the new endpoint with the appropriate TLS version. Prior to this change, deploying such capabilities would have required an additional hardware investment, because such settings were only configurable system-wide via the registry.
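Conceptually, Disable Legacy TLS acts as a per-binding version floor. A simplified Python model of the routing behavior (this is not the actual HTTP.sys implementation, and the hostnames are made up) helps show why two endpoint groupings can coexist on one host:

```python
# Simplified model of per-certificate TLS version floors (not the actual
# HTTP.sys implementation): each binding pairs a certificate with a minimum
# TLS version, and client handshakes below that floor are rejected.
TLS_ORDER = {"1.0": 0, "1.1": 1, "1.2": 2, "1.3": 3}

BINDINGS = {
    "secure.contoso.example": {"cert": "cert-modern", "min_tls": "1.2"},
    "legacy.contoso.example": {"cert": "cert-legacy", "min_tls": "1.0"},
}

def accept_handshake(hostname: str, client_tls: str) -> bool:
    binding = BINDINGS[hostname]
    return TLS_ORDER[client_tls] >= TLS_ORDER[binding["min_tls"]]

print(accept_handshake("secure.contoso.example", "1.0"))  # False: below the floor
print(accept_handshake("secure.contoso.example", "1.2"))  # True
print(accept_handshake("legacy.contoso.example", "1.0"))  # True: legacy endpoint
```

Because the floor travels with the certificate binding rather than the machine, migrating customers can be pointed at the legacy endpoint while the modern endpoint enforces TLS 1.2+ on the same hardware.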

For a deep dive on this important new feature and implementation details and scenarios, please see Technical Guidance for Disabling Legacy TLS. Microsoft will also look to make this feature available in its own online services based on customer demand.

Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post TLS version enforcement capabilities now available per certificate binding on Windows Server 2019 appeared first on Microsoft Security.

How to prevent phishing attacks that target your customers with DMARC and Office 365

You already know that email is the number one attack vector for cybercriminals. But what you might not know is that without a standard email security protocol called Domain-based Message Authentication, Reporting, and Conformance (DMARC), your organization is open to the phishing attacks that target your customers, crater your email deliverability rates, and crush your email-based revenue streams.

For all the utility of email, which remains the ultimate app for business collaboration and communication, it does have a serious flaw: the ability for a bad actor to pretend to be someone else in an email message. This can be done through one of two attack techniques: spoofing and impersonation. Spoofing is when the sender is attempting to send mail from, or on behalf of, the exact target domain. Impersonation is when the sender is attempting to send mail that is a lookalike, or visually similar, to a targeted domain, targeted user, or targeted brand. When cybercriminals hijack your brand identity, especially your legitimate domains, the phishing attacks they launch against your customers, marketing prospects, and other businesses and consumers can be catastrophic for them—and your business.

Email-based brand spoofing and impersonations surged 250 percent in 2018, with consumers now losing $172 billion to these and other internet scams on an annual basis. More than 90 percent of businesses have been hit by such impersonations, with average losses from successful attacks now standing at $2 million—with an additional $7.9 million in costs when they result in a data breach.

DMARC can help you take control of who can send email messages on your behalf, eliminating the ability for cybercriminals to use your domain to send their illegitimate messages. In addition to blocking fake messages from reaching customers, it helps prevent your business-to-business customers from partner invoice scams like the kind that recently defrauded one large, publicly traded business that lost $45 million. Not a good look for your brand, and a sure way to lose your customers, partners, and brand reputation.

But to protect your corporate domains and prevent executive spoofing of your employees, DMARC must be implemented properly across all your domains and subdomains. And you’ll want your supply chain to do the same to protect your company and partners from such scams. Today, 50 percent of attacks involve “island hopping,” spoofing or impersonating one trusted organization to attack another within the same business ecosystem.

Great, but what exactly is DMARC?

For those not yet familiar with the term, DMARC acts as the policy layer for email authentication technologies already widely in use—including Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM).

At its most essential, DMARC gives organizations control over who is allowed to send emails on their behalf. It allows email receiver systems to recognize when an email is not coming from a specific brand’s approved domains—and gives guidance to the receiver about what to do with those unauthenticated email messages. DMARC with a p=quarantine or p=reject policy is required to block those illegitimate email messages from ever reaching their targets.
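A DMARC policy is published as a DNS TXT record, for example `v=DMARC1; p=reject; rua=mailto:dmarc@contoso.example` (the domain and reporting address here are illustrative). A small sketch that parses such a record and checks whether the policy is actually at enforcement:

```python
# Parse a DMARC TXT record and check whether the policy is at enforcement
# (p=quarantine or p=reject). The record and address below are illustrative.
def parse_dmarc(record: str) -> dict:
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, value = part.split("=", 1)
            tags[key.strip()] = value.strip()
    return tags

def is_enforcing(record: str) -> bool:
    tags = parse_dmarc(record)
    return tags.get("v") == "DMARC1" and tags.get("p") in ("quarantine", "reject")

record = "v=DMARC1; p=reject; rua=mailto:dmarc@contoso.example"
print(is_enforcing(record))              # True
print(is_enforcing("v=DMARC1; p=none"))  # False: monitoring only
```

Note that a record with `p=none` only monitors; receivers will still deliver unauthenticated mail, which is why enforcement (`p=quarantine` or `p=reject`) is what actually blocks spoofed messages.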

Today, 57 percent of consumer email in industries such as healthcare and retail is fraudulent. Consumer-focused brand impersonations are up 11 times in the last five years, 80 percent involving email. In 2018, the IC3 received 20,373 BEC/E-mail Account Compromise (EAC) complaints with adjusted losses of over $1.2 billion. Those attacks target your accounting, payroll, and HR departments, so your outbound marketing programs can become toxic to recipients, obliterating your outbound email programs and the revenue they generate.

Microsoft support for email authentication and DMARC

As the vast majority of businesses continue to migrate to capable and robust cloud platforms such as Office 365, a new generation of cybercriminal organizations is rapidly innovating its methods to find nefarious new ways to circumvent the considerable security controls built into these platforms. Unfortunately, some organizations may not realize that they should fully implement DMARC to augment the security benefit of Office 365 email authentication.

Microsoft has implemented support for DMARC across all of its email platforms. This means that when someone sends an email to a Microsoft mailbox on a domain that has published a DMARC record with the reject policy, it will only deliver authenticated email to the mailbox, eliminating spoofing of email domains.

If you use Office 365 but aren’t utilizing custom domains, you don’t need to do anything else to configure or implement DMARC for your organization. But if you have custom domains, or you’re using on-premises Exchange servers in addition to Office 365, you’ll need to implement DMARC for outbound mail. All of this is straightforward, but implementing it across your entire email ecosystem requires some strategy. To ensure your corporate domains are protected, you’ll need to first publish a DMARC record in DNS with a policy of reject. Microsoft uses Agari’s DMARC reporting tool to enhance protection of Microsoft domains from being used in phishing attacks.

Read more about how Microsoft uses Agari to protect its domain and how that is used to validate email in Office 365 in this Microsoft documentation.

The rise of automated, hosted email authentication

The truth is, properly implementing DMARC means you need to identify every single one of your domains and subdomains, across all business units and outside partners—not just the ones you know to send email. That’s because any domain can be spoofed or impersonated, which means every domain should be DMARC-protected to make sure email receiver infrastructures can assess whether incoming messages purporting to come from any of your domains are legit. Brand protection that only covers some domains isn’t really brand protection at all.

The task of identifying and onboarding thousands of domains controlled by multiple business units, outside agencies, and other external partners, both on Office 365 and off, can be daunting. As a result, many organizations may discover that working with a DMARC provider that can fully automate the implementation process across all these parties plus supply channel partners is their best chance for success. This is especially true for those that offer fully hosted email authentication (DMARC, SPF, and DKIM) to simplify the otherwise tedious and time-consuming process involved with preventing brand impersonations—including ones that leverage domain spoofing.

3 steps to get started with DMARC

The good news is that DMARC is supported by 2.5 billion email inboxes worldwide, and more are joining these ranks every day. But unfortunately, even among organizations with DMARC records assigned to their domains, few have them set to p=reject enforcement. As it stands now, nearly 90 percent of Fortune 500 businesses remain unprotected against email-based spoofing attacks, putting their customers, partners, and other businesses at risk for phishing.

When DMARC is implemented using email ecosystem management solutions, organizations have seen phishing emails sent by fraudsters seeking to spoof them drop to near zero. According to Forrester Research, organizations have also seen email conversion rates climb on average 10 percent, leading to an average $4 million boost in revenues thanks to increased email engagement.

While it’s no small task, there are three steps that will help you move forward with DMARC and get started:

  1. Create a new DMARC record with specific policies to protect your organization from spoofing attacks targeting your employees, customers, prospects, and more. Note that the policy must be set to p=reject to prevent unauthorized mail from being delivered.
  2. Download Getting Started with DMARC, a special guide designed to provide an overview of DMARC and best practice resources.
  3. Request a free trial and see how Agari can help implement DMARC at your organization. As a member of the Microsoft Intelligent Security Association (MISA), and the provider of DMARC implementation for more domains than any other vendor, Agari offers a free trial to Office 365 users looking to protect their customers, employees, and partners from phishing-based brand spoofing attacks.

Given the threat from impersonation scams, and the benefits that come from employing the right approaches to reducing it, don’t be surprised if DMARC-based email authentication jumps to the top of the to-do list for a growing number of businesses. With luck, brand imposters will never know what hit them.

Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post How to prevent phishing attacks that target your customers with DMARC and Office 365 appeared first on Microsoft Security.

Top 5 use cases to help you make the most of your Cloud Access Security Broker

The number of apps and the flexibility for users to access them from anywhere continues to increase. This presents a challenge for IT departments in ensuring secure access and protecting the flow of critical data with a consistent set of controls.

Cloud Access Security Brokers (CASBs) are a new generation of security solutions that are essential to any modern security strategy. CASBs provide a centralized experience that allows you to apply a standardized set of controls to the apps in your organization. The term Cloud Access Security Broker was coined by analyst firm Gartner; CASBs have since become one of the fastest-growing security categories, and Gartner considers them one of the top 10 security projects for companies to implement by 2020.

Microsoft Cloud App Security is a CASB that allows you to protect all apps in your organization, including third-party apps across cloud, on-premises, and custom applications. Powered by native integrations with Microsoft’s broader product ecosystem, Cloud App Security delivers state-of-the-art security for multi-cloud environments.

Due to the fast pace of the market, the capability set of CASBs continues to grow, making it increasingly challenging for customers to decide how to get started.

Today, we explore five of the top 20 CASB use cases we’ve identified: those that deliver an immediate return on your investment with very little deployment effort, before you move on to more advanced scenarios.

Use case #1: Discover all cloud apps and resources used in your organization

No matter where you are in your cloud journey, many of your users likely started leveraging cloud services a long time ago and have stored corporate data in various cloud applications.

A CASB provides you with full visibility over all data stored in sanctioned and connected cloud apps. It gives you deep insights about each file, allowing you to identify if it contains sensitive information, the owner and storage location, as well as the access level of the file. Access levels distinguish between private, internal, externally shared, and publicly shared files, allowing you to quickly identify potentially overexposed files putting sensitive information at risk.

Cloud App Security gives you multiple options to get started with Cloud Discovery. You can leverage firewall logs, an existing Secure Web Gateway, or the unique, single-click enablement via Microsoft Defender Advanced Threat Protection (ATP).

To learn how to get started with app discovery, read Discover and manage shadow IT in your network.

Use case #2: Identify and revoke access to risky OAuth apps

In recent years, OAuth apps have become a popular attack vector for adversaries. Hacker groups such as Fancy Bear have leveraged OAuth apps to trick users into authorizing the use of their corporate credentials, for example by duplicating the UI of a seemingly trustworthy platform.

A CASB enables you to closely monitor which OAuth apps are being authorized against your corporate environment and either manually review them or create policies that automatically revoke access if certain risky criteria are met. A key threat indicator is the combination of an app that has requested a high level of permissions with a low community use status, which indicates that the app is not commonly found in other organizations and is therefore less likely to be trustworthy.
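The risk heuristic described above can be sketched in a few lines. The app records, permission names, and thresholds here are hypothetical placeholders, not a real CASB policy API:

```python
# Flag OAuth apps that combine broad permissions with low community use,
# per the heuristic above. All names and thresholds are illustrative.

HIGH_RISK_PERMISSIONS = {"Mail.ReadWrite", "Files.ReadWrite.All", "Directory.ReadWrite.All"}

def is_risky(app):
    """Return True when an app pairs high privileges with rare community use."""
    broad = bool(HIGH_RISK_PERMISSIONS & set(app["permissions"]))
    rare = app["community_use"] == "low"  # seldom seen in other tenants
    return broad and rare

apps = [
    {"name": "TrustedCRM", "permissions": ["Mail.Read"], "community_use": "high"},
    {"name": "UnknownSync", "permissions": ["Files.ReadWrite.All"], "community_use": "low"},
]

to_review = [a["name"] for a in apps if is_risky(a)]
print(to_review)  # ['UnknownSync']
```

A real policy would add more signals (publisher reputation, consent volume), but the shape of the decision is the same: broad permissions plus low prevalence warrants review.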

Once you’ve enabled app discovery, all you need to do is connect the relevant apps like Office 365, Salesforce, or G Suite to the service. You’re then alerted when new risky OAuth apps are authorized, so you can start managing them.

To learn more about how to get started with app discovery, read Manage OAuth apps.

Use case #3: Identify compromised user accounts

Identity attacks have increased by more than 300 percent over the past year, making them a key source of compromise and the number one threat vector for organizations.

A CASB learns the behavior of users and other entities in an organization and builds a behavioral profile around them. If an account is compromised and executes activities that differ from the baseline user profile, abnormal behavior detections are raised.

Using built-in and custom anomaly detections, IT is alerted on activities, such as impossible travel, as well as activities from infrequent countries, or the implementation of inbox forwarding rules where emails are automatically forwarded to external email addresses. These alerts allow you to act quickly and quarantine a user account to prevent damage to your organization. All you have to do is connect the relevant apps to Cloud App Security and activate our built-in threat detection policies.
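To make the “impossible travel” detection concrete, here is a minimal sketch: two sign-ins whose implied travel speed exceeds what any commercial flight allows are flagged. The coordinates, timestamps, and speed threshold are illustrative, not Cloud App Security’s actual algorithm:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def impossible_travel(sign_in_a, sign_in_b, max_speed_kmh=1000):
    """Flag two sign-ins whose implied travel speed exceeds max_speed_kmh."""
    dist = haversine_km(sign_in_a["lat"], sign_in_a["lon"],
                        sign_in_b["lat"], sign_in_b["lon"])
    hours = abs(sign_in_b["time"] - sign_in_a["time"]) / 3600
    return hours > 0 and dist / hours > max_speed_kmh

# Sign-in from Seattle, then from Sydney one hour later: ~12,500 km apart.
seattle = {"lat": 47.6, "lon": -122.3, "time": 0}
sydney = {"lat": -33.9, "lon": 151.2, "time": 3600}
print(impossible_travel(seattle, sydney))  # True
```

Production detections layer in VPN egress points, ISP data, and per-user baselines to reduce false positives, but the core geometry is this simple.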

To learn how to get started, read Monitor alerts in Cloud App Security.

Use case #4: Enforce DLP policies for sensitive data stored in your cloud apps

Cloud services such as Office 365 or Slack are key productivity solutions in many organizations today. Consequently, sensitive corporate data is uploaded and shared across them.

For existing data, a CASB solution can help you identify files that contain sensitive information and it provides several remediation options, including removing external sharing permissions, encrypting the file, placing it in admin quarantine, or deleting it if necessary.

Additionally, you can enforce data loss prevention (DLP) policies that scan every file as soon as it’s uploaded to a cloud app, to alert on policy violations and automatically apply data labels and relevant restrictions to protect your information. These policies can be created using advanced techniques such as data identities, regular expressions, OCR, and exact data matching.
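A DLP-style content scan of the kind described above can be sketched with a regular expression. The pattern (a U.S. Social Security number shape) and the remediation labels are illustrative only; real DLP engines add checksums, proximity rules, OCR, and exact data matching:

```python
import re

# Flag files whose content matches a sensitive-data pattern, then attach a
# remediation action, mirroring the file-policy flow described above.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scan_file(name, content):
    """Return a policy verdict for one file."""
    if SSN_PATTERN.search(content):
        return {"file": name, "verdict": "violation", "action": "remove external sharing"}
    return {"file": name, "verdict": "clean", "action": None}

print(scan_file("payroll.txt", "Employee SSN: 123-45-6789"))
print(scan_file("notes.txt", "Meeting at 10am"))
```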

To learn how to get started with a centralized DLP strategy across your key apps, read File policies.

Use case #5: Enforce adaptive session controls to manage user actions in real time

In a cloud-first world, identity has become the new perimeter—protecting access to all your corporate resources at the front door.

Cloud App Security leverages Azure Active Directory (Azure AD) Conditional Access policies to determine a user’s session risk upon sign-in. Based on the risk level associated with a user session, you can enforce adaptive in-session controls that determine which actions a user can carry out and which may be limited or blocked entirely. This seamless identity-based experience keeps users productive while preventing potentially risky user actions in real time. The adaptive controls include the prevention of data exfiltration by blocking actions such as download, copy, cut, or print, as well as the prevention of malicious data infiltration to your cloud apps by blocking malicious uploads or pasted text.
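The risk-to-action mapping described above can be modeled as a simple lookup table. The risk levels and action sets here are illustrative, not Cloud App Security’s actual policy schema:

```python
# Map a session risk level (as a Conditional Access policy might supply)
# to the set of actions the session is allowed to perform.
SESSION_POLICIES = {
    "low":    {"view", "download", "copy", "print", "upload"},
    "medium": {"view", "upload"},   # block common exfiltration paths
    "high":   {"view"},             # effectively a monitor-only session
}

def allowed(risk_level, action):
    """Return True when the action is permitted at this session risk level."""
    return action in SESSION_POLICIES.get(risk_level, set())

print(allowed("low", "download"))     # True
print(allowed("medium", "download"))  # False
```

The design point is that the policy is evaluated per action inside the session, not just once at sign-in, so a riskier session keeps working in a degraded mode instead of being cut off entirely.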

You can apply a standardized set of controls to any app in your organization, whether it’s a cloud app, on-premises app, or a custom application, giving you a consistent set of controls to protect your most sensitive information.

To get started with our built-in templates for inline controls, read Deploy Conditional Access App Control for featured apps.

Starting a CASB project can be daunting given the breadth of capabilities and possibilities of configuration. The five use cases outlined above, and the focus on simple deployment and optimization of UI in Cloud App Security, will ensure that you can make the most of your investment and get started quickly. For more use cases, download our Top 20 CASB use cases e-book.

Learn more and provide feedback

As always, we want to hear from you! If you have any suggestions, questions, or comments, please visit us on our TechCommunity page.

The post Top 5 use cases to help you make the most of your Cloud Access Security Broker appeared first on Microsoft Security.

Azure Sentinel—the cloud-native SIEM that empowers defenders is now generally available

Machine learning and artificial intelligence (AI) hold great promise in addressing many of the global cyber challenges we see today. They give our cyber defenders the ability to identify, detect, and block malware almost instantaneously. And together they give security admins the ability to deconflict tasks, separating the signal from the noise, allowing them to prioritize the most critical work. It is why today, I’m pleased to announce that Azure Sentinel, a cloud-native SIEM that provides intelligent security analytics at cloud scale for enterprises of all sizes and workloads, is now generally available.

Our goal has remained the same since we first launched Microsoft Azure Sentinel in February: empower security operations teams to help enhance the security posture of our customers. Traditional Security Information and Event Management (SIEM) solutions have not kept pace with digital change. I commonly hear from customers that they’re spending more time on deployment and maintenance of SIEM solutions, which leaves them unable to properly handle the volume of data or the agility of adversaries.

Recent research tells us that 70 percent of organizations continue to anchor their security analytics and operations with SIEM systems,1 and 82 percent are committed to moving large volumes of applications and workloads to the public cloud.2 Security analytics and operations technologies must lean in and help security analysts deal with the complexity, pace, and scale of their responsibilities. To accomplish this, 65 percent of organizations are leveraging new technologies for process automation/orchestration, while 51 percent are adopting security analytics tools featuring machine learning algorithms.3 This is exactly why we developed Azure Sentinel—a SIEM re-invented in the cloud to address the modern challenges of security analytics.

Learning together

When we kicked off the public preview for Azure Sentinel, we were excited to learn and gain insight into the unique ways Azure Sentinel was helping organizations and defenders on a daily basis. We worked with our partners all along the way; listening, learning, and fine-tuning as we went. With feedback from 12,000 customers and more than two petabytes of data analyzed, we were able to examine and dive deep into a large, complex, and diverse set of data from customers who all had one thing in common: a need to empower their defenders to be more nimble and efficient when it comes to cybersecurity.

Our work with RapidDeploy offers one compelling example of how Azure Sentinel is accomplishing this complex task. RapidDeploy creates cloud-based dispatch systems that help first responders act quickly to protect the public. There’s a lot at stake, and the company’s cloud-native platform must be secure against an array of serious cyberthreats. So when RapidDeploy implemented a SIEM system, it chose Azure Sentinel, one of the world’s first cloud-native SIEMs.

Microsoft recently sat down with Alex Kreilein, Chief Information Security Officer at RapidDeploy. Here’s what he shared: “We build a platform that helps save lives. It does that by reducing incident response times and improving first responder safety by increasing their situational awareness.”

Now RapidDeploy uses the complete visibility, automated responses, fast deployment, and low total cost of ownership in Azure Sentinel to help it safeguard public safety systems. “With many SIEMs, deployment can take months,” says Kreilein. “Deploying Azure Sentinel took us minutes—we just clicked the deployment button and we were done.”

Learn even more about our work with RapidDeploy by checking out the full story.

Another great example of a company finding results with Azure Sentinel is ASOS. As one of the world’s largest online fashion retailers, ASOS knows they’re a prime target for cybercrime. The company has a large security function spread across five teams and two sites—but in the past, it was difficult for ASOS to gain a comprehensive view of cyberthreat activity. Now, using Azure Sentinel, ASOS has created a bird’s-eye view of everything it needs to spot threats early, allowing it to proactively safeguard its business and its customers. And as a result, it has cut issue resolution times in half.

“There are a lot of threats out there,” says Stuart Gregg, Cyber Security Operations Lead at ASOS. “You’ve got insider threats, account compromise, threats to our website and customer data, even physical security threats. We’re constantly trying to defend ourselves and be more proactive in everything we do.”

Already using a range of Azure services, ASOS identified Azure Sentinel as a platform that could help it quickly and easily unite its data. This includes security data from Azure Security Center and Azure Active Directory (Azure AD), along with data from Microsoft 365. The result is a comprehensive view of its entire threat landscape.

“We found Azure Sentinel easy to set up, and now we don’t have to move data across separate systems,” says Gregg. “We can literally click a few buttons and all our security solutions feed data into Azure Sentinel.”

Learn more about how ASOS has benefitted from Azure Sentinel.

RapidDeploy and ASOS are just two examples of how Azure Sentinel is helping businesses process data and telemetry into actionable security alerts for investigation and response. We have an active GitHub community of preview participants, partners, and even Microsoft’s own security experts who are sharing new connectors, detections, hunting queries, and automation playbooks.

With these design partners, we’ve continued our innovation in Azure Sentinel. It starts with the ability to connect to any data source, whether in Azure, on-premises, or even in other clouds. We continue to add new connectors to different sources and more machine learning-based detections. Azure Sentinel will also integrate with the Azure Lighthouse service, which will give service providers and enterprise customers the ability to view Azure Sentinel instances across different tenants in Azure.

Secure your organization

Now that Azure Sentinel has moved out of public preview and is generally available, there’s never been a better time to see how it can help your business. Traditional on-premises SIEMs require a combination of infrastructure costs and software costs, all paired with annual commitments or inflexible contracts. We are removing those pain points, since Azure Sentinel is a cost-effective, cloud-native SIEM with predictable billing and flexible commitments.

Infrastructure costs are reduced since resources automatically scale as you need them, and you only pay for what you use. Or you can save up to 60 percent compared to pay-as-you-go pricing by taking advantage of capacity reservation tiers. You receive predictable monthly bills and the flexibility to change capacity tier commitments every 31 days. On top of that, bringing in data from Office 365 audit logs, Azure activity logs, and alerts from Microsoft Threat Protection solutions doesn’t require any additional payments.
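The choice between pay-as-you-go and a capacity reservation comes down to simple break-even math. The per-GB rate and tier price below are hypothetical placeholders, not Azure Sentinel’s published prices; they only illustrate the comparison:

```python
# Back-of-the-envelope comparison of pay-as-you-go vs. a reserved-capacity
# tier. All prices are hypothetical placeholders for illustration.
PAY_AS_YOU_GO_PER_GB = 2.00  # hypothetical $/GB ingested
RESERVATION = {"gb_per_day": 100, "price_per_day": 120.00}  # hypothetical tier

def daily_cost(gb_per_day):
    """Choose the cheaper of pay-as-you-go and the reserved-capacity tier."""
    payg = gb_per_day * PAY_AS_YOU_GO_PER_GB
    if gb_per_day <= RESERVATION["gb_per_day"]:
        return min(payg, RESERVATION["price_per_day"])
    return payg  # above the tier, this sketch bills everything pay-as-you-go

print(daily_cost(30))   # 60.0  -> pay-as-you-go is cheaper at low volume
print(daily_cost(100))  # 120.0 -> the reservation beats 200.0 pay-as-you-go
```

At the hypothetical rates above, the reservation pays for itself once daily ingestion passes 60 GB; the same break-even logic applies at whatever real rates your bill shows.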

Please join me for the Azure Security Expert Series where we will focus on Azure Sentinel on Thursday, September 26, 2019, 10–11 AM Pacific Time. You’ll learn more about these innovations and see real use cases on how Azure Sentinel helped detect previously undiscovered threats. We’ll also discuss how Accenture and RapidDeploy are using Azure Sentinel to empower their security operations team.

Get started today with Azure Sentinel!

1 Source: ESG Research Survey, Security Analytics and Operations: Industry Trends in the Era of Cloud Computing, September 2019
2 Source: ESG Research Survey, Security Analytics and Operations: Industry Trends in the Era of Cloud Computing, September 2019
3 Source: ESG Research Survey, Security Analytics and Operations: Industry Trends in the Era of Cloud Computing, September 2019

The post Azure Sentinel—the cloud-native SIEM that empowers defenders is now generally available appeared first on Microsoft Security.

Microsoft is awarded Zscaler’s Technology Partner of the Year for 2019

Last week at Zscaler’s user conference, Zenith Live, Microsoft received Zscaler’s Technology Partner of the Year Award in the Impact category. The award was given to Microsoft for the depth and breadth of integrations we’ve collaborated with Zscaler on and the positive feedback received from customers about these integrations.

Together with Zscaler—a Microsoft Intelligent Security Association (MISA) member—we’re focused on providing our joint customers with secure, fast access to the cloud for every user. Since partnering with Zscaler, we’ve delivered several integrations that help our customers better secure their environments, including:

  • Azure Active Directory (Azure AD) integration to extend conditional access policies to Zscaler applications to validate user access to cloud-based applications. We also announced support for user provisioning of Zscaler applications to enable automated, policy-based provisioning and deprovisioning of user accounts with Azure AD.
  • Microsoft Intune integration that allows IT administrators to provision Zscaler applications to specific Azure AD users or groups within the Intune console and configure connections by using the existing Intune VPN profile workflow.
  • Microsoft Cloud App Security integration to discover and manage access to Shadow IT in an organization. Zscaler can be leveraged to send traffic data to Microsoft’s Cloud Access Security Broker (CASB) to assess cloud services against risk and compliance requirements before making access control decisions for the discovered cloud apps.

“We’re excited to see customers use Zscaler and Microsoft solutions together to deliver fast, secure, and direct access to the applications they need. The Technology Partner of the Year Award is a testament of Microsoft’s commitment to helping customers better secure their environments.”
—Punit Minocha, Vice President of Business Development at Zscaler

“The close collaboration between our teams and deep integration across Zscaler and Microsoft solutions help our joint customers be more secure and ensure their users stay productive. We’re pleased to partner with Zscaler and honored to be named Zscaler’s Technology Partner of the Year.”
—Alex Simons, Corporate Vice President of Program Management at Microsoft

We’re thrilled to be Zscaler’s Technology Partner of the Year in the Impact category and look forward to our continued partnership and what we and Zscaler will accomplish together.

The post Microsoft is awarded Zscaler’s Technology Partner of the Year for 2019 appeared first on Microsoft Security.

Overview of the Marsh-Microsoft 2019 Global Cyber Risk Perception survey results

Technology is dramatically transforming the global business environment, with continual advances in areas ranging from artificial intelligence (AI) and the Internet of Things (IoT) to data availability and blockchain. The speed at which digital technologies evolve and disrupt traditional business models keeps increasing. At the same time, cyber risks seem to evolve even faster—moving beyond data breaches and privacy concerns to sophisticated schemes that can disrupt entire businesses, industries, supply chains, and nations—costing the economy billions of dollars and affecting companies in every sector.

The hard truth organizations must face is that cyber risk can be mitigated and managed—but it cannot be eliminated. Results from the 2019 Marsh-Microsoft Global Cyber Risk Perception survey reveal several encouraging signs of improvement in the way that organizations view and manage cyber risk. Now that cyber risk is clearly and firmly at the top of corporate risk agendas, we see a positive shift towards the adoption of more rigorous, comprehensive cyber risk management in many areas. However, many organizations still struggle with how to best articulate, approach, and act upon cyber risk within their overall enterprise risk framework—even as the tide of technological change brings new and unanticipated cyber risk complexity.

Highlights from the survey

While companies see cyber events as a top priority, confidence in cyber resilience is declining. Cyber risk became even more firmly entrenched as an organizational priority in the past two years. Yet at the same time, organizations’ confidence in their ability to manage the risk declined.

  • 79 percent of respondents ranked cyber risk as a top five concern for their organization, up from 62 percent in 2017.
  • Confidence declined in each of three critical areas of cyber resilience. Those saying they had “no confidence” increased from:
    • 9 percent to 18 percent for understanding and assessing cyber risks.
    • 12 percent to 19 percent for preventing cyber threats.
    • 15 percent to 22 percent for responding to and recovering from cyber events.

New technology brings increased cyber exposure

Technology innovation is vital to most businesses, but often adds to the complexity of an organization’s technology footprint, including its cyber risk.

  • 77 percent of the 2019 respondents cited at least one innovative operational technology they adopted or are considering.
  • 50 percent said cyber risk is almost never a barrier to the adoption of new technology, but 23 percent—including many smaller firms—said that for most new technologies, the risk outweighs potential business benefits.
  • 74 percent evaluate technology risks prior to adoption, but just 5 percent said they evaluate risk throughout the technology lifecycle—and 11 percent do not perform any evaluation.

Increasingly interdependent digital supply chains bring new cyber risks

The increasing interdependence and digitization of supply chains bring increased cyber risk to all parties, but many firms perceive the risks as one-sided.

  • There was a discrepancy in many organizations’ view of the cyber risk they face from supply chain partners, compared to the level of risk their organization poses to counterparties.
  • 39 percent said the cyber risk posed by their supply chain partners and vendors to their organization was high or somewhat high.
  • Only 16 percent said the cyber risk they themselves pose to their supply chain was high or somewhat high.
  • Respondents were more likely to set a higher bar for their own organization’s cyber risk management actions than for their suppliers.

Appetite for government role in managing cyber risks draws mixed views

Organizations generally see government regulation and industry standards as having limited effectiveness in helping manage cyber risk—with the notable exception of nation-state attacks.

  • 28 percent of businesses regard government regulations or laws as being very effective in improving cybersecurity.
  • 37 percent of businesses regard soft industry standards as being very effective in improving cybersecurity.
  • A key area of difference relates to cyberattacks by nation-state actors:
    • 54 percent of respondents said they are highly concerned about nation-state cyberattacks.
    • 55 percent said government needs to do more to protect organizations against nation-state cyberattacks.

Cyber investments focus on prevention, not resilience

Many organizations focus on technology defenses and investments to prevent cyber risk, to the neglect of assessment, risk transfer, response planning, and other risk management areas that build cyber resilience.

  • 88 percent said information technology/information security (IT/InfoSec) is one of the three main owners of cyber risk management, followed by executive leadership/board (65 percent) and risk management (49 percent).
  • Only 17 percent of executives say they spent more than a few days on cyber risk over the past year.
  • 64 percent said a cyberattack on their organization would be the biggest driver of increased cyber risk spending.
  • 30 percent of organizations reported using quantitative methods to express cyber risk exposures, up from 17 percent in 2017.
  • 83 percent have strengthened computer and system security over the past two years, but less than 30 percent have conducted management training or modeled cyber loss scenarios.

Cyber insurance

Cyber insurance coverage is expanding to meet evolving threats, and attitudes toward policies are also shifting.

  • 47 percent of organizations said they have cyber insurance, up from 34 percent in 2017.
  • Larger firms were more likely to have cyber insurance—57 percent of those with annual revenues above $1 billion had a policy, compared to 36 percent of those with revenue under $100 million.
  • Uncertainty about whether available cyber insurance could meet their firm’s needs dropped to 31 percent, down from 44 percent in 2017.
  • 89 percent of those with cyber insurance were highly confident or fairly confident their policies would cover the cost of a cyber event.

Key takeaways

At a practical level, this year’s survey points to a number of best practices that the most cyber resilient firms employ and which all firms should consider adopting:

  • Create a strong organizational cybersecurity culture with clear, shared standards for governance, accountability, resources, and actions.
  • Quantify cyber risk to drive better informed capital allocation decisions, enable performance measurement, and frame cyber risk in the same economic terms as other enterprise risks.
  • Evaluate the cyber risk implications of a new technology as a continual and forward-looking process throughout the lifecycle of the technology.
  • Manage supply chain risk as a collective issue, recognizing the need for trust and shared security standards across the entire network, including the organization’s cyber impact on its partners.
  • Pursue and support public-private partnerships around critical cyber risk issues that can deliver stronger protections and baseline best practice standards for all.

Despite the decline in organizational confidence in the ability to manage cyber risk, we’re optimistic that more organizations are now clearly recognizing the critical nature of the threat and beginning to seek out and embrace best practices.

Effective cyber risk management requires a comprehensive approach employing risk assessment, measurement, mitigation, transfer, and planning, and the optimal program will depend on each company’s unique risk profile and tolerance.

Still, these recommendations address many of the common and most urgent aspects of cyber risk that organizations today are challenged with; as such, they should be viewed as signposts along the path to building true cyber resilience.

Learn more

Read the full 2019 Marsh-Microsoft Global Cyber Risk Perception survey or find additional report content on Marsh’s website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Overview of the Marsh-Microsoft 2019 Global Cyber Risk Perception survey results appeared first on Microsoft Security.

Operational resilience begins with your commitment to and investment in cyber resilience

Operational resilience cannot be achieved without a true commitment to and investment in cyber resilience. If global organizations are to weather geopolitical or socioeconomic upheaval, natural disasters, and cyber events, they need to reach a state where such events won’t disrupt their core operations and services.

To help increase stability and lessen the impact on their citizens, an increasing number of government entities have drafted regulations requiring the largest organizations to achieve a true state of operational resilience: one where both individual organizations and their industry absorb and adapt to shocks, rather than contributing to them. Many phenomena have led to this increased governance, including high-profile cyberattacks like NotPetya and WannaCrypt, and the proliferation of ransomware.

Nation-state and cybercrime attacks on critical infrastructure and the financial sector are rising, and tech innovation is pervading more and more industries. These trends join an alarming increase in severe natural disasters, an unstable global geopolitical environment, and global financial market instability on the list of threats organizations should prepare for.

Potential impact of cybercrime attacks

Taken individually, any of these events can cripple critical business and government operations. A lightning strike this summer caused the UK’s National Grid to suffer the biggest blackout in decades. It affected homes across the country, shut down traffic signals, and closed some of the busiest train stations in the middle of the Friday evening rush hour. With trains needing to be manually rebooted, the rhythm of everyday work life was disrupted. The impact of cybercrime attacks can be as significant, and often longer term.

NotPetya cost businesses more than $10 billion; pharmaceutical giant Merck put its bill at $870 million alone. For more than a week, the malware shut down cranes and security gates at Maersk shipping terminals, as well as most of the company’s IT network—from the booking site to systems handling cargo manifests. It took two months to rebuild all the software systems, and three months before all cargo in transit was tracked down—with recovery dependent on a single server having been accidentally offline during the attack due to the power being cut off.

The combination of all these threats will cause disruption to businesses and government services on a scale that hasn’t been seen before. Cyber events will also undermine the ability to respond to other types of events, so they need to be treated holistically as part of planning and response.

Extending operational resilience to cover your cybersecurity program should not mean applying different principles to attacks, outages, and third-party failures than you would to physical attacks and natural hazards. In all cases, the emphasis is on having plans in place to deliver essential services whatever the cause of the disruption. Organizations are responding by rushing to purchase cyber-insurance policies and increasing their spending on cybersecurity. I encourage them to take a step back, develop a critical understanding of what those policies actually cover, and target their investment so the approach supports operational resilience.

As we continue to witness an unparalleled increase in cyber-related attacks, we should take note that a large majority of the attacks have many factors in common. At Microsoft, we’ve written at length on the controls that best position an organization to defend against and respond to a cyber event.

We must not stand still

The adversary is innovating and accelerating. We must continue to be vigilant and thorough in both security posture, which must be based on “defense in depth,” and in sophistication of response.

The cost of data breaches continues to rise; the global average cost of a data breach is $3.92 million according to the 2019 Ponemon Institute report. This is up 1.5 percent from 2018 and 12 percent higher than in 2014. These continually rising costs have helped galvanize global entities around the topic of operational resilience.

In July 2018, the Bank of England published comprehensive guidelines on operational resilience that set a robust standard for rigorous controls across all key areas: technology, legal, communications, financial solvency, business continuity, redundancy, failover, governmental, and customer impact. The guidelines also call for a full understanding of the systems and processes that underlie your business products and services.

This paper leaves very few stones unturned and includes a clear statement of my thesis—dealing with cyber risk is an important element of operational resilience and you cannot achieve operational resilience without achieving cyber resilience.

Imagine for a moment that your entire network, including all your backups, is impacted by a cyberattack, and you cannot complete even a single customer banking transaction. That’s only one target; it’s not hard to extrapolate from here to attacks that shut down stock trades, real estate transactions, and fund transfers, or even attacks on critical infrastructure like healthcare, energy, and water systems. In the event of a major attack, all these essential services will be unavailable until IT systems are restored to at least a baseline of operations.

It doesn’t require professional cybersecurity expertise to understand the impact of shutting down critical services, which is why the new paradigm for cybersecurity must begin not with regulations but with a program to build cyber resilience. The long list of public, wide-reaching cyberattacks where the companies were compliant with required regulations, but still were breached, demonstrates why we can no longer afford to use regulatory requirements as the ultimate driver of cybersecurity.

While it will always be necessary to be fully compliant with regulations like GDPR, SOX, HIPAA, MAS, regional banking regulators, and any others that might be relevant to your industry, it simply isn’t sufficient for a mature cyber program to use this compliance as the only standard. Organizations must build a program that incorporates defense in depth and implements fundamental security controls like MFA, encryption, network segmentation, patching, and the isolation and reduction of exceptions. We must also consider how our operations will continue after a catastrophic cyberattack and build systems that can both withstand an attack and remain resilient during one. The Bank of England uses the mnemonic WAR: withstand, absorb, recover.

The ability to do something as simple as restoring from recent backups will be tested in every ransomware attack, and many organizations will fail this test—not because they are not backing up their systems, but because they haven’t tested the quality of their backup procedures or practiced for a cyber event. Training is not enough. Operational resilience guidelines call for demonstrating that you have concrete measures in place to deliver resilient services and that both incident management and contingency plans have been tested. You’ll need to invest in scenario planning, tabletop exercises and red/blue team exercises that prove the rigor of your threat modeling and give practice in recovering from catastrophic cyber events.

Importance of a cyber recovery plan

Imagine, if you will, how negligent it would be for your organization to never plan and prepare for a natural disaster. A cyber event is the equivalent: the same physical, legal, operational, technological, human, and communication standards must apply to preparation, response, and recovery. We should all consider it negligence if we do not have a cyber recovery plan in place. Yet, while the majority of firms have a disaster recovery plan on paper, nearly a quarter never test it, and only 42 percent of global executives are confident their organization could recover from a major cyber event without it affecting their business.

Cybersecurity often focuses on defending against specific threats and vulnerabilities to mitigate cyber risk, but cyber resilience requires a more strategic and holistic view of what could go wrong and how your organization will address it as a whole. The cyber events you’ll face are real threats, and preparing for them must be treated like any other form of continuity and disaster recovery. The challenges to building operational resilience have become more intense in an increasingly hostile cyber environment, and this preparation is a topic we will continue to address.

Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Operational resilience begins with your commitment to and investment in cyber resilience appeared first on Microsoft Security.

Are students prepared for real-world cyber curveballs?

With a projected cybersecurity “skills gap” of millions of unfilled roles, educating a diverse workforce is critical to corporate and national cyber defense. However, are today’s students getting the preparation they need to do the cybersecurity work of tomorrow?

To help educators prepare meaningful curricula, the National Institute of Standards and Technology (NIST) has developed the National Initiative for Cybersecurity Education (NICE) Cybersecurity Workforce Framework. The U.S. Department of Energy (DOE) is also doing its part to help educate our future cybersecurity workforce through initiatives like the CyberForce Competition,™ designed to support hands-on cyber education for college students and professionals. The CyberForce Competition™ emulates real-world, critical infrastructure scenarios, including “cyber-physical infrastructure and lifelike anomalies and constraints.”

As anyone who’s worked in cybersecurity knows, a big part of operational reality is the unexpected curveball, which can range from an attacker pivoting while escalating privileges through a corporate domain to a CEO requesting talking points for an upcoming news interview about a recent breach. In many capture-the-flag and cyber-range exercises, these unexpected anomalies are called “injects”—the curveballs of the training world.

For the CyberForce Competition™, anomalies are mapped across the seven NICE Framework workforce categories illustrated below:

Image showing seven categories of cybersecurity: Operate and Maintain, Oversee and Govern, Collect and Operate, Securely Provision, Analyze, Protect and Defend, and Investigate.

NICE Framework Workforce categories, NIST SP 800-181.

Students were assessed based on how many and what types of anomalies they responded to and how effective/successful their responses were.

Tasks where students excelled

  • Threat tactic identification—Students excelled at identifying threat tactics and their corresponding methodologies. This was shown through an anomaly that required students to parse and analyze a log file to spot various indicators of insider threat; for example, too many sign-ins at one time, odd sign-in times, or sign-ins from non-standard locations.
  • Log file analysis and review—One task required students to identify non-standard browsing behavior by agents behind a firewall. To accomplish this, students had to write code to parse and analyze the log files of a fictitious company’s intranet web servers. Statistical evidence from the event indicates that students are comfortable writing code to parse log file data and perform data analysis.
  • Insider threat investigations—Students gravitated towards the anomalies and tasks connected to insider threat identification, which map to the Securely Provision pillar. Using the log analysis techniques described above, students identified, with a high rate of success, individuals with higher-than-average sign-in failure rates and those with anomalous successful sign-ins, such as from many different devices or locations.
  • Network forensics—The data indicated that overall the students had success with the network packet capture (PCAP) forensics via analysis of network traffic full packet capture streams. They also had a firm grasp on related tasks, including file system forensic analysis and data carving techniques.
  • Trivia—Students were not only comfortable with writing code and parsing data, but also showed they have solid comprehension and intelligence related to cybersecurity history and trivia. Success in this category ranked in the higher percentile of the overall competition.
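The log-analysis tasks above boil down to simple aggregation over sign-in events. A minimal sketch of that kind of insider-threat check, assuming a simplified event format (the field names, sample data, and thresholds are illustrative, not taken from the competition):

```python
from collections import defaultdict

# Hypothetical sign-in events: (user, result, location). A real exercise
# would parse these fields out of raw server log lines first.
EVENTS = [
    ("alice", "failure", "Chicago"),
    ("alice", "failure", "Chicago"),
    ("alice", "failure", "Chicago"),
    ("alice", "failure", "Chicago"),
    ("bob", "success", "Denver"),
    ("carol", "success", "Paris"),
    ("carol", "success", "Lagos"),
    ("carol", "success", "Hanoi"),
]

def flag_insider_threats(events, max_failures=3, max_locations=2):
    """Flag users with excessive sign-in failures or with sign-ins
    from an unusually high number of distinct locations."""
    failures = defaultdict(int)
    locations = defaultdict(set)
    for user, result, location in events:
        if result == "failure":
            failures[user] += 1
        locations[user].add(location)
    return {
        user
        for user in locations
        if failures[user] > max_failures or len(locations[user]) > max_locations
    }

print(sorted(flag_insider_threats(EVENTS)))  # ['alice', 'carol']
```

Here alice trips the failure-count threshold and carol the distinct-location threshold, which mirrors the anomaly types the students were asked to find.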

Pillar areas for improvement

  • Collect and Operate—This pillar “provides specialized denial and deception operations and collection of cybersecurity information that may be used to develop intelligence.” Statistical analysis gathered during the competition indicated that students were hesitant about the activities in this pillar, including some tasks they had completed successfully in other exercises. For example, some fairly simple tasks, such as analyzing logs for specific numbers of entries and records on a certain date, had a zero percent completion rate. Non-completion could reflect technical inability on the part of the students, but could also have been due to a poorly written anomaly/task or even an issue with sign-ins to certain lab equipment.
  • Investigate—Based on the data, the Investigate pillar posed some challenges for the students. Students had a zero percent success rate on image analysis and an almost zero percent success rate on malware analysis. In addition, students had a zero percent success rate in this pillar for finding and identifying a bad file in the system.

Key takeaways

Frameworks like NIST NICE and competitions like the DOE CyberForce Competition™ are helping to train up the next generation of cybersecurity defenders. Analysis from the most recent CyberForce Competition™ indicates that students are comfortable with tasks in the “Protect and Defend” pillar and are proficient in many critical tasks, including network forensics and log analysis. The data points to areas for improvement especially in the “Collect and Operate” and “Investigate” pillars, and for additional focus on forensic skills and policy knowledge.


The CyberForce work was partially supported by the U.S. Department of Energy Office of Science under contract DE-AC02-06CH11357.

The post Are students prepared for real-world cyber curveballs? appeared first on Microsoft Security.

Foundations of Flow—secure and compliant automation, part 2

In part 1 of this series, we introduced you to Microsoft Flow, a powerful automation service already used by many organizations across the world. Flow is designed to empower citizen developers while featuring capabilities sought by professional developers. Flow is also a foundational element of the Microsoft Power Platform announced earlier this year.

More organizations are seeking automation solutions, and there will be many options. As security professionals, you’ll have to recommend the service that offers all the benefits of automation while keeping the organization secure and compliant. Flow is natively integrated with best-in-class authentication services; offers powerful data loss prevention; enhances the IT experience with broad visibility, control, and automation of IT functions; and is built on rigorous privacy and compliance standards. We’re confident that Flow will be the right choice for your organization, so let’s get started on showing you why.

Prioritized security for your users and data

Flow is seamlessly integrated with Azure Active Directory (Azure AD), one of the world’s most sophisticated, comprehensive, and secure identity and access management services. Azure AD helps secure the citizen developer by protecting against identity compromise, gives IT admins visibility and control, and offers additional security capabilities for the pro developer. Azure AD supports the least-privilege strategy we recommend for Flow users, and it supports federation, so organizations that manage identities elsewhere can still authenticate securely. Because authentication to Flow goes through Azure AD, admins using its premium features can create conditional access policies that restrict user access to only the apps and data relevant to their role. Flow’s integration with Azure AD also enhances security for more experienced developers, who can register applications with the service and use multiple authentication protocols, including the OAuth2 authorization framework, to enable their code to access platform APIs (Figure 1). This access protection can also be extended to external users.

Screenshot of an authentication type being selected for a connector in Microsoft Flow.

Figure 1. Choosing authentication framework for custom Flow connector.

To experience the full benefits of automation and unlock the potential of an organization’s data, Flow offers 270+ connectors to services, including third-party services. Some connectors are even built for social media sites, such as Twitter (Figure 2). With so many integrations, there’s always the threat of data leakage or compromise. Imagine the scenario where a user mistakenly tweets sensitive data. To prevent these types of scenarios, Flow is supported by the Microsoft Data Loss Prevention (DLP) service.

Screenshot of the Microsoft Flow dashboard. A search has been conducted for "twitter."

Figure 2. Pre-built Flow templates offering automation between Twitter and several other applications.

Microsoft DLP protects data from being exposed, and DLP policies can be easily created by administrators. DLP policies can be customized at the user, environment, or tenant level to ensure security is maintained without impacting productivity. These policies enforce rules about which connectors can be used together by classifying connectors as either “Business Data Only” or “No Business Data Allowed” (Figure 3). A connector can only be used with other connectors in its group. For example, a connector in the Business Data Only group can only be used with other connectors from that group. The default setting for all connectors is No Business Data Allowed.

Importantly, all data used by Flow is encrypted in transit using HTTPS. As a security leader, you can feel reassured that Flow is designed to keep your data secured both at rest and in transit with strict enforcement. To learn more about strategies for creating DLP policies for Flow connectors, check out our white paper.

Screenshot of data groups in the Microsoft Flow admin center.

Figure 3. Flow Admin center, where you can create DLP policies to protect your sensitive data while benefiting from the powerful automation capabilities offered with Flow.
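The grouping rule is straightforward to illustrate: a flow is compliant only if every connector it uses falls in the same DLP group. A minimal sketch, with illustrative connector names and a hypothetical policy table (this is not Flow’s actual policy API):

```python
# Illustrative DLP policy table -- connector names are examples, and this
# is a sketch of the grouping rule, not Flow's actual policy API.
DLP_POLICY = {
    "SharePoint": "Business Data Only",
    "Dynamics 365": "Business Data Only",
    "Twitter": "No Business Data Allowed",
    "RSS": "No Business Data Allowed",
}

def flow_allowed(connectors, policy, default="No Business Data Allowed"):
    """A flow is compliant only if every connector it uses falls in the
    same DLP group; unclassified connectors get the default group."""
    groups = {policy.get(c, default) for c in connectors}
    return len(groups) == 1

print(flow_allowed(["SharePoint", "Dynamics 365"], DLP_POLICY))  # True
print(flow_allowed(["SharePoint", "Twitter"], DLP_POLICY))       # False
```

The second check fails because the flow crosses the Business Data Only and No Business Data Allowed groups, which is exactly the accidental-tweet scenario the policy is meant to prevent.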

Enhancing management of the IT environment

Flow includes the Flow management connector, which enables admins to automate several IT tasks. The management connector offers 19 possible actions that can be automated—from creating and deleting Flows to more complex actions, such as modifying the owner of a Flow. The Flow management connector is versatile and can be combined with other connectors to automate several admin tasks, enhancing the efficiency of IT teams. For example, security admins can create a Flow combining the management connector with Azure AD, Microsoft Cloud App Security, Outlook, and Teams to quickly send automatic notifications via email or Teams anytime Cloud App Security generates an alert on suspicious activity (Figure 4). Other use cases could include a notification when a new app is created, automatically updating user permissions based on role changes, or tracking when custom connectors are created in your environment.

Screenshot of the Flow template using the management connector, Azure AD, Cloud App Security, Outlook, and Teams.

Figure 4. Flow template using the management connector, Azure AD, Cloud App Security, Outlook, and Teams.

Visibility of activity logs

Many of Flow’s current users are also Office 365 users. As such, Flow event logs are available in the Office 365 Security & Compliance Center. By surfacing activity logs in the Security & Compliance Center, admins gain visibility into which users are creating Flows, whether Flows are being shared, and which connectors are being used (Figure 5). The activity data is retained for 90 days and can be easily exported in CSV format for further analysis. The event logs surface in the Security & Compliance Center within 90 minutes of the event taking place. Admins also gain insight into which users are using paid versus trial licenses in the Security & Compliance Center.

Screenshot of Microsoft Flow activities accessed through the Office 365 Security & Compliance Center.

Figure 5. Microsoft Flow activities accessed through the Office 365 Security & Compliance Center.
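Once exported to CSV, the activity log can be analyzed with ordinary tooling. A small sketch, assuming a hypothetical export shape (the real column names in the Security & Compliance Center export may differ):

```python
import csv
import io

# Hypothetical export shape -- the actual column names in the Security &
# Compliance Center CSV export may differ.
EXPORT = """\
CreationDate,UserId,Operation,Connector
2019-09-01,alice@contoso.com,CreateFlow,SharePoint
2019-09-02,alice@contoso.com,EditFlow,Twitter
2019-09-02,bob@contoso.com,CreateFlow,Outlook
"""

def operations_per_user(csv_text):
    """Count Flow operations per (user, operation) pair from an export."""
    counts = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        key = (row["UserId"], row["Operation"])
        counts[key] = counts.get(key, 0) + 1
    return counts

print(operations_per_user(EXPORT)[("alice@contoso.com", "CreateFlow")])  # 1
```

The same per-user, per-operation rollup answers the questions called out above: who is creating Flows and which connectors they touch.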

Strict on data privacy and regulatory requirements

Flow adheres to Microsoft’s strict standards of privacy and protection of customer data. These policies prohibit customer data from being mined for marketing or advertising. Microsoft personnel and subcontractors are also restricted from accessing customer data and we carefully define requirements for responding to government requests for customer data. Microsoft also complies with international data protection laws regarding transfers of customer data across borders.

Microsoft Flow is also certified for many global, government, industrial, and regional compliance regulations. You can see the full list of Microsoft certifications, while Table 1 summarizes the certifications specifically covered by Flow.

Global: CSA-STAR-Attestation, CSA-Star-Certification, ISO 27018, ISO 9001
Government: UK G-Cloud
Industry: HIPAA/HITECH, HITRUST
Regional: EU-Model-Clauses

Table 1. Flow’s existing certifications.

Let Flow enhance your digital transformation

Let your organization start benefiting from one of the most powerful and secure automation services available on the market. Watch the video and follow the instructions to get started with Flow. Be sure to join the growing Flow community and participate in discussions, provide insights, and even influence product roadmap. Also follow the Flow blog to get news on the latest Flow updates and read our white paper on best practices for deploying Flow in your organization. Be sure to check out part 1, where we provide a quick intro into Flow and dive into its best-in-class, secure infrastructure.

Additional resources

The post Foundations of Flow—secure and compliant automation, part 2 appeared first on Microsoft Security.

Automated incident response in Office 365 ATP now generally available

Security teams responsible for investigating and responding to incidents often deal with a massive number of signals from widely disparate sources. As a result, rapid and efficient incident response continues to be the biggest challenge facing security teams today. The sheer volume of these signals, combined with an ever-growing digital estate of organizations, means that a lot of critical alerts miss getting the timely attention they deserve. Security teams need help to scale better, be more efficient, focus on the right issues, and deal with incidents in a timely manner.

This is why I’m excited to announce the general availability of Automated Incident Response in Office 365 Advanced Threat Protection (ATP). Applying these powerful automation capabilities to investigation and response workflows can dramatically improve the effectiveness and efficiency of your organization’s security teams.

A day in the life of a security analyst

To give you an idea of the complexity that security teams deal with in the absence of automation, consider the following typical workflow that these teams go through when investigating alerts:

Infographic showing these steps: Alert, Analyze, Investigate, Assess impact, Contain, and Respond.

And as they go through this flow for every single alert—potentially hundreds in a week—it can quickly become overwhelming. In addition, the analysis and investigation often require correlating signals across multiple different systems. This can make effective and timely response very difficult and costly. There are just too many alerts to investigate and signals to correlate for today’s lean security teams.

To address these challenges, earlier this year we announced the preview of powerful automation capabilities to help improve the efficiency of security teams significantly. The security playbooks we introduced address some of the most common threats that security teams investigate in their day-to-day jobs and are modeled on their typical workflows.

This story from Ithaca College reflects some of the feedback we received from customers of the preview of these capabilities, including:

“The incident detection and response capabilities we get with Office 365 ATP give us far more coverage than we’ve had before. This is a really big deal for us.”
—Jason Youngers, Director and Information Security Officer, Ithaca College

Two categories of automation now generally available

Today, we’re announcing the general availability of two categories of automation—automatic and manually triggered investigations:

  1. Automatic investigations that are triggered when alerts are raised—Alerts and related playbooks for the following scenarios are now available:
    • User-reported phishing emails—When a user reports what they believe to be a phishing email, an alert is raised triggering an automatic investigation.
    • User clicks a malicious link with changed verdict—An alert is raised when a user clicks a URL, which is wrapped by Office 365 ATP Safe Links, and is determined to be malicious through detonation (change in verdict). Or if the user clicks through the Office 365 ATP Safe Links warning pages an alert is also raised. In both cases, the automated investigation kicks in as soon as the alert is raised.
    • Malware detected post-delivery (Malware Zero-Hour Auto Purge (ZAP))—When Office 365 ATP detects and/or ZAPs an email with malware, an alert triggers an automatic investigation.
    • Phish detected post-delivery (Phish ZAP)—When Office 365 ATP detects and/or ZAPs a phishing email previously delivered to a user’s mailbox, an alert triggers an automatic investigation.
  2. Manually triggered investigations that follow an automated playbook—Security teams can trigger automated investigations from within the Threat Explorer at any time for any email and related content (attachment or URLs).
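Conceptually, the automatic scenarios above map alert types to the playbooks they trigger, with everything else falling back to manual triage. A trivial dispatch sketch (the keys and playbook names here are paraphrased for illustration, not product identifiers):

```python
# Paraphrased mapping from the alert scenarios above to the playbooks
# they trigger; the keys are illustrative, not product identifiers.
PLAYBOOKS = {
    "user_reported_phish": "User Reported Message",
    "malicious_url_click": "URL verdict change",
    "malware_zap": "Malware ZAP",
    "phish_zap": "Phish ZAP",
}

def route_alert(alert_type):
    """Route an alert to its automated playbook, or to manual triage."""
    playbook = PLAYBOOKS.get(alert_type)
    return f"auto-investigate: {playbook}" if playbook else "manual triage"

print(route_alert("phish_zap"))      # auto-investigate: Phish ZAP
print(route_alert("unknown_alert"))  # manual triage
```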

Rich security playbooks

In each of the above cases, the automation follows rich security playbooks. These playbooks are essentially a series of carefully logged steps to comprehensively investigate an alert and offer a set of recommended actions for containment and mitigation. They correlate similar emails sent or received within the organization and any suspicious activities for relevant users. Flagged activities for users might include mail forwarding, mail delegation, Office 365 Data Loss Prevention (DLP) violations, or suspicious email sending patterns.

In addition, aligned with our Microsoft Threat Protection promise, these playbooks also integrate with signals and detections from Microsoft Cloud App Security and Microsoft Defender ATP. For instance, anomalies detected by Microsoft Cloud App Security are ingested as part of these playbooks. And the playbooks also trigger device investigations with Microsoft Defender ATP (for malware playbooks) where appropriate.

Let’s look at each of these automation scenarios in detail:

User reports a phishing email—This represents one of the most common flows investigated today. The alert is raised when a user reports a phishing email using the Report Message add-in in Outlook or Outlook on the web, and it triggers an automatic investigation using the User Reported Message playbook.

Screenshot of a phishing email being investigated.

User clicks a malicious link—A very common attacker technique is to weaponize a link after the email is delivered. With Office 365 ATP Safe Links protection, we can detect such attacks when links are detonated at time-of-click. A user who clicks such links and/or overrides the Safe Links warning pages is at risk of compromise. The alert raised when a malicious URL is clicked triggers an automatic investigation using the URL verdict change playbook to correlate any similar emails and any suspicious activities for the relevant users across Office 365.

Image of a clicked URL being assigned as malicious.

Email messages containing malware removed after delivery—One of the critical pillars of protection in Office 365 Exchange Online Protection (EOP) and Office 365 ATP is our capability to ZAP malicious emails. The corresponding alert triggers an investigation into similar emails and related user actions in Office 365 for the period when the emails were present in a user’s inbox. In addition, the playbook triggers an investigation into the relevant users’ devices by leveraging the native integration with Microsoft Defender ATP.

Screenshot showing malware being zapped.

Email messages containing phish removed after delivery—With the rise in phishing attack vectors, the ability of Office 365 EOP and Office 365 ATP to ZAP malicious emails detected after delivery is a critical protection feature. The alert raised triggers an investigation into similar emails and related user actions in Office 365 for the period when the emails were present in a user’s inbox, and it also evaluates whether the user clicked any of the links.

Screenshot of a phish URL being zapped.

Automated investigation triggered from within the Threat Explorer—As part of existing hunting or security operations workflows, security teams can also trigger automated investigations on emails (and related URLs and attachments) from within the Threat Explorer. This gives Security Operations (SecOps) a powerful mechanism to gain insight into any threats and the related mitigation or containment recommendations from Office 365.

Screenshot of an action being taken in the Office 365 Security and Compliance dash. An email is being investigated.

Try out these capabilities

Based on feedback from our public preview of these automation capabilities, we extended the Office 365 ATP events and alerts available in the Office 365 Management API to include links to these automated investigations and related artifacts. This helps security teams integrate these automation capabilities into existing security workflow solutions, such as SIEMs.
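In a SIEM integration, those extended events mean each ingested alert can carry a link to its automated investigation. A sketch of the post-processing step, with illustrative field names and sample data (consult the Office 365 Management Activity API schema for the actual property names):

```python
# Sketch of a SIEM post-processing step over ingested alert records.
# The field names below are illustrative; consult the Office 365
# Management Activity API schema for the actual property names.
ALERTS = [
    {"AlertType": "Phish ZAP",
     "InvestigationUrl": "https://example.com/investigations/42"},
    {"AlertType": "Informational", "InvestigationUrl": None},
]

def investigation_links(alerts):
    """Collect links to the automated investigations attached to alerts."""
    return [a["InvestigationUrl"] for a in alerts if a.get("InvestigationUrl")]

print(investigation_links(ALERTS))  # ['https://example.com/investigations/42']
```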

These capabilities are available as part of the following offerings. We hope you’ll give it a try.

Bringing SecOps efficiency by connecting the dots between disparate threat signals is a key promise of Microsoft Threat Protection. The integration across Microsoft Threat Protection helps bring broad and valuable insights that are critical to the incident response process. Get started with a Microsoft Threat Protection trial if you want to experience the comprehensive and integrated protection that Microsoft Threat Protection provides.

The post Automated incident response in Office 365 ATP now generally available appeared first on Microsoft Security.

Foundations of Microsoft Flow—secure and compliant automation, part 1

Automation services are steadily becoming significant drivers of modern IT, helping improve efficiency and cost effectiveness for organizations. A recent McKinsey survey discovered that “the majority of all respondents (57 percent) say their organizations are at least piloting the automation of processes in one or more business units or functions. Another 38 percent say their organizations have not begun to automate business processes, but nearly half of them say their organizations plan to do so within the next year.”

Automation is no longer a theme of the future, but a necessity of the present, playing a key role in a growing number of IT and user scenarios. As security professionals, you’ll need to recommend an automation service that enables your organization to reap its benefits without sacrificing on strict security and compliance standards.

In our two-part series, we share how Microsoft delivers on the promise of empowering a secure, compliant, and automated organization. In part 1, we provide a quick intro into Microsoft Flow and provide an overview into its best-in-class, secure infrastructure. In part 2, we go deeper into how Flow secures your users and data, as well as enhances the IT experience. We also cover Flow’s privacy and certifications to give you a glimpse into the rigorous compliance protocols the service supports. Let’s get started by introducing you to Flow.

To support the need for secure and compliant automation, Microsoft launched Flow. With Flow, organizations will experience:

  • Seamlessly integrated automation at scale.
  • Accelerated productivity.
  • Secure and compliant automation.

Secure and compliant automation is perhaps the most interesting value of Flow for this audience, but let’s discuss the first two benefits before diving into the third.

Integrated automation at scale

Flow is a Software as a Service (SaaS) automation service used by customers ranging from large enterprises, such as Virgin Atlantic, to smaller organizations, such as G&J Pepsi. Importantly, Flow serves as a foundational pillar for the Microsoft Power Platform, a seamlessly integrated, low-code development platform enabling easier and quicker application development. With Power Platform, organizations analyze data with Power BI, act on data through Microsoft PowerApps, and automate processes using Flow (Figure 1).

Diagram showing app automation driving business processes with Flow. The diagram shows Flow, PowerApps, and Power BI circling CDS, AI Builder, and Data Connectors.

Figure 1. Power Platform offers a seamless loop to deliver your business goals.

Low-code platforms can help scale IT capabilities by creating a broader range of application developers—from the citizen developer to the pro developer (Figure 2). With growing burdens on IT, scaling IT through citizen developers who design their own business applications is a tremendous advantage. Flow is also differentiated from other automation services by its native integration with Microsoft 365, Dynamics 365, and Azure.

Image showing Citizen Developers, IT/Admins, and Pro Developers.

Figure 2. Low-code development platforms empower everyone to become a developer, from the citizen developer to the pro developer.

Accelerated productivity

Flow accelerates your organization’s productivity. The productivity benefits from Flow were recently quantified in a Total Economic Impact (TEI) study conducted by Forrester Research and commissioned by Microsoft (The Total Economic Impact™ Of PowerApps And Microsoft Flow, June 2018). Forrester determined that over a three-year period Flow helped organizations reduce application development and application management costs while saving thousands of employee hours (Figure 3).

Image showing 70% for Application development costs, 38% for Application management costs, and +122K for Worker Hours Saved.

Figure 3. Forrester TEI study results on the reduced application development and management costs and total worker hours saved.

Built with security and compliance

Automation will be the backbone for efficiency across much of your IT environment, so choosing the right service can have enormous impact on delivering the best business outcomes. As a security professional, you must ultimately select the service which best balances the benefits from automation with the rigorous security and compliance requirements of your organization. Let’s now dive into how Flow is built on a foundation of security and compliance, so that selecting Flow as your automation service is an easy decision.

A secure infrastructure

Comprehensive security accounts for a wide variety of attack vectors, and since Flow is a SaaS offering, infrastructure security is an important component and where we’ll start. Flow is a global service deployed in datacenters across the world (Figure 4). Security begins with the physical datacenter, which includes perimeter fencing, video cameras, security personnel, secure entrances, and real-time communications networks—continuing from every area of the facility to each server unit. To learn more about how our datacenters are secured, take a virtual tour.

The physical security is complemented by threat management of our cloud ecosystem. Microsoft security teams leverage sophisticated data analytics and machine learning, and continuously test defenses against distributed denial-of-service (DDoS) attacks and other intrusions.

Flow is also the only automation service natively built on Azure, with an architecture designed to secure and protect data. Each datacenter deployment of Flow consists of two clusters:

  • Web Front End (WFE) cluster—A user connects to the WFE before accessing any information in Flow. Servers in the WFE cluster authenticate users with Azure Active Directory (Azure AD), which stores user identities and authorizes access to data. Azure Traffic Manager finds the nearest Flow deployment, and that WFE cluster manages sign-in and authentication.
  • Backend cluster—All subsequent activity and access to data is handled through the backend cluster. It manages dashboards, visualizations, datasets, reports, data storage, data connections, and data refresh activities. The backend cluster hosts many roles, including Azure API Management, Gateway, Presentation, Data, Background Job Processing, and Data Movement.

Users directly interact only with the Gateway role and Azure API Management, which are accessible through the internet. These roles perform authentication, authorization, distributed denial-of-service (DDoS) protection, bandwidth throttling, load balancing, routing, and other security, performance, and availability functions. There is a distinction between roles users can access and roles only accessible by the system.
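
The division of labor described above can be made concrete with a small sketch. This is an illustrative model, not Microsoft's implementation: the role names mirror the list above, the token format is hypothetical, and the Azure AD check is reduced to a stub.

```python
# Conceptual sketch (not Microsoft's implementation) of the two-cluster
# request flow: the WFE cluster handles authentication, and only the
# Gateway and API Management roles of the backend face the internet.

PUBLIC_ROLES = {"Gateway", "APIManagement"}                 # user-reachable
SYSTEM_ROLES = {"Presentation", "Data", "BackgroundJobs", "DataMovement"}

def authenticate(user_token):
    """Stand-in for the Azure AD check performed by the WFE cluster."""
    return user_token.startswith("aad:")                    # hypothetical token format

def route_request(user_token, role):
    """Admit a request only if the user authenticates at the WFE and the
    target role is one that users may reach directly."""
    if not authenticate(user_token):
        return "denied: authentication failed at WFE"
    if role not in PUBLIC_ROLES:
        return f"denied: {role} is a system-only role"
    return f"routed to backend role {role}"
```

The point of the split is that a request never reaches a system-only role directly; it must pass authentication at the WFE and enter the backend through a public-facing role.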

Image showing Microsoft’s global datacenter locations.

Figure 4. Microsoft’s global datacenter locations.

Stay tuned for part 2 of our series, where we’ll go deeper into how Flow further secures authentication of your users and data and enhances the IT experience, all while aligning to several regulatory frameworks.

Let Flow enhance your digital transformation

Let your organization start benefiting from one of the most powerful and secure automation services available on the market. Watch the video and follow the instructions to get started with Flow. Be sure to join the growing Flow community and participate in discussions, provide insights, and even influence product roadmap. Also, follow the Flow blog to get news on the latest Flow updates and read our white paper on best practices for deploying Flow in your organization. Be sure to check out part 2 where we dive deeper into how Flow offers the best and broadest security and compliance foundation for any automation service available in the market.

Additional resources

The post Foundations of Microsoft Flow—secure and compliant automation, part 1 appeared first on Microsoft Security.

Beyond the buzzwords

When I was a kid, Gilligan’s Island reruns aired endlessly on TV. The character of the Professor was supposed to sound smart, so he’d use complex words to describe simple concepts. Instead of saying, “I’m nearsighted” he’d say, “My eyes are ametropic and completely refractable.” Sure, it was funny, but it didn’t help people understand his meaning.

Security vendors and professionals suffer from a pinch of “Professor-ism” and often use complex words and terminology to describe simple concepts. Here are few guidelines to consider when naming or describing your products, services, and features:

Assess whether a new term or acronym is needed

Before trying to create a new term or acronym, assess whether an existing one will work. Consider the mobile device space, where tools used to manage mobile devices were originally known as MDM, for mobile device management. Pretty straightforward. But then the acronym flood started with MAM (mobile application management), MIM (mobile information management), and EMM (enterprise mobility management). It’s true, there are some technical differences between the four, but a quick Bing search shows a raft of articles explaining the differences because it’s not clear to the average customer. And, frankly, all of them are basically subsets of MDM.

Use acronyms with enthusiasm and clarity

When creating a new term or acronym there is no point in being memorable if the meaning gets lost in the noise. Instead of succumbing to the path of least resistance by forming an acronym, put a little oomph into your naming efforts.

A recent example is SOAR (Security Orchestration, Automation, and Response). Yes, it was a whole new category, and one that is adjacent to SIEM (security information and event management), but the name adds clarity because it describes a new set of features and functions—like incident response activities and playbooks—which aren’t covered by traditional SIEMs.

Acronyms can save time, but when you get into splintered variants like the MDM example, clarity goes out the window. Since not all acronyms are created equal, go for acronym gold—and make sure there is a recognizable connection to your brand or (even better) the product itself.

This strategy can yield explosive results! Think TNT (Trinitrotoluene), or the more chill TCBY® (The Country’s Best Yogurt), or the zip in ZIP code (Zone Improvement Plan). Compare these zingers with an acronym for something like UDM (Unified Data Management). Sorry—is that the sound of you snoring? (Me, too!)

Put a little pep in your step (and your sales) by producing names that are sharply focused—like laser (Light Amplification by Stimulated Emission of Radiation)—which is an acronym that has become synonymous with what it does and has some well-placed vowels. Another winner in this category is GIF (graphics interchange format). While this acronym wasn’t recognizable out the door, it became synonymous with the product it created by adding a bit of pizzazz to the mix.

Use names that are clear and practical—but catch and hold the imagination

Resist the temptation to take a cool buzzword and tack it onto your marketing efforts to take advantage of the attention. I once saw a basic power strip advertised as “internet ready.” Come on now! Find words or phrases that catch and hold the imagination—while saying something about your product’s functionality.

Sometimes it’s as simple as helping customers understand what the product does: antimalware? Customers are going to get that this probably protects against malware. If the solution really is a new approach, make the name as clear as possible.

In addition, rather than inventing new terms, consider being very practical. Think of the use-cases and ask these questions: What does the solution do for the customer or business? What does the solution deliver? Or what kind of brand experience does your product provide?

Years ago, I ran afoul of a company that advertised itself as “S-OX in a Box” (that’s Sarbanes-Oxley, not a sports or footwear reference), because I wrote a piece on the complexity of the tech side of S-OX compliance. I explained why it wasn’t as simple as buying a “S-OX in a BOX” solution. I wasn’t trying to call out that specific company, but rather to show why it can be better to be clear and explicit about what a solution does. S-OX is too complex for a single solution to do it all. But a tool that can help automate S-OX compliance reporting? That, for many companies, is a big win.

Also, think about the non-cyber world—where companies describe the function to discover an evocative name. Examples of everyday products that accomplish this include bubble wrap, Chapstick®, Crock-Pot®, and Onesie®. Not all first tries will be winners. For example, the breathalyzer was originally known as the Drunk-O-Meter. Just experiment with it. Have some fun. Make it meaningful to your client or customer.

Never overpromise

Promising customers that they will never have a breach again is a pretty lofty claim, and most likely an impossible one. Words like absolute, perfect, and unhackable may sound good in copy, but can you guarantee a product or solution really delivers absolute security?

Savvy customers know that security is about risk management and tradeoffs and that no solution is completely immune to all attacks. Rather than overpromise, consider helping the customer understand what the solution does. Does the product protect against a breach by monitoring the database? Good, then say that.

Get creative and mix it up

Get creative by mixing initials and non-initial letters, as in “radar” (RAdio Detection And Ranging). Or try an initialism, which requires you to pronounce your abbreviation as a string of separate letters. Examples include OEM (original equipment manufacturer) and the BBC (British Broadcasting Corporation). You can also incorporate a shortcut into the name by combining numbers and letters, like 3M (Minnesota Mining and Manufacturing Company).

If you’re really stuck, try a backronym

A backronym is created when you turn a word into an acronym by assigning each letter a word of its own—after a term is already in use. For example, the term “rap” (as in rap music) is a backronym for rhythm and poetry and SOAR is a backronym for Security Orchestration, Automation, and Response.

If you want something closer to the technology realm, check out what NASA (a well-known acronym for National Aeronautics and Space Administration) did. They named a space station treadmill in honor of comedian Stephen Colbert by coming up with the words to spell out his name: Combined Operational Load-Bearing External Resistance Treadmill (COLBERT).

Find your sweet spot

When it comes to using common words to describe uncommon things, combine the freshness and friendliness of Mary Ann with the profit mindset of Thurston Howell III. Aim for names that intrigue people with their relatability and nail the sale, because clients and customers get a clear idea of the product’s business value.

Reach out to me on LinkedIn or Twitter and let me know what you’d like to see us cover as we talk about new security products and capabilities.


Improve security and simplify operations with Windows Defender Antivirus + Morphisec

My team at Morphisec (a Microsoft Intelligent Security Association (MISA) partner) often talks with security professionals who are well-informed about the latest cyberthreats and have a long-term security strategy. The problem many of them face is how to create a stronger endpoint stack with limited resources. Towne Properties is a great example. We recently helped them simplify operations and increase endpoint security with Windows Defender Antivirus and Morphisec for advanced threat prevention.

The challenge: increase endpoint security and simplify operations

Towne Properties is a leading commercial and residential property management company in the Midwest. Our customer, Bill Salyers, the IT Director at Towne Properties, recently migrated the company to Windows 10 to adopt its embedded security features, including Windows Defender Antivirus. Yet he remained concerned about advanced zero-day attacks that bypass antivirus solutions and cause damage to the firm and its clients.

When we met Bill, Towne Properties used a commercial third-party antivirus. The product protected against known attacks, but it didn’t prevent zero-day, evasive memory attacks, which are increasing at a rapid rate. Bill needed to address this gap in his endpoint protection but couldn’t deploy another security detection tool given the lean composition of his security team. They just didn’t have the resources and bandwidth to manage another tool. Bill required better endpoint protection and simplified operations.

“At Towne, our goal is to make our endpoints as secure as possible from advanced threats, while simplifying our environment and maintaining fixed budgets.”
—Bill Salyers, IT Director, Towne Properties

Windows Defender Antivirus provides built-in endpoint protection

When we learned that Towne Properties needed a lightweight solution that would improve endpoint protection, we reintroduced Bill to Windows Defender Antivirus. Built into Windows 10, Windows Defender Antivirus protects endpoints against known software threats like viruses, malware, and spyware across email, apps, the cloud, and the web.

Bill performed a thorough evaluation of Windows Defender Antivirus and was thrilled to find that it compared favorably in terms of efficacy and capabilities to their incumbent third-party antivirus. With no installation required or new interface to learn, his team was able to quickly eliminate a third-party tool and reduce their total cost of ownership (TCO).

“Windows Defender Antivirus met all our requirements at no incremental cost. We replaced our third-party antivirus without sacrifice.”
—Bill Salyers, IT Director, Towne Properties

Screenshot of the Morphisec Moving Target Defense dashboard.

Morphisec adds a new layer of prevention

The money Bill saved dropping the third-party antivirus gave him more flexibility to address zero-days and memory-based attacks. He invested in Morphisec, which is based on their highly innovative Moving Target Defense technology. Morphisec Moving Target Defense stops unknown attacks by morphing critical assets to make them inaccessible to the adversary and killing the attack pre-execution. Morphisec is integrated with Windows Defender Antivirus and extends Towne Properties’ endpoint protection to include zero-days, advanced memory-based threats, malicious documents, and browser-based attacks. It’s lightweight and easy to manage, which is important to Bill. The integration with Windows Defender Antivirus allowed Towne to achieve both better protection and simpler operational management with visibility through a single pane of glass.

Infographic which reads: Endpoint Application; Keyless, one-way randomization each time an application loads; application memory (both original and morphed).

Figure 1: As an application loads to the memory space, Morphisec morphs the process structures, making the memory constantly unpredictable to attackers (Source: Morphisec website).

Infographic which reads: Endpoint Application; Malicious code injection; legitimate code runs seamlessly with the morphed application structure; call to original resources exposes and traps the attack; Skeleton/Trap; and Application memory (morphed).

Figure 2: Legitimate application code memory is dynamically updated to use the morphed resources; applications load and run as usual while a skeleton of the original structure is left as a trap. Attacks target the original structure, fail to execute, and are trapped.
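
The morphing idea in the figures above can be sketched in a few lines. This is a toy model, not Morphisec's implementation: real Moving Target Defense operates on process memory structures at load time, while here "addresses" are just integers in a dictionary.

```python
# Toy illustration of moving-target defense: resource locations are
# randomized at load time, legitimate code is updated to use the morphed
# locations, and the original locations remain only as traps.
import random

def morph(resources):
    """Relocate each named resource to a fresh random address; the old,
    well-known addresses become traps."""
    traps = set(resources.values())
    morphed = {}
    for name in resources:
        loc = random.getrandbits(32)
        while loc in traps:                    # avoid colliding with a trap
            loc = random.getrandbits(32)
        morphed[name] = loc
    return morphed, traps

def call(addr, morphed, traps):
    """Simulate a call into application memory after morphing."""
    if addr in traps:
        return "trapped: attack exposed"       # caller used an original address
    if addr in morphed.values():
        return "ok"                            # legitimate, relocated call
    return "fault"
```

Legitimate code receives the morphed addresses and runs as usual; injected code that targets the original, predictable addresses lands in the trap and is exposed pre-execution.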

“We chose Morphisec because Moving Target Defense’s highly innovative approach prevents the most dangerous unknown memory-based attacks.”
—Bill Salyers, IT Director, Towne Properties

The Morphisec and Microsoft partnership supports Towne Properties’ cybersecurity roadmap

One reason Bill and his management team were so enthusiastic about Morphisec and Windows Defender Antivirus is that together they support their overall security plan. Towne Properties is a Microsoft shop aligned with the Microsoft cybersecurity strategy. Morphisec also integrates with Microsoft Defender Advanced Threat Protection (ATP), which allows Towne Properties to seamlessly chart their Microsoft and Morphisec journey.

“It was also important to learn how Microsoft has partnered closely with Morphisec. Morphisec integrates with Microsoft Defender ATP, giving us high confidence to continue down the Microsoft and Morphisec journey.”
—Justin Hall, Security Specialist, Towne Properties

Windows Defender Antivirus and Morphisec Moving Target Defense are better together

Windows Defender Antivirus and Morphisec Moving Target Defense offer the following features:

Windows Defender Antivirus:

  • Delivers leading machine learning and behavior-based antimalware and threat protection.
  • Is built into Windows 10 at no additional cost.
  • Requires no installation—just turn on features in Windows 10.

Morphisec Moving Target Defense:

  • Delivers an entirely new layer of deterministic prevention against the most advanced and most damaging threats to the enterprise, including unknown attacks, zero-days, ransomware, evasive fileless attacks, and web-borne attacks.
  • Is simple to manage and extremely lightweight, with zero impact on operations.
  • Virtually patches vulnerabilities.
  • Integrates with Microsoft Defender ATP to visualize attacks prevented by Morphisec and incorporate threats identified by Morphisec in the Microsoft Defender ATP dashboard.

Morphisec + Microsoft:

  • Provides superior endpoint protection at an affordable cost.
  • Is simple to deploy, manage, and maintain.

“Morphisec with Windows Defender Antivirus offers a truly set it and forget it solution. Morphisec’s lightweight design coupled with Windows Defender Antivirus provides strong endpoint security, the best value, and a simpler operational environment.”
—Bill Salyers, IT Director, Towne Properties

Learn more


One simple action you can take to prevent 99.9 percent of attacks on your accounts

There are over 300 million fraudulent sign-in attempts to our cloud services every day. Cyberattacks aren’t slowing down, and it’s worth noting that many attacks have been successful without the use of advanced technology. All it takes is one compromised credential or one legacy application to cause a data breach. This underscores how critical it is to ensure password security and strong authentication. Read on to learn about common vulnerabilities and the single action you can take to protect your accounts from attacks.

Animated image showing the number of malware attacks and data breaches organizations face every day. 4,000 daily ransomware attacks. 300,000,000 fraudulent sign-in attempts. 167,000,000 daily malware attacks. 81% of breaches are caused by credential theft. 73% of passwords are duplicates. 50% of employees use apps that aren't approved by the enterprise. 99.9% of attacks can be blocked with multi-factor authentication.

Common vulnerabilities

In a recent paper from the SANS Software Security Institute, the most common vulnerabilities include:

  • Business email compromise, where an attacker gains access to a corporate email account, such as through phishing or spoofing, and uses it to exploit the system and steal money. Accounts that are protected with only a password are easy targets.
  • Legacy protocols can create a major vulnerability because applications that use basic protocols, such as SMTP, were not designed to support Multi-Factor Authentication (MFA). So even if you require MFA for most use cases, attackers will search for opportunities to use outdated browsers or email applications to force the use of less secure protocols.
  • Password reuse, where password spray and credential stuffing attacks come into play. Common passwords and credentials compromised by attackers in public breaches are used against corporate accounts to try to gain access. Considering that up to 73 percent of passwords are duplicates, this has been a successful strategy for many attackers and it’s easy to do.
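
The password-reuse point is easy to demonstrate. The sketch below is a toy model of a spray attack, in which an attacker tries a short list of common passwords against many accounts rather than many passwords against one account; the account names and password list are hypothetical, and a real attacker would draw candidates from public breach dumps.

```python
# Toy model of a password-spray attack: duplicated, breached passwords
# make accounts discoverable with just a handful of guesses per account.
COMMON_PASSWORDS = {"123456", "password", "qwerty", "Summer2019!"}

def password_spray(accounts):
    """Return the accounts an attacker would compromise by trying only
    passwords already exposed in public breaches."""
    return [user for user, pw in accounts.items() if pw in COMMON_PASSWORDS]
```

Because only a few passwords are tried per account, the attack typically stays under lockout thresholds, which is why banning common passwords and requiring MFA matter more than lockout policies alone.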

What you can do to protect your company

You can help prevent some of these attacks by banning the use of bad passwords, blocking legacy authentication, and training employees on phishing. However, one of the best things you can do is to just turn on MFA. By providing an extra barrier and layer of security that makes it incredibly difficult for attackers to get past, MFA can block over 99.9 percent of account compromise attacks. With MFA, knowing or cracking the password won’t be enough to gain access. To learn more, read Your Pa$$word doesn’t matter.
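
To make that "extra barrier" concrete, here is a minimal implementation of one common MFA factor: the time-based one-time password (TOTP) defined in RFC 6238, which is what most authenticator apps generate. Even if an attacker knows or cracks the password, each sign-in also requires a code derived from a shared secret and the current time.

```python
# Minimal TOTP (RFC 6238): HMAC the current 30-second time step with a
# shared secret, then dynamically truncate the digest to a short code.
import hmac
import struct
import time

def totp(secret, for_time=None, step=30, digits=6):
    """Derive a short-lived code from a shared secret and the clock."""
    counter = int((for_time if for_time is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(secret, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code changes every 30 seconds and the secret never travels with the password, a stolen or sprayed password alone is no longer enough to gain access.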

MFA is easier than you think

According to the SANS Software Security Institute, there are two primary obstacles to adopting MFA today:

  1. Misconception that MFA requires external hardware devices.
  2. Concern about potential user disruption or concern over what may break.

Matt Bromiley, SANS Digital Forensics and Incident Response instructor, says, “It doesn’t have to be an all-or-nothing approach. There are different approaches your organization could use to limit the disruption while moving to a more advanced state of authentication.” These include a role-based or by application approach—starting with a small group and expanding from there. Bret Arsenault shares his advice on transitioning to a passwordless model in Preparing your enterprise to eliminate passwords.

Take a leap and go passwordless

Industry protocols such as WebAuthn and CTAP2, ratified in 2018, have made it possible to remove passwords from the equation altogether. These standards, collectively known as the FIDO2 standard, ensure that user credentials are protected end-to-end and strengthen the entire security chain. The use of biometrics has become more mainstream, popularized on mobile devices and laptops, so it’s a familiar technology for many users and one that is often preferred to passwords anyway. Passwordless authentication technologies are not only more convenient for people but are extremely difficult and costly for hackers to compromise. Learn more about Microsoft passwordless authentication solutions in a variety of form factors to meet user needs.
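
At its core, the FIDO2 flow is a challenge-response protocol: the server issues a fresh challenge, the authenticator signs it with a private key that never leaves the device, and the server verifies the response against the registered public key. The sketch below keeps that shape but, to stay within the standard library, substitutes a shared-secret HMAC for the public-key signature that WebAuthn actually uses, so treat it as an analogy rather than a WebAuthn implementation.

```python
# Challenge-response analogy for FIDO2 sign-in. Real WebAuthn uses
# public-key signatures; an HMAC stands in for the signature here.
import hashlib
import hmac
import os

def issue_challenge():
    """Server side: a fresh random challenge per sign-in prevents replay."""
    return os.urandom(32)

def authenticator_sign(device_key, challenge):
    """Authenticator side: prove possession of the key without revealing it."""
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

def server_verify(registered_key, challenge, response):
    """Server side: check the response against the key registered at enrollment."""
    expected = hmac.new(registered_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

Because there is no reusable secret typed by the user and nothing phishable crosses the wire, credential stuffing and password spray simply have nothing to work with.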

Convince your boss

Download the SANS white paper Bye Bye Passwords: New Ways to Authenticate to read more on guidance for companies ready to take the next step to better protect their environments from password risk. Remember, talk is easy, action gets results!
