Category Archives: Risk Management

These Four Communication Tips Could Improve Your Cybersecurity Reporting

With cybersecurity incidents on the rise, the chief information security officer (CISO) needs to be in direct communication with the corporate board. But there is often a vast separation between what the board understands about security and what the security department understands about business priorities.

As a result, there is a significant disconnect between the security leader’s priorities and the board’s agenda in many organizations. What’s driving this communication breakdown — and how can CISOs bridge this gap to improve cybersecurity reporting?

Communicating Across the Great Divide

A 2017 report from risk management firm Focal Point Data Risk found that board executives’ most pressing concerns often fall to the bottom of the CISOs’ agenda. For example, board members cited data protection as the aspect of security that provides the most value to the business. CISOs, on the other hand, pointed to security guidance.

Meanwhile, 42 percent of CISOs said they were confident in the effectiveness of their organization’s security program, while board directors largely said the opposite. Eighty percent of executives surveyed also ranked risk posture as the most important metric for security reporting; fewer than 20 percent of CISOs thought the same.

Four Cybersecurity Reporting Tips for CISOs

With this disconnect hounding security leaders, how can CISOs present crucial information and communicate security’s needs and priorities to board leadership? Below are four tips security leaders should consider when prepping for their next board presentation.

1. Learn to Speak the Language of Business

“Our advice to our CISO clients is put yourself in your board executives’ shoes and talk to them in their language,” said Grant Wernick, CEO and co-founder of Insight Engines, which develops cybersecurity products using natural language understanding. “Have a business conversation that demonstrates how you and your team are not only increasing the company’s security posture, but also enabling key business priorities.”

Of course, that means the CISO must have an understanding of the organization’s key business priorities before going into the presentation. Unfortunately, that isn’t always the case for many security leaders, according to Phil Gardner, CEO of the Institute for Applied Network Security (IANS), a security consulting and research firm.

“From an informal straw poll that I’ve been conducting, I’ve learned that 60 percent of CISOs can’t articulate their CEO’s top three to five business priorities,” Gardner wrote in a recent blog post. “When you don’t know the business leaders’ priorities, making InfoSec relevant is nearly impossible. Aligning with the CEO’s business priorities forces InfoSec to work on initiatives that drive enterprise value. This, in turn, increases your clout with the board.”

2. Know the Three R’s: Reputation, Regulation and Revenue

While you are honing your business language skills, you should brush up on your topics too. According to Harry Sverdlove, chief technology officer (CTO) at cloud security provider Edgewise Networks, CISOs can get the board’s attention by focusing on the three R’s: reputation, regulation and revenue.

“Look at regulation as an opportunity,” Sverdlove said. “As security professionals, we know regulation is not security, but it raises the awareness of it so you can discuss it with the board.”

Another useful tactic is to refer to news headlines, Sverdlove said, because that will speak to the board’s concern about the potential impact of a data breach on both the organization’s reputation and its bottom line.

“No one wants to be on the front page of The New York Times,” Sverdlove said. “They are concerned about being a headline and they want to know what risks could get them there. Speak to the board on topics they understand. They understand corporate reputation.”

3. Be the Bearer of Good News

A common perception is that security is the department of doom and gloom, but surely there is some positive news to report. CISOs will need to occasionally report on the bright side of security if they want to gain the board’s respect.

“You have to be able to communicate progress,” Sverdlove said. “It’s essential to be able to say, ‘Here is our progress on the projects we’ve been proposing,’ and communicate that plainly and in business terms. In security, we like to give code names and we like to talk jargon. But the board wants to clearly hear about business impact and the customer impact of each proposal.”

It’s equally important to demonstrate an ability and willingness to challenge security conventions. Failing to do so puts the organization at risk, because threat actors continually evolve and adopt new tactics to circumvent traditional security methods.

“Give the board assurance that you are thinking outside the box — not complacent with maintaining status quo,” Wernick advised. “From a security perspective, people are following standard ways of doing things — in a static and reactive approach — which increases risk.”

4. Consider Your Context

What tools and visuals are you using to present to the board? Wernick suggested conveying your results in ways that board directors are used to seeing.

“Many board discussions center around risks, and a familiar framework is a risk heat map,” Wernick said. “We suggest to our CISO clients that they demonstrate results in a similar visual fashion. For example, we worked with clients on a data health heat map that visually tells the story of use cases supported, data source coverage, et cetera. Even if the board doesn’t understand the nitty-gritty details, they will relate to the heat map visual framework.”

Gardner noted in his blog post that external standards, such as the National Institute of Standards and Technology (NIST) framework for board reporting, have grown outdated and called for new reporting methods.

“What savvy board members really want is a financial articulation of the risks being reduced through the company’s InfoSec expenditures,” Gardner wrote. “They want an InfoSec ROI [return on investment].”

Gardner advised security leaders to partner with the corporate chief financial officer (CFO) to assign values to the organization’s significant intellectual property (IP) assets and then seek support from peers and the board on these values.

“Building consensus on the values of these intangible assets will generate more meaningful conversations about where to best deploy scarce InfoSec resources,” Gardner wrote.

Security Culture Starts From the Top

Reporting to the board is one of the CISO’s most crucial responsibilities. After all, the board makes the final decisions regarding security budget, investments and initiatives.

Just as complex business principles are often nebulous to IT leaders, when it comes to cybersecurity, business executives only know what the CISO tells them. By contextualizing security needs in terms the board can easily understand, CISOs can help their organizations develop a strong security culture from the top down.

Listen to the podcast series: Take Back Control of Your Cybersecurity Now

The post These Four Communication Tips Could Improve Your Cybersecurity Reporting appeared first on Security Intelligence.

Australia Agrees Solomons Internet Cable After China Concern

Australia will help fund and build an undersea communications cable to the Solomon Islands, it was agreed Wednesday, after the Pacific nation was convinced to drop a contract with Chinese company Huawei.

Mobile App Security Risky Across Sectors

While mobile app security is an issue across all sectors, 50% of apps that come from media and entertainment businesses are putting users at risk. New research from BitSight found that a…

The post Mobile App Security Risky Across Sectors appeared first on The Cyber Security Place.

Despite Growing Awareness, CIOs Struggle With Cybersecurity Risk Management, Survey Reveals

While much of what happens in a modern business depends on how data moves back and forth across the corporate network, concern about network security has risen by 71 percent in the past year, according to a recent survey of chief information officers (CIOs). Despite this growing awareness, however, only 22 percent of respondents said they felt prepared for a cyberattack.

The report showed that the role of IT leaders, which includes everything from selecting hardware and software applications to digitizing business processes, is more difficult than ever thanks to the ever-expanding list of cybersecurity risk management challenges. In fact, 78 percent of chief information officers (CIOs) described the systems they use for cybersecurity risk management as only “moderately effective.”

Cybersecurity Risk Management Lags Despite Growing Concern

The findings of the “KPMG/Harvey Nash CIO Survey 2018” reflect how security leaders’ perception of data protection has changed given the evolution of cybercrime from random acts of information theft to sophisticated malware, ransomware and distributed denial-of-service (DDoS) attacks. For instance, 77 percent of survey respondents cited the threat of organized cybercrime as their greatest concern.

The survey results revealed a disconnect between the number of CIOs who are worried about their ability to defend corporate networks against malicious third parties and insider threats and the number of security leaders who are taking meaningful action. While 23 percent of respondents said they have increased their emphasis on security since 2017, the number of CIOs who cited managing risk and compliance as an area of focus rose by only 12 percent.

The Skills Gap and GDPR Create New Risk Management Challenges

The report suggested that the cybersecurity skills shortage might be contributing to this disconnect. Reports of a dearth of security and resilience skills, for instance, increased by 25 percent year over year. The good news, according to the report, is that cybersecurity risk management is quickly becoming a top priority for board directors.

It’s also worth noting that the report’s authors conducted their research as the General Data Protection Regulation (GDPR) was about to take effect. Despite all the cybersecurity risk management requirements included in the regulation, 38 percent of survey respondents admitted that they would not be ready by the deadline, which has since passed.

The post Despite Growing Awareness, CIOs Struggle With Cybersecurity Risk Management, Survey Reveals appeared first on Security Intelligence.

Security newsround: June 2018

We round up reporting and research from across the web about the latest security news and developments. This month: help at hand for GDPR laggards, try and efail, biometrics blues, and calls for a router reboot as VPNFilter strikes.

Good data protection resources (see what we did there?)

Despite a very well flagged two-year countdown towards GDPR, the eleventh-hour scramble suggests many organisations were still unprepared. And let’s not forget that May 25 wasn’t a deadline but the start of an ongoing compliance process. Fortunately, there are some excellent resources to help, and we’ve rounded them up here.

This blog from Ireland’s deputy data protection commissioner debunks the widely – and wrongly – held theory of a bedding-in period before enforcement. The post also clarifies how organisations can mitigate the potential consequences of non-compliance with GDPR. Meanwhile the Irish Data Protection Bill passed a vote in the Dail in time for the regulation. You can read the bill in full, if that’s your thing, by clicking here.

In the UK, the Information Commissioner’s Office has produced in-depth guidance on consent for processing information. Specifically, when to apply consent and when to look for alternatives. (Plug: our COO Valerie Lyons wrote a fine blog on the very same subject here.) Together with the National Cyber Security Centre, the ICO also developed guidance to describe a set of technical security outcomes that are considered to represent appropriate measures under the GDPR.

The European Data Protection Board (EDPB), formerly known as the Article 29 Working Party, was quickly into action after 25 May. It published guidelines (PDF) on certification mechanisms under the regulation. This establishes the rules by which certification can take place, as proof of compliance with GDPR.

Finally, for an interesting US perspective on the regulation, here’s AlienVault CISO John McLeod. “Every company should prepare for ‘Right to be Forgotten’ requests, which could present operational and compliance issues,” he said.

Do the hack-a

World Rugby suffered a data breach which saw attackers obtain personal details for thousands of subscribers. The data included the first name, email address and encrypted passwords of thousands of users, including players, coaches and parents worldwide. The Sunday Telegraph broke the story, with an interesting take on the news. The breach may have been a random incident but it’s also possible it was a targeted attack. Potential culprits might be one of the groups that previously leaked information from sporting bodies like WADA and the IAAF. Rugby’s governing body discovered the incident in early May, and took down the affected website to conduct more examinations. World Rugby is based in Dublin, and as a result it informed the Data Protection Commissioner about the breach. How would you handle a breach on that scale? Read our 10 steps to better post-breach incident response.

Efail: an email encryption problem or a vulnerability disclosure problem?

A group of researchers in Germany recently published details of critical flaws in PGP/GPG and S/MIME email encryption. They warned that the vulnerabilities could decrypt previously encrypted emails, including sensitive messages sent in the past. Conforming to the security industry’s love of a catchy name (see also: Heartbleed, Shellshock), the researchers dubbed the flaw Efail.

It was the cue for urgent warnings from EFF.org among others, to stop using email encryption tools. As the researchers put it: “EFAIL abuses active content of HTML emails, for example externally loaded images or styles, to exfiltrate plaintext through requested URLs.” The full technical research paper is here, while there’s a website with a Q&A here.

As the story moved on, it emerged that the problem lay more with how some email clients rendered messages. Motherboard’s snarky but well-informed take quoted Johns Hopkins University cryptography professor Matthew Green. He described the exploit as “an extremely cool attack and kind of a masterpiece in exploiting bad crypto, combined with a whole lot of sloppiness on the part of mail client developers.” ProtonMail, the world’s largest secure email service, was scathing about the news. After performing a deep analysis, it said its own client was not vulnerable, nor was the PGP protocol broken.

So what are the big lessons from this story? Distraction is a risk in security. Some security professionals may have rushed to react to Efail even if they didn’t need to. Curtis Franklin’s summary for Dark Reading observed that many enterprise IT teams have either moved away from PGP and S/MIME, or never used them. Noting the criticism of how the researchers published the vulnerabilities, Brian Honan wrote that ENISA, the European Union Agency for Network and Information Security, published excellent good practice for vulnerability disclosure.

Biometrics blues as police recognition tech loses face

There was bad news for fans of dystopian sci-fi as police facial recognition systems for nabbing bad guys proved unreliable. The civil liberties group Big Brother Watch claimed the Metropolitan Police’s automated facial recognition technology misidentified innocent people as wanted criminals more than nine times out of 10, and it presented a report to the UK parliament about the technology’s shortcomings, including the high false positive rate. Police forces have supported facial biometrics as a tool to help them combat crime. Privacy advocates described the technology’s use as “dangerously authoritarian”. As noted on our blog, this isn’t the first time a UK organisation has tried to introduce biometrics.

Router reboot alert

Malware called VPNFilter has infected 500,000 routers worldwide, and the net seems to be widening. Cisco Talos Intelligence researchers first revealed the malware, which hijacked devices in more than 54 countries but primarily in Ukraine. “The VPNFilter malware is a multi-stage, modular platform with versatile capabilities to support both intelligence-collection and destructive cyber attack operations,” the researchers said. VPNFilter can snoop on traffic, steal website credentials, monitor Modbus SCADA protocols, and has the capacity to damage or brick infected devices.

Sophos has a useful plain English summary of VPNFilter and what it can do. The malware affected products from a wide range of manufacturers, including Linksys, Netgear, MikroTik, QNAP and TP-Link. In a later update, Talos said some products from Asus, D-Link, Huawei, Ubiquiti, UPVEL and ZTE were also at risk. As the malware’s payload became apparent, the FBI advised router owners to reboot their devices. This story shows that it’s always worth checking your organisation’s current risk with a security assessment.


The post Security newsround: June 2018 appeared first on BH Consulting.

How to Establish Effective Intelligence Requirements

Behind every successful intelligence operation is a set of well-curated intelligence requirements (IRs). Not only do IRs lay the foundation and set the direction of an intelligence operation, they enable teams to prioritize needs, allocate resources, determine data sources, and establish the types of analysis and expertise required to process that data into intelligence.

Application Security Attacks: Will New NYDFS Regulation Protect NYC Financial Institutions?

You know banks and related financial institutions are primary targets for cyberattacks and other security threats. In fact, notorious 20th-century bank robber Willie Sutton famously said he robbed banks “because that’s where the money is.”

Times really haven’t changed much since then. Even as IT security is tightened, attackers are finding more innovative ways to target financial institutions — which is why it’s imperative to upgrade IT security systems and application security programs regularly.

The banking, financial services and insurance (BFSI) sector is impacted by various regulations that protect such organizations and their customers from potential cyberthreats. The New York Department of Financial Services (NYDFS) introduced a regulation called 23 NYCRR Part 500 for banks, insurers and other financial institutions that operate in New York. The regulation requires each company to “assess its specific risk-based profile and to tailor a program that addresses the risks identified by self-assessment.”

NYDFS Regulation Aims to Bolster Financial Cybersecurity

The regulation initially came into effect on March 1, 2017 — and it’s the first in the U.S. to mandate such protection by banks, insurers and other financial institutions within the NYDFS’s regulatory jurisdiction. Its overarching goal is to protect institutions’ customer information from potential cyberattacks. Entities impacted by the legislation are required to be in compliance by March 1, 2019.

The regulation specifically addresses several compliance areas, including maintenance of a cybersecurity policy; retention of a chief information security officer (CISO) and other qualified personnel; and the establishment of a written incident response (IR) plan.

In the area of application security, the directive states:

  1. “Each Covered Entity’s cybersecurity program shall include written procedures, guidelines and standards designed to ensure the use of secure development practices for in-house developed applications utilized by the Covered Entity”; and
  2. “All such procedures, guidelines and standards shall be periodically reviewed, assessed and updated as necessary by the CISO (or a qualified designee) of the Covered Entity.”

Don’t Sleep on Application Security

One aspect that’s often neglected during IT security implementation is the importance of securing your organization’s applications. During the development stage, security too often slips through the cracks. However, application security is imperative to protect your organization from security threats.

Your applications house vital, mission-critical data and any security breach could cause significant damage and disruption to your organization and its reputation. Still, security is often lost in the mad dash to accelerate application delivery.

For banks and other financial institutions, application security is even more critical and could become an area of vulnerability if left unaddressed.

With so many requirements to keep track of, you might be wondering where to begin. For starters, security leaders should invest in an IR platform to effectively orchestrate and automate their response and cyber resiliency processes. CISOs must also prepare themselves — and their teams — to deal with myriad IT security issues, such as inadvertent insider threats.

To specifically address your organization’s potential application security challenges, register now for complimentary trials of IBM Security AppScan and IBM Application Security on Cloud. Find out how you can conveniently manage application security risk. IBM’s complimentary risk management e-guide also provides practical guidance to address application security risk more effectively. You can apply lessons learned in the e-guide to all of your current IT security initiatives.

Read the complete e-guide: Five Steps to Achieve Risk-based Application Security Management

The post Application Security Attacks: Will New NYDFS Regulation Protect NYC Financial Institutions? appeared first on Security Intelligence.

Jump-Start Your Management of Known Vulnerabilities

Organizations must manage known vulnerabilities in web applications. When it comes to application security, the Open Web Application Security Project (OWASP) Foundation Top 10 is the primary source to start reviewing and testing applications.

The OWASP Foundation list brings some important questions to mind: Which vulnerability in the OWASP Foundation Top 10 has been the root of most security breaches? Which vulnerability among the OWASP Foundation Top 10 has the highest likelihood of being exploited?

While “known vulnerable components” comes in at number nine on the list, it’s the weakness that is most often exploited, according to security firm Snyk. The OWASP Foundation stressed on its website, however, that the issue was still widespread and prevalent: “Depending on the assets you are protecting, perhaps this risk should be at the top of the list.”

So, how can these known vulnerabilities be managed?

Vulnerable Components Can Lead to Breaches

Components in this context are libraries that provide the framework and other functionality in an application. Many cyberattacks and breaches stem from vulnerable components, a trend that will likely continue, according to Infosecurity Magazine.

Recent examples include the following:

  • Panama Papers: The Panama Papers breach was one of the largest-ever breaches in terms of volume of information leaked. The root cause was an older version of Drupal, a popular web content management system, as noted by Forbes.
  • Equifax: The Equifax breach was one of the most severe data breaches because of the amount of highly sensitive data it leaked, as noted by Forbes. The root cause was an older version of Apache Struts.

Often, this vulnerability is not given the attention it requires. Many organizations may not even have a proper inventory of dependent libraries. Static code analysis or vulnerability scans usually don’t report components with known vulnerabilities. In many cases, the component versions would have reached their “end of life,” but were still in use.

It’s also worth considering the complexity of managing component licenses. There are many open source licenses with varying terms and conditions. Some licenses are permissive and some are permissive with conditions (strong or weak copyleft). The Open Source Initiative (OSI) lists more than 80 approved licenses.

Most Components Are Older Versions With Known Vulnerabilities

Synopsys reported that more than 60 percent of libraries in use are older versions with known vulnerabilities. A close look at our own applications’ component profiles suggests this is no exaggeration; most web applications running today use open source components in some way or another.

The popular open source frameworks for web applications include:

  • Struts
  • Spring MVC
  • Spring Boot
  • MyFaces
  • Hibernate in Java
  • Angular
  • Node.js in JavaScript
  • CSLA framework in .NET
  • Many PHP, Python and Ruby frameworks

There are also many object-relational mapping components, reporting tools, message broker components and a plethora of other utility components to consider. These components offer organizations clear cost-benefit advantages, help them stay future-ready and foster digital transformation. They also have wide developer bases that actively develop and maintain them.

But are you using an older version of these components? Do they have reported vulnerabilities? Common Vulnerabilities and Exposures (CVE) entries for components are listed in the MITRE CVE list and the National Vulnerability Database (NVD).

More Than 80 Open Source License Types

Managing open source licenses is an important activity for an organization’s open source strategy and legal and standard compliance programs. Managing licenses for components can be complex. Due care must be given to note the license version, as some may have significantly different terms and conditions from one version to another. Developers may add open source libraries to applications without giving much thought about licenses.

The perception is that open source is “free.” However, the fact is it’s “free” with conditions attached to its usage.

If we review the license clauses carefully, the requirements are more stringent when it comes to distributed software. Reviewing the license requirements of a component will also include reviewing the licenses of transitive dependencies or pedigrees — the components on which it is built. Open source compliance programs usually cover software installed on machines but may not cover the libraries used by web applications.

Automate to Identify Components With Known Vulnerability and License Risks

NVD uses Common Platform Enumeration (CPE) as the structured naming scheme for information technology (IT) systems, software and packages. Tools that automate the process download the CPE dictionary and CVE feeds from NVD, which are available in JSON and XML formats, then parse the feeds and match them against the application’s CPEs to produce reports.
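
As a rough illustration, here is a minimal Python sketch that checks a single component against a locally downloaded NVD JSON feed file. The feed file name, the JSON field names (which assume the NVD JSON 1.1 feed schema) and the naive substring match on CPE URIs are all simplifications; real tools do considerably more:

    import json

    # Hypothetical local copy of an NVD JSON 1.1 feed downloaded from NVD.
    FEED_PATH = "nvdcve-1.1-2018.json"
    # Naive vendor:product:version fragment for a component we depend on.
    COMPONENT = "apache:struts:2.3.5"

    def iter_cpe_uris(node):
        # Walk a configurations node, yielding every CPE URI it references.
        for match in node.get("cpe_match", []):
            yield match.get("cpe23Uri", "")
        for child in node.get("children", []):
            yield from iter_cpe_uris(child)

    with open(FEED_PATH) as feed_file:
        feed = json.load(feed_file)

    for item in feed.get("CVE_Items", []):
        cve_id = item["cve"]["CVE_data_meta"]["ID"]
        nodes = item.get("configurations", {}).get("nodes", [])
        if any(COMPONENT in uri for node in nodes for uri in iter_cpe_uris(node)):
            print(cve_id)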

OWASP provides Dependency-Check, which identifies reported vulnerabilities in project dependencies. It’s easy to use from the command line or integrated into the build process, and it has plug-ins for popular build management tools, including Maven, Ant, Gradle and Jenkins. The build tool Maven also has a “site” plug-in; running the “mvn site” command produces an application-specific report that includes license information for dependencies.
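
If you script the build integration yourself, a minimal sketch along these lines could wrap the Dependency-Check command-line client and summarize its JSON report. The CLI name, flags and report field names below are assumptions based on the tool’s documented interface, so verify them against your installed version:

    import json
    import subprocess

    # Run the Dependency-Check CLI (assumed to be on the PATH as
    # "dependency-check.sh") against a directory of libraries.
    subprocess.run(
        ["dependency-check.sh", "--project", "MyApp",
         "--scan", "./lib", "--format", "JSON", "--out", "reports"],
        check=True,
    )

    # Summarize the JSON report; field names are assumed from the report schema.
    with open("reports/dependency-check-report.json") as report_file:
        report = json.load(report_file)

    for dependency in report.get("dependencies", []):
        for vuln in dependency.get("vulnerabilities", []):
            print(dependency.get("fileName"), vuln.get("name"), vuln.get("severity"))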

There are many other commercial tools with more sophisticated functionality beyond vulnerability identification and license listing. Sources other than NVD and MITRE CVE, such as RubySec, the Node Security Platform and many bug tracking systems, also provide details on known bugs.

IBM Application Security on Cloud has an Open Source Analyzer to identify component vulnerabilities. It’s recommended to integrate such tools into the build process so that the component profile is captured at the earliest stage of development. This allows users to monitor the component profile during maintenance and enhancements.

Addressing Component Issues: Upgrade, Replace or Migrate

The most important step in managing open source licenses is to have a policy on acceptable licenses. The policy has to be created in consultation with your legal department. The policy should be reviewed periodically and kept up-to-date. Building an inventory of components is also important.

Once components have been checked for vulnerabilities and for compliance with the license policy, how to address the findings is context specific. You can either upgrade to the latest version or replace the components with alternatives. This requires a risk-based approach and planning. Framework upgrades, or moving to a different framework or technology, could require significant development effort. The approach has to be decided based on risk and cost, considering all alternative deployment models and technologies.

Upgrading components or migrating can be rewarding. In addition to addressing security issues, it can provide an opportunity to improve the performance of the applications and address compatibility issues because of older component versions.

Component management is a continuous process, as vulnerabilities are reported frequently, even in the latest versions. Obviously, it’s not practical to upgrade or migrate each time an issue is reported; often patches (minor version upgrades) will be available to address the issues. Component management should be given adequate consideration and must be an integral part of an organization’s application security and compliance programs.

The post Jump-Start Your Management of Known Vulnerabilities appeared first on Security Intelligence.

A Step-By-Step Guide to Vulnerability Assessment

Sometimes, security professionals don’t know how to approach a vulnerability assessment, especially when it comes to interpreting the results of an automated scan report. Yet this process can be of real value to an organization.

Besides information revealed from the results, the process itself is an excellent opportunity to get a strategic perspective regarding possible cybersecurity threats. First, however, we need to understand how to put the right pieces in place to get real value from a vulnerability assessment.

A Four-Step Guide to Vulnerability Assessment

Here is a proposed four-step method to start an effective vulnerability assessment process using any automated or manual tool.

1. Initial Assessment

Identify the assets and define the risk and critical value for each device (based on client input). It’s important to determine the importance of each device on your network, or at least of the devices you’ll test. It’s also important to understand whether the device (or devices) can be accessed by any member of your company (such as a public computer or a kiosk) or only by administrators and authorized users.

Understand the strategic factors and have a clear understanding of details, including:

  • Risk appetite
  • Risk tolerance level
  • Risk mitigation practices and policies for each device
  • Residual risk treatment
  • Countermeasures for each device or service (if the service is correlated with the device)
  • Business impact analysis

2. System Baseline Definition

Second, gather information about the systems before the vulnerability assessment. At a minimum, review whether the device has open ports, processes and services that shouldn’t be exposed. Also, understand the approved drivers and software that should be installed on each device, along with its basic configuration (if the device is a perimeter device, it shouldn’t have a default administrator username configured).

Try banner grabbing, or learn what kind of “public” information is accessible based on the configuration baseline. Does the device send logs to a security information and event management (SIEM) platform? Are the logs at least stored in a central repository? Gather public information and known vulnerabilities regarding the device platform, version, vendor and other relevant details.
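
Banner grabbing itself can be as simple as opening a TCP connection and reading whatever the service announces. Here is a minimal Python sketch with a placeholder host and port; probe only systems you are authorized to test:

    import socket

    # Placeholder target (documentation address); substitute a host you own.
    HOST, PORT = "192.0.2.10", 22

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        sock.settimeout(5)
        try:
            banner = sock.recv(1024)  # read whatever the service volunteers
            print(banner.decode(errors="replace").strip())
        except socket.timeout:
            print("service did not announce a banner")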

3. Perform the Vulnerability Scan

Third, use the right policy on your scanner to accomplish the desired results. Prior to starting the vulnerability scan, look for any compliance requirements based on your company’s posture and business, and know the best time and date to perform the scan. It’s important to recognize the client industry context and determine whether the scan can be performed all at once or whether segmentation is needed. An important step is to refine the scan policy and get it approved before the scan is performed.

For the best results, use related tools and plug-ins on the vulnerability assessment platform, such as:

  • Best scan (i.e., popular ports)
  • CMS web scan (Joomla, WordPress, Drupal, general CMS, etc.)
  • Quick scan
  • All-ports scan (i.e., all 65,535 ports)
  • Firewall scan
  • Stealth scan
  • Aggressive scan
  • Full scan, exploits and distributed denial-of-service (DDoS) attacks
  • Open Web Application Security Project (OWASP) Top 10 Scan, OWASP Checks
  • Payment Card Industry Data Security Standard (PCI DSS) preparation for web applications
  • Health Insurance Portability and Accountability Act (HIPAA) policy scan for compliance

If you need to scan critical assets manually to ensure the best results, configure credentials in the scanner configuration to perform a deeper, authenticated vulnerability assessment (if the credentials are shared with the team).
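
To make the idea of a “popular ports” scan policy concrete, here is a toy TCP connect scan in Python. It checks only whether the TCP handshake succeeds, a small fraction of what a real scanner policy does, and the target address is a placeholder; again, scan only hosts you are authorized to assess:

    import socket

    # Placeholder target (documentation address); substitute a host you own.
    TARGET = "192.0.2.10"
    POPULAR_PORTS = [21, 22, 23, 25, 53, 80, 110, 143, 443, 445, 3306, 3389, 8080]

    for port in POPULAR_PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(1)
            # connect_ex returns 0 when the TCP handshake succeeds (port open).
            if sock.connect_ex((TARGET, port)) == 0:
                print(f"{TARGET}:{port} open")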

4. Vulnerability Assessment Report Creation

The fourth and most important step is the report creation. Pay attention to the details and try to add extra value in the recommendations phase. To get real value from the final report, add recommendations based on the initial assessment goals.

Also, add risk mitigation techniques based on the criticality of the assets and the results. Add findings related to any gap between the results and the system baseline definition (misconfigurations and other deviations discovered), along with recommendations to correct the deviations and mitigate the vulnerabilities. Findings from a vulnerability assessment are most useful when they are ordered so that each finding is easy to understand.

In particular, high- and medium-severity vulnerabilities should have a detailed report that may include the following (a minimal record sketch follows the list):

  • The name of the vulnerability
  • The date of discovery
  • The score, based on Common Vulnerabilities and Exposures (CVE) databases
  • A detailed description of the vulnerability
  • Details regarding the affected systems
  • Details regarding the process to correct the vulnerability
  • A proof of concept (PoC) of the vulnerability for the system (if possible)
  • Blank fields for the vulnerability’s owner, the time it took to correct, the next revision date and the interim countermeasures in place until the final solution
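
One way to keep these fields consistent across findings is a simple record type. The following Python sketch is hypothetical; the field names and types are illustrative, not a standard:

    from dataclasses import dataclass, field
    from datetime import date
    from typing import List, Optional

    @dataclass
    class Finding:
        # Details captured at discovery time.
        name: str
        discovered: date
        cve_ids: List[str]
        score: float                       # e.g., a CVSS base score
        description: str
        affected_systems: List[str]
        remediation: str                   # process to correct the vulnerability
        proof_of_concept: Optional[str] = None
        # Fields left blank until the finding is assigned and tracked.
        owner: Optional[str] = None
        days_to_correct: Optional[int] = None
        next_revision: Optional[date] = None
        interim_countermeasures: List[str] = field(default_factory=list)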

Armed with this basic list when performing a vulnerability assessment, the recommendations phase will reflect a complete understanding of the security posture in all the different aspects of the process. It will also deliver a better outcome for something that, in most cases, is just a compliance tool.

The post A Step-By-Step Guide to Vulnerability Assessment appeared first on Security Intelligence.

Cybersecurity at the World Cup: What You Should Know

Events like the World Cup inspire awe about what teams working together and individuals with determination can accomplish — these events are a time for national pride, excitement and enjoyment.

Enhanced security at these events often focuses on physical security, with increased local police, physical barriers and identification checks. Yet, such measures should not overlook the need for heightened cybersecurity — not only because of the expanded digitization of sports venues but because the very attributes that make these events worthwhile open additional avenues for social engineering.

Malicious actors can prey on fans caught up in the emotion of a match or gain access to and release sensitive information at a moment when the effect would be most acute. Enhancing awareness, implementing preventive measures and eliminating the use of digital devices (where practical) would decrease the level of risk at international sporting competitions.

Three primary groups are particularly at risk during global sporting events:

  • Fans and game attendees, including foreign dignitaries and celebrities
  • Athletes participating in the games and those that support them
  • Sporting venues, including computer systems governing the entire event

Fans’ Information as a Target

The largest sporting events allocate more than one million tickets, judging from The New York Times coverage of a large sporting event in February 2018. Tickets for the World Cup in Russia this June have already exceeded 1.6 million, according to FIFA — underscoring the number of potential victims for cybercriminals, hacktivists and nation-state cyber actors.

Financially-motivated malicious actors are likely to see significant opportunity in targeting fans — particularly if they can exploit online ticket sales or transactions conducted in a nonsecure environment — while hacktivists and nation-state cyber actors are likely to seek access to information and websites that will be politically advantageous, either now or in the future.

Fans traveling internationally to attend high-profile sporting events are more likely to receive phishing attack messages — in fact, phishing-related spam increased by more than 40 percent during the World Cup in Germany in 2006, according to Comsec Group.

In these attacks, seemingly legitimate communications invite recipients to click on a link or file that will download and activate malicious software on their device. Cunning cyber actors are likely to exploit factors that can decrease vigilance to malicious messages, such as fans’ desires to congratulate and promote their teams or share their experiences on social media.

In addition to phishing attacks, fans can unknowingly expose themselves to malware by using nonsecure Wi-Fi, including open networks available in airports, hotels and restaurants. One such attack prompts users to update software on their mobile device, then installs malware onto the device instead. Nonsecure Wi-Fi can enable others to see any sensitive information sent over the network, including usernames and passwords, financial information and private documents.

Fans — and their family and friends back home — can also fall victim to the stranded traveler scam. In this attack, malicious actors hijack the email account of someone traveling overseas. With this privileged access, they can send targeted messages to friends and family members, claiming to be the traveler in desperate need of funds quickly.

Legislation and policies governing personal information and surveillance vary from country to country. Some national governments have cautioned their citizens, prior to past global sporting events, not to bring electronic devices or to clean their devices of any sensitive material and consider using a “burner” device to avoid surveillance from the host country.

For fans traveling to global sporting events, we recommend the following measures to enhance cybersecurity:

  • Be highly suspicious of messages containing links or attachments.
  • Avoid using public Wi-Fi. Use a private Wi-Fi network or virtual private network (VPN) that encrypts data to decrease some risk.
  • Warn family and friends against potential scams.
  • Be cautious of where and how you use a credit card for payment. If in doubt, use cash to avoid compromise of financial information.
  • Ensure any devices you bring have the latest operating system and applicable patches installed before you depart.
  • Consider bringing a “burner” phone in which you use a SIM card purchased at your destination with cash, and avoid bringing any additional electronics.
  • Avoid accessing social media or email.
  • Consider going “off the grid” while traveling, except for emergency communications.

Athletes Under Cyberattack

Athletes, sports clubs and sports agencies have become frequent victims of cyberattacks and information leaks over the past two years, as noted by The Telegraph. The upcoming World Cup would provide an ideal opportunity for cyber actors seeking to garner enhanced attention for their actions.

Hacktivists and nation-state backed actors seeking to tarnish the reputation of athletes, teams or their countries may find a worldwide sporting event an ideal setting in which to disclose derogatory information. Additionally, cybercriminals or malicious actors hired by an opposing team have an incentive to steal valuable information about game tactics or financial data to affect high-stakes games.

In the fall of 2016, a hacking group released confidential information about athletes acquired from databases on the World Anti-Doping Agency’s (WADA) networks, according to a public statement from WADA. The statement further explained that the attackers had used targeted phishing attacks against several WADA accounts, eventually gaining login credentials, allowing unauthorized access to the system. In April 2017, the International Association of Athletics Federations (IAAF) reported that the same group had hacked into its system, targeting information on athletes’ exemptions for drug use.

Athletes and those that support them also face potential threats from opposing teams, judging from past precedent. In 2015, personnel working for the St. Louis Cardinals, a U.S. baseball team, came under FBI investigation for allegedly hacking into sensitive networks belonging to a rival team, the Houston Astros, according to The New York Times.

Some teams are already implementing additional security measures to prepare for the World Cup this June. According to The Guardian, the Football Association will provide its own secure Wi-Fi for players and cautioned them about posting information that could reveal the team’s location, choice of players for the game or tactics.

Athletes and those that support them can follow similar practices to enhance cybersecurity during the games:

  • Employ a team chief information security officer (CISO).
  • Enhance awareness of potential attack vectors, including suspicious links or attachments in emails and prompts to update software systems.
  • Prohibit players from connecting to nonsecure Wi-Fi, and provide a separate, secure network for the team.
  • Harden any computer equipment the team may use by installing the latest versions of operating systems and patches, and disabling unused ports, unused accounts and file and printer sharing.
  • Limit players’, coaches’ and support personnel’s use of social media and email.
  • Consider asking players or support personnel to go “off the grid” immediately preceding and during major sporting events.

Venue Administration Vulnerabilities to Cyberattack Likely to Grow

As sporting event venues, scoring equipment and communication with journalists and fans become increasingly digitized, cyber risks related to event administration are likely to grow exponentially. Nation-state backed actors or hacktivists may seize the opportunity to compromise the integrity of networks controlling event venues, particularly when controversial political events dovetail with planned games. Cybercriminals and attackers hired by opposing teams may be motivated to fix a match by tampering with cameras used to assist referees, scoring systems or power grids supporting the games.

According to a report by the Center for Long-Term Cybersecurity at the University of California, Berkeley, the most common cyberthreats to sports venues currently include attacks against IT systems and ticket operations — but in the future may include devices that would affect the integrity of the game itself. Some concerning incidents at sporting events have already occurred, such as the cyberattack at the 2003 Pan American Games in the Dominican Republic that prevented scores from reaching journalists and fans, according to Security Affairs.

Industrial control systems, power grids and threats from Internet of Things (IoT) devices can further complicate cybersecurity for sporting event administrators, and an appropriate response is likely to involve close coordination with national cybersecurity units or even international organizations like Interpol. In March 2018, Interpol held a conference to discuss security at sporting venues, addressing topics such as IoT and appropriate risk management.

Distributed denial-of-service (DDoS) attacks are increasing in volume — particularly against IoT devices — doubling in a six-month time frame in mid-2017, according to a Corero report. IoT devices frequently lack appropriate security measures, such as updated firmware, firewalls or strong passwords during setup, with the potential to wreak havoc as a major sporting event is in full swing.

On May 23, 2018, Reuters reported how Ukraine raised alarms that a DDoS attack from malware on routers would interfere with the Union of European Football Associations (UEFA) Champions League soccer final in Ukraine later that week. Luckily, the warning appeared to inoculate the event from attack.

We recommend the following measures to sporting event administrators for enhancing cybersecurity:

  • Have a cybersecurity response team and a CISO.
  • Coordinate with national and international cybersecurity units to implement a collaborative approach.
  • Employ the services of cybersecurity vendors.
  • Be prepared for a large volume of attacks, and test response mechanisms to ensure they can handle the load.
  • Isolate systems from the internet (when possible).
  • Be wary of adopting new technologies for tasks central to the integrity of the game. Consider whether analog systems will be most appropriate for some functions.

From the high publicity surrounding global sporting events to the lucrative nature of exploiting expensive ticket transactions, malicious actors will have multiple reasons to target fans, athletes and venues at the World Cup this year. Potential victims can help decrease opportunities for attack by maintaining a higher level of vigilance, employing security best practices, such as updating software and patches, and being judicious about technology use, including opting out altogether when appropriate.

The post Cybersecurity at the World Cup: What You Should Know appeared first on Security Intelligence.

Three-Quarters of US Federal Agencies Face Cybersecurity Risk Challenges

Limited network visibility and a lack of standardized IT capabilities have led to an increase in cybersecurity risk across three-quarters of U.S. federal agencies, according to a new government report.

The U.S. Office of Management and Budget (OMB), in collaboration with the Department of Homeland Security (DHS), recently published the “Federal Cybersecurity Risk Determination Report and Action Plan” in response to a presidential executive order issued last year. The researchers used 76 metrics to assess the way federal agencies protect data. Of the 96 agencies analyzed in the report, the OMB classified 71 as “at risk” or “high risk.”

Cybersecurity Risk Assessment Reveals Persistent Challenges

Although 59 percent of agencies said they have processes in place to communicate cybersecurity risk issues, the report’s authors concluded that 38 percent of federal IT security incidents did not have an identified attack vector. In other words, those who encountered a data breach were not able to determine how their defenses were penetrated. As a result, the OMB vowed to implement the Director of National Intelligence’s “Cyber Threat Framework” to improve its situational awareness.

Meanwhile, only 55 percent of agencies said they limit network access based on user roles, which opens up myriad cybersecurity risks, and just 57 percent review and track admin privileges.

Standardization can also help reduce risk in government applications. According to the report, a scant 49 percent of agencies have the ability to test and whitelist software running on their systems. The authors also suggested consolidating the disparate email systems used across agencies, since this is where phishing attacks are often aimed.

An Untenable Security Situation

The OMB cited a need to beef up network visibility and defenses. Its cybersecurity risk assessment revealed, for instance, that only 30 percent of agencies have processes in place to respond to an enterprisewide incident, and just 17 percent analyze the data about an incident after the fact.

“The current situation is untenable,” the report asserted. As a result, the authors noted that the DHS is working on a three-phase program to introduce tools and insights to solve security issues, which will begin later this year.

The post Three-Quarters of US Federal Agencies Face Cybersecurity Risk Challenges appeared first on Security Intelligence.

How security leaders can be empowered to drive results

The overwhelming demands on security leaders today can have a paralyzing effect. Gartner analysts provided guidance to security and risk leaders and practitioners on how to be empowered to adapt their people, processes and technologies to address the old and the new; to transform their approach to risk governance to be more continuous and inclusive; and to scale their security capabilities in other ways than by hiring more people. Much of this empowerment can come …

The post How security leaders can be empowered to drive results appeared first on Help Net Security.

IBM Adds New Features to MaaS360 with Watson UEM Product

IBM announced on Monday that it has added two new important features to its “MaaS360 with Watson” unified endpoint management (UEM) solution.

UEM solutions allow enterprise IT teams to manage smartphones, tablets, laptops and IoT devices in their organization from a single management console.

What Are the Different Types of Cyberthreat Intelligence?

Cyberthreat intelligence: It’s a growing business (and buzzword) that provides many market opportunities. Consuming threat intelligence data is valuable for organizations to improve their security posture and strengthen their protection, detection and response capabilities.

But there are some sharks in the water. Before you dive deeper into threat intelligence, explore the clear distinction between data and intelligence: Data is a value that is the result of a measurement or an observation. Intelligence, however, is the result of analyzing data and then disseminating it to the right audience.

If you talk to vendors who are trying to sell you threat intelligence information, make sure that they are referring to relevant cyberthreat intelligence — and not just a big pile of data.

The Different Types of Threat Intelligence

The use of intelligence isn’t new, and it isn’t limited to cyberthreat intelligence. Intelligence has been used throughout human history and has been collected from several different sources:

  • Human intelligence (HUMINT): The most obvious type of intelligence, which is gathered from humans using interpersonal contact (directly or indirectly). It can also happen more covertly, via espionage or observation.
  • Signals intelligence (SIGINT): Gathers information via the interception of signals. These signals can be communication between people (COMINT), electronic intelligence (ELINT) or foreign instrumentation (FISINT), which is the interception of foreign electromagnetic emissions.
  • Open-source intelligence (OSINT): Collects information from publicly available sources. This data includes news, social media and public reports. Open-source intelligence, however, is not related to open-source software. The concept of OSINT has existed for years. Yet, the growth of instant communications and the capabilities for large-scale data correlations and data transformations have made it more valuable, especially for the computer security community. OSINT includes social media intelligence (SOCMINT), which is the collection of intelligence based on social media channels, conversations and signals.
  • Geospatial intelligence (GEOINT): Collects information from geospatial data, including GPS data and maps. This information can provide additional geographical contextual information on threats. Do not underestimate the possibilities of false flags and be prudent about using GEOINT information for geographical attribution.
  • Financial intelligence (FININT): Gathers information about the financial capabilities or motivation of the attackers. In the context of law enforcement, FININT is often used to detect suspicious financial transactions.
  • Tech intelligence (TECHINT): Gathers intelligence on equipment and material to assess the capabilities of the opponents. TECHINT allows you to update your protection measures to counter the use of this equipment or material.
  • Market intelligence (MARKINT): Collects intelligence to understand the market of a competitor or adversary.
  • Cyber intelligence (CYBINT): The collection of data via different intelligence-collection disciplines. In a lot of cases, CYBINT will collect data from SIGINT, OSINT and ELINT. This data will also occasionally come from SOCMINT, HUMINT, GEOINT and other intelligence disciplines.

Read the 2018 IBM X-Force Threat Intelligence Index

Start With a Cyberthreat Intelligence Program

Cyberthreat intelligence feeds the detection, prevention and response processes within your computer security program. It is complementary to the incident response (IR) process and helps in reducing the organizational risk. It will support your security operations center (SOC) and provide the necessary input to fulfill requests for information (RFIs) from your management board, directors or other departments.

It also provides the essential context data to prioritize the most critical attacks and continuously update your protection measures. It’s the information that allows you to detect incidents earlier and investigate them to understand the scope — and, possibly, the intentions of the attackers.

Here are three questions to ask before starting your program:

  1. Is there room in the budget? This might sound like a no-brainer, but it’s easily forgotten. A cyberthreat intelligence program will almost always be a cost center. You can measure its performance, but (unless you’re in the business of selling the threat data) it’s not going to generate additional revenue. Besides the initial startup cost of the program (capital expenditure, or CAPEX), don’t forget to budget for the operational expense (OPEX). Tooling, subscriptions and the like will not be the biggest chunk of the budget, however; the center of a strong program is personnel.
  2. Are the essential IT processes already developed? It doesn’t make sense to spend time on providing threat intelligence information to other IT departments if they are not able to act on the information. Having intelligence without a follow-up action is about as valuable as not having intelligence at all. Being able to increase protection measures quickly, evaluate vulnerabilities and apply the relevant patches — or search for signs of an intrusion — are just some of the processes that need to be already in place.
  3. Is there access to system, network or application data? A lot of the data that is needed to verify threat intelligence information already resides in your network. Data from firewalls, proxy servers, domain name system (DNS) logs, intrusion prevention and detection events, application logs, antivirus systems and other security controls give you valuable information about what’s going on inside your network. Focusing on the outside threat feeds and threat data — and then not being able to validate this with internal information — is not efficient and will probably cause frustration.

Every cyberthreat intelligence program should include both operational and strategic components. A robust operational component will give you the ability to identify incidents; contribute to the investigation of incidents; and tune the protection and detection controls. A strong strategic component will help you build relationships with other communities and organizations, including information sharing and analysis centers (ISACs); other threat-sharing communities; and vendors and providers of restricted information sources (i.e., sources that provide non-public information for your specific equipment or sector).

The strategic component will identify new trends, evolving threats, emerging technologies and new standards. It will also provide the information necessary to attribute adversaries, identify attack campaigns and understand attacker tools, and it will inform the architecture recommendations you make to your IT department.

Build Your Team

There’s a chicken-and-egg problem: You need a team to run the tools and gather the data, but you need tools and data to support your team.

Good threat intelligence analysts can overcome this problem by starting with only a few sources, automating the process and then expanding the number of sources. Start with building the team, which will not happen overnight. In most cases, the team will grow organically. Some teams will not have full-time members — and they may only be able to spend part of their time on threat intelligence.

Find people with different backgrounds, preferably with demonstrated skills in security operations and analytic mindsets. Technical expertise relevant to your equipment and some hands-on experience are essential. Your team members will need to talk to different audiences and write concise, understandable reports, so executive communication skills and excellent writing are necessary.

Find Your Data Sources

Identify the data sources that define your threat landscape; document how these sources will be used; and assign roles and responsibilities within the team for collecting, assessing and distributing the information. Your organization’s internal information can be one of the most valuable threat data feeds to analyze (via threat hunting).

After all, the best source of intelligence is still your own data. Identify a limited set of sources that deliver regular, complete and valuable data and that are most useful for your organization. DNS logs, proxy logs and endpoint anti-malware event data, for example, can be a treasure trove of information.

Searching for anomalies without a starting point will be difficult. You need to be able to gather malicious domain names, file hashes and other indicators of compromise. You can obtain this data from threat intelligence sharing platforms or by actively participating in threat-sharing groups, then use it to quickly identify attacks targeting your network. Additionally, this information will help with composing internal threat information reports.
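
As a hedged illustration, the sketch below normalizes a CSV export from a threat-sharing platform into indicator sets ready for matching. The column names ("type" and "value") and the file name are assumptions for illustration; real platforms have their own export formats and APIs, so check yours before relying on this shape.

```python
# Minimal sketch: load a threat-sharing platform's CSV export into
# indicator sets grouped by type (e.g., "domain", "sha256").
# The column names and file name are illustrative assumptions.
import csv
from collections import defaultdict

def load_indicators(path):
    """Return a dict mapping indicator type to a set of values."""
    indicators = defaultdict(set)
    with open(path, newline="") as handle:
        for row in csv.DictReader(handle):
            indicators[row["type"].strip().lower()].add(row["value"].strip())
    return indicators

if __name__ == "__main__":
    iocs = load_indicators("shared_indicators.csv")
    print(len(iocs.get("domain", set())), "domains loaded")
    print(len(iocs.get("sha256", set())), "file hashes loaded")
```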

Measure Your Success

When you start your program, you have to define the stakeholders and goals. There should be a good understanding of reports: What is the frequency of the reports? Who receives them? Who should act on them? Who will provide feedback and input?

Measuring success is difficult without describing key performance indicators (KPIs). Make sure these are relevant to your organization and your team. How many intelligence reports has your team produced? What was the feedback from intelligence consumers? Make sure your intelligence reports include a feedback cycle so you can measure the satisfaction of your stakeholders.

Don’t be afraid to include some easy-win metrics, too. You can list the number of indicators seen in your network or the number of attacks stopped because protection measures were updated based on threat data. Of course, which metrics matter will depend on the expectations of your stakeholders.
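
As one hedged example of an easy-win metric, the snippet below tallies blocked attacks per month from a simple event log. The event shape (ISO date string plus action) is an assumption for illustration, not a standard format.

```python
# Minimal sketch: tally blocked attacks per month as an easy-win metric.
# The event shape (ISO date string, action) is an illustrative assumption.
from collections import Counter

events = [
    ("2018-04-02", "blocked"),
    ("2018-04-17", "blocked"),
    ("2018-05-03", "allowed"),
    ("2018-05-21", "blocked"),
]

blocked_per_month = Counter(
    date[:7] for date, action in events if action == "blocked"
)
print(blocked_per_month)  # Counter({'2018-04': 2, '2018-05': 1})
```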

You can also measure the success of the program by looking at its maturity:

  • Lowest maturity: a team with no plan and no time reserved for threat intelligence.
  • Low maturity: a small number of IT staff spend a limited amount of time per week on threat intelligence.
  • Growing maturity: more staff spend more time on threat intelligence.
  • Medium maturity: dedicated staff members for threat intelligence.
  • Full maturity: several dedicated staff members plus a team leader for threat intelligence.

Five Helpful Tips for Your Cyberthreat Intelligence Program

  1. Understand your business or sector. Threat intelligence that isn’t relevant to your business, sector or environment will drain your resources without providing much valuable return.
  2. Define your focus and priorities at the beginning of the program. Covering everything is impossible. Don’t get buried by the information. There is always more information to gather — and you cannot simply consume it all.
  3. Remember that a threat intelligence program is an ongoing (and repeating) process. Be prepared to include feedback loops and ensure service improvements.
  4. Prepare to automate things. If you only rely on the manual processing and dissemination of information, then your cyberthreat intelligence program will not grow easily. Your ability to ingest data and act upon it in an automated fashion will greatly increase the success of the program.
  5. Put a basic data classification process in place. This will enable you to determine, in consultation with other departments, whether you are allowed to share information outside your organization. Implementing something like the traffic light protocol (TLP), which is explained in detail by the Computer Emergency Readiness Team, can ensure that sensitive information is only shared with the appropriate audience. (A short sketch of TLP-gated sharing follows this list.)
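
On the fifth tip, here is a minimal sketch of how traffic light protocol labels might gate report distribution in tooling. The audience categories are simplified assumptions layered on the common TLP definitions; consult the official TLP guidance for the authoritative sharing rules.

```python
# Minimal sketch: gate report distribution on a TLP label.
# Audience categories are simplified assumptions, not official terms.
TLP_AUDIENCES = {
    "tlp:white": {"public", "community", "organization", "named_recipients"},
    "tlp:green": {"community", "organization", "named_recipients"},
    "tlp:amber": {"organization", "named_recipients"},
    "tlp:red":   {"named_recipients"},
}

def may_share(report_label, audience):
    """Return True if a report with this TLP label may reach this audience."""
    return audience in TLP_AUDIENCES.get(report_label.lower(), set())

assert may_share("TLP:GREEN", "community")
assert not may_share("TLP:AMBER", "public")
```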

Starting with a cyberthreat intelligence program isn’t hard if you make the time to plan. Make sure you hook up to an existing threat intelligence sharing community and learn from their experience when starting your own program.

Read the 2018 IBM X-Force Threat Intelligence Index Now


The post What Are the Different Types of Cyberthreat Intelligence? appeared first on Security Intelligence.

SecurityWeek RSS Feed: Considerations For Evaluating Vendor Risk Management Solutions

The Vendor Risk Management (VRM) space has quickly become a hot topic this year. It seems like everywhere you turn, new companies offering VRM solutions are popping up. As we’ve seen with other markets in security, most vendors in the space use the same marketing buzzwords. Each vendor seems to claim that it provides all of the same features and capabilities as the next vendor.

read more

The Digital Disaster: A CIO Embraces Cyber Resilience

The following story illustrates the challenges a chief information officer (CIO) might encounter when building a cyber resilience and response plan. While Martin Kinsley is fictitious, the nightmare scenario he experiences — rapidly spreading malware and data loss — is all too real for many organizations. Companies often believe their business-critical data is safely backed up — only to be met with permanent data loss. Read on to see what challenges Martin faces in his cyber resilience efforts and discover what choices he makes in response. What would you have done differently?

“So many people rely on us to get where they need to go,” said regional airline CIO Martin Kinsley to his team of IT leaders. The meeting was focused on cyberattack prevention, but he never missed an opportunity to discuss customer service. As he wrapped up the Friday afternoon session, he took care to emphasize the airline’s people-first values.

“Any mistake can affect people on a deeply personal level,” Martin said. “Missing flights means missing business meetings, birthdays, weddings — those are moments our passengers can never get back.” He felt proud as he wrapped up the meeting and returned to his office; he’d worked tirelessly to convince the rest of the leadership team that security and cyber resilience needed to be a priority. While he knew the business continuity plan was a work in progress, the airline’s customer satisfaction scores had never been higher.

The IT team didn’t always receive the credit it deserved from headquarters leadership, but Martin was aware that the success of the team’s client-facing systems and infrastructure helped the airline maintain its multicity contract with a major air carrier.

No Rest for Weary Security Leaders

As Martin worked through his outstanding emails at the end of the week, he thought about how he had earned the short getaway that awaited him. With just a few tasks standing between him and a three-day vacation, he was ready for the break.

Just as Martin prepared to close his inbox, a new message came in from the airline’s security information and event management (SIEM) solution. He read the subject line — “High Alert: Network Security Incident” — and quickly realized the message was serious.

Most of Martin’s team was still in the office, so he asked them to assemble in the conference room immediately. He tasked them with investigating the notification and tracking down the cause of the security incident. Help desk calls began to roll in at the same time, and soon the team had the answer: Malware had infected hundreds of airport terminals.

Martin let out a heavy sigh.

The help desk advised customer service agents to power down the terminals, but it was too late. Every endpoint was already infected and encrypted.

Cyber Resilience in the Face of Chaos

The malware had spread rapidly across his airline’s remote agent and passenger terminals over the past few hours. The infected terminals were now essentially bricks. Help desk employees were fielding calls from frantic airport employees complaining about angry passengers. To make matters worse, remediation attempts had failed since the malware’s encryption was airtight.

Not only was the malware spreading like wildfire, but the damage it inflicted was also focused on the largest airport in their region — a major international hub. Thousands of passengers at that airport were effectively grounded on the busiest weekday for airlines (and they were definitely unhappy about it).

All six of their airline terminals shared a network, which was segregated from headquarters’ networks. Forensics would come later, but Martin was fairly certain whatever strain of malware they were dealing with had the escalated account privileges necessary to spread damage to every one of the airline’s terminals.

Martin reached into his pocket to text Marina Petrov, the airline’s CEO, who was already fielding calls from her office with airport supervisory staff about emergency policies for granting vouchers and hotel rooms to angry travelers on the ground. He quickly typed to Marina that he was afraid the incident was getting even worse.

By late Saturday morning, Martin’s prediction had come true. The malware had reached every endpoint on the airline’s terminal network — executing malicious code at all of their regional airports. The team’s frantic efforts to contain the malware had failed, and they were now in full-on remediation mode.

Marina stood in the door of the conference room where the IT team had created an impromptu security operations center (SOC). Martin informed her that — since all of the terminals had been infected and encrypted — his team had no choice but to start from scratch, which meant restoring from backups.

Failed Backups Lead to Business Continuity Disaster

“You’re saying the last usable backup we have is six months old?” Martin asked frantically. When system admin Fei Zhou nodded, Martin felt faint. His long-awaited weekend getaway had been replaced by the worst weekend he could have imagined. In fact, it was now early Saturday afternoon, and it had been nearly 30 hours since he’d seen his bed or taken a shower.

“The network-attached storage has been running idle since before I was hired — and no data’s been backed up,” said Fei.

After looking up and down through the directory and backup logs, Fei discovered the backup had stopped working the same week her predecessor left the company. She also saw the admin credentials had changed for the centralized network management tool.

Martin bit back the urge to ask why Fei hadn’t bothered to test backups (or do any other kind of digging) during her nearly five months at the organization. It was a definite failure on her part, but the current situation wasn’t any one person’s fault. It was a series of failures caused by everyone on the IT team.

Martin winced when he realized his worst-case fear of permanent data loss had come true. He discovered the latest release of the reservation software had been issued three months ago, so his team could restore the backups — but they would need to perform manual updates afterward. The manual update process was long and grueling, but Martin and his team maintained their composure as they worked together. All the while, Martin scolded himself for not checking the backups himself.

Lasting Financial and Reputational Repercussions

“Too little, too late,” Marina said. Her words echoed in Martin’s ears after a Tuesday morning meeting with the leadership team. Martin’s team had demonstrated heroic behavior over the weekend, working tirelessly to restore backups to each of the terminals and manually update the reservation software. It was a painstaking process, but their hours of work paled next to the IT failures that had caused the issue in the first place.

The airline’s operations were just beginning to return to normal four days after the malware hit. News reports were scathing — and the chief financial officer’s (CFO) tentative projections of just how much the incident would cost were beyond grim. Union negotiations around pilot overtime revealed staffing costs running into the millions. That number didn’t even begin to cover the costs of accommodating travelers over the weekend or the reputational damage the airline had suffered.

Martin knew the media would eventually forget the information technology incident, but he couldn’t say the same for the airline’s customers. Would they ever trust the company to get them where they needed to be again? Marina and the CFO had also alluded to rumors of heavy federal fines and a loss of contracts.

While the average cost of a data breach is well over $3.62 million, Martin knew this disaster was going to be far above average, even without the leak of data. He was certain the next chapter would include better information security safeguards and regular backup testing — but Martin had few other certainties about the airline’s future.

Embracing Proactive Cyber Resilience

Rapidly spreading malware that causes permanent data loss is all too common in the real world. In the past year, countless high-profile organizations have experienced long-lasting repercussions as a result of ransomware and malware spreading through their networks.

As Martin realized too late, his experience was the product of countless technical and human failures across the IT department. Although tasks were left undone for months on end, it wasn’t because his IT team wasn’t putting in hours or effort. Martin wanted to lead his organization toward a state of cyber resilience, but he lacked the expertise and resources to create an end-to-end strategy.

To avoid an expensive disaster, security leaders like Martin should consider onboarding resilience consulting services to design a business continuity plan and establish a central incident management hub instead of relying solely on a series of SIEM and network monitoring applications.

A resilience orchestration tool can help security teams automatically contain and respond to an incident across complex network structures. And with the help of incident response experts, CIOs like Martin can also contain malware in the event of a breach and ensure that the security operations team does all the right things.

In addition, security leaders can invest in automated, cloud-based backup services to protect sensitive data and implement disaster recovery-as-a-service (DRaaS) tools to prevent a lasting IT outage.

A worst-case scenario can become a reality at any time — but it doesn’t need to result in regulatory repercussions or long-term damage to an organization’s reputation. With the systems and processes for proactive security response, CIOs can achieve confidence in their cyber resilience plans and remediation ecosystem.

Read more: Practice — The Best Defense for Responding to Cyber Incidents

The post The Digital Disaster: A CIO Embraces Cyber Resilience appeared first on Security Intelligence.

The Compliance Crisis: A Compliance Officer Faces an Outdated Risk Management Framework

The following story illustrates what can happen when a compliance officer is confronted with an outdated or incomplete risk management framework. While Frank Roth is fictitious, many real-world organizations face pressing security and compliance issues due to their failure to regularly update policies and procedures. Read on to learn about Frank’s challenges and the choices he makes to overcome them. How would you have responded in his place?

It was Frank Roth’s first day on the job as a risk and compliance officer at a utility company. While Frank had decades of experience creating risk management frameworks for highly regulated industries, joining the utility company was a bold career move and an important promotion for him. After a busy morning of filling out paperwork and touring the headquarters, it was time to assess the organization’s risk status.

Frank expected the utility company already had some solid policies in place so he could jump right in preparing for the upcoming audit he’d learned about during his final interview. He grabbed a binder labeled “Compliance” from the bookcase in his office.

Frank flipped to the last page and couldn’t believe the listed date: 2016. That can’t be right, he thought. He imagined there must be some missing data because the last documentation added to the compliance notebook was two years old.

His heart pounding at this alarming discovery, Frank emailed chief information officer (CIO) Shondra Washington to schedule a meeting.

Missing Data Reaches Epidemic Proportions

“Well, you know I started just two weeks before you did,” Shondra informed Frank.

Frank didn’t know that — but he did now. Since it was now his third day on the job, it must only be Shondra’s 17th. Frank realized there was indeed no chance Shondra had compliance documents that hadn’t been added to his notebook.

“I’m sorry to hear your compliance notebook’s so far out of date. Frankly speaking, my experience hasn’t been all that different,” Shondra said as she tapped a pen against her desk. “As a matter of fact, I discovered the IT team’s master inventory list hadn’t been updated in nine months.”

While Shondra went on to discuss the epidemic levels of shadow IT she was trying to rein in, Frank began to panic. The company’s next audit date was approaching quickly. At a loss, he struggled to summon his usual dry humor.

“Well, meeting over a thousand specific compliance requirements and identifying risks will require knowing what’s on the network,” he said.

Information Labeling and Handling Ceased to Exist

Frank and Shondra’s meeting was scheduled for an hour but ended up lasting most of the afternoon. Frank learned that when it came to information labeling and handling, things were even worse. How was this possible?

Shondra told him the most recent information labeling and handling policy, which defined information labeling updates as the CIO’s purview, was 18 months old. Frank knew the utility company had brought new assets onto the network in an aggressive expansion into renewable energy sources — and it had undoubtedly acquired new customers.

He knew the organization was swimming in data. He couldn’t even begin to wrap his head around the amount of data assets that were unlabeled and unaddressed in the access policy.

Identity and Access Management Mystery

Shondra assured Frank she was working hard to create an updated inventory and get her hands around information labeling. However, she was hesitant to provide a solid timeline on either project. Frank glanced down at his notes and noticed an item labeled “privacy impact assessments (PIAs),” which he knew was an analysis of how information is handled. He asked Shondra about the state of identity governance.

Shondra had an uneasy expression, so Frank continued: “User access controls should be able to determine what users were added and when — who left the company and whether their user IDs were revoked. I also need to demonstrate which IT administrators have access to critical systems.”

“Well, I wouldn’t really describe the current state as identity governance,” Shondra said. “More like ad-hoc user access chaos. I kicked off an identity governance audit my second day on-site, but it’s not going to be done for a few weeks.”

Frank knew he and Shondra had both taken new roles hoping for the best — and had ultimately stepped into an ordeal of mismanaged regulatory requirements and processes. Unfortunately, he wasn’t sure how to manage risks when IT leadership was struggling to maintain the status quo.

Digital Transformation Disaster

Shondra worked tirelessly over the next week to bring the IT department up to par. Frank faced an internal compliance and risk management framework that was years out of date, but he did his best to fill in the gaps where he could. The overwhelmed new hires discussed recent app releases over lunch.

Shondra mentioned that the last CIO had focused on cost savings and customer satisfaction. As a result, the customer portal and energy efficiency apps were pushed through DevOps without dedicated time for security testing.

Frank felt his blood pressure spiking as Shondra detailed how the CIO’s “digital transformation” plan included a third-party development agency and unreasonable development timelines. Worst of all: It relied heavily on business users for feature-based acceptance testing.

“So, you’re telling me both customer apps and employee apps could be full of vulnerabilities?” Frank asked.

Shondra nodded slowly. “You know, it’s way more expensive to fix these bugs post-release than just do secure DevOps in the first place,” she said. “I wish we knew the extent of the vulnerabilities, but I have to direct more resources towards actual testing. From what I hear, the requirements kept changing and projects ran over budget, so the last CIO pushed the development agency to do even less testing than usual.”

Frank had hoped he’d find well-documented, updated risk management procedures on his first day. Instead, he was completely uncertain whether the company’s business dealings were even ethically sound. Furthermore, he had no idea how they’d pass upcoming audits — let alone stay ahead of complex regulatory mandates.

Risk Management Framework From the Ground Up

Both newly hired and seasoned compliance and risk management professionals often struggle to develop a proactive stance on business risk management. According to one study, up to 89 percent of organizations didn’t fully understand General Data Protection Regulation (GDPR) requirements six months ahead of the deadline for compliance.

Fortunately, Frank isn’t destined to face a failed compliance audit or to call his former employer to beg for his old job back. Today’s compliance climate is complex and costly, but the right solutions can help leaders reduce risk and stay ahead of regulations — even if they’re dealing with serious compliance fatigue.

Frank could implement an effective risk management framework to help combat the issues he’s facing. His first step might be to identify all network endpoints, as well as both authorized and shadow software, in seconds using an automated endpoint detection solution. He could then apply policy-based compliance to both endpoints and cloud services with a security intelligence platform — making his job a whole lot easier.

By leveraging comprehensive identity and access tools, Frank could bridge the gap between messy patchwork approaches and unified, compliant user governance. A compliance notebook that is two years out of date is certainly stressful, but an ecosystem of data security and protection solutions would automate the overwhelming task of identifying data records that are subject to industry-based regulations.

Rather than give in to his despair, Frank should start by developing a plan to modernize the outdated standard operating procedures, supported by an incident response solution. He could make security tools do double duty by using packs of pre-built and customizable reports for compliance reporting. Finally, he could use a security app exchange to implement built-in compliance reporting and collaborate with incident response experts to develop a truly resilient plan to mitigate risks.

Risk management and compliance are far from simple — especially for individuals like Frank who are struggling to reconcile complex regulatory requirements with outdated operating procedures and scrambling to manually assess organizational risk.

By leveraging automated solutions to create a comprehensive ecosystem of support for risk management, compliance and business resiliency, security leaders can get a better handle on the organization’s security and compliance posture and prepare for the future. Compliance requirements and networks are changing quickly, but developing total oversight and change management can instill confidence in overwhelmed security professionals.

Read more: As Cyber Risk Escalates, the C-Suite Must Take Action

The post The Compliance Crisis: A Compliance Officer Faces an Outdated Risk Management Framework appeared first on Security Intelligence.

The Modernization Misstep: A CEO Takes on Digital Transformation

The following story illustrates what can occur when efforts at digital transformation go wrong. Kelly Zheng may not be real, but the challenges she’s confronted with are far from fictitious. Many organizations and industries struggle with concerns about retaining customers in a disruptive and competitive landscape. Facing a “transform or else” paradigm isn’t easy, but it’s increasingly common. Read on to discover the challenges and choices Kelly faces. Did she choose the correct path?

Insurance company CEO Kelly Zheng knew she wasn’t alone in thinking her industry was one of the most disrupted by technology and innovation. However, she always brought her positive (and practical) attitude to the office.

Like many of her fellow CEOs, she juggled a plethora of changing priorities. Her number one concern lately? The goal of practically every industry: Customer retention. Fortunately, Kelly worked alongside a talented team of C-level executives.

Kelly stared hard at the net promoter score (NPS) chart the chief marketing officer (CMO) had presented, searching for answers in the negative trend line. The dismal data wasn’t the only bad news she’d received that day. After hearing the chief financial officer (CFO) report on the company’s declining revenue and a suspected spike in fraudulent claims, Kelly was worried about the firm’s digital transformation strategy — or lack thereof.

She masked her concern during the CMO’s presentation but revealed her true feelings when the CFO knocked on the door to her office later that day. Kelly knew she needed to act fast and get her leadership team together to find a solution.

“Every company is a technology company in today’s world,” Kelly stressed. “We need to get with the times and offer an omnichannel customer experience. A mobile app is a perfect opportunity to embrace disruption and bring our company to the next level.”

Later, Kelly sounded confident while she outlined her plan to the leadership team: The organization would invest immediately in developing a mobile app. Internally, however, she couldn’t help but wonder if the team could handle a significant digital overhaul against a ticking clock.

Designing a Secure, Frictionless Customer Experience

Kelly knew a mobile app would help the organization stay in touch with its customers, which would ultimately improve customer satisfaction and loyalty. By the time the leadership meeting was over, she had outlined a tentative plan of action to get the mobile app off the ground.

Although the organization’s chief information officer (CIO), Ned Lui, was part of the leadership meeting, Kelly wasn’t able to connect with him until a few days later due to his hectic schedule. She wanted to discuss the app’s possible impact on the company’s current IT infrastructure and operations, but the conversation quickly turned to security risks.

“You should meet with Adela, the chief information security officer,” Ned said. “She will make sure we address app security properly.”

While Kelly was concerned about the mobile app’s security, she needed to get the business requirements for the application and the third-party development team agreement solidified first. She had already asked her design team to take an active role in designing an industry-leading user interface (UI).

Between the world-class user experience (UX) experts at the development agency and her in-house talent, Kelly felt certain her organization was taking the right approach to developing a mobile, omnichannel customer experience — a people-first approach.

Balancing Security and Ease of Use

Kelly recognized that security would be an important concern during the development process, so she kept it top of mind. She highlighted the importance of security in her weekly meetings with the third-party agency she hired to develop the app. She understood there were significant functionality and cost-saving reasons to build the new app with security from the start. However, she wasn’t entirely confident the app agency had the right mindset when it came to balancing security with UX.

Kelly armed herself with app security research and addressed security at every meeting with the development agency. While ease of use was critically important, she grilled the agency project manager to make sure the development team wasn’t sacrificing security for convenience.

Kelly was satisfied with the agency’s practice of secure DevOps, and she kept the rest of the leadership team updated on the progress.

Tackling Fraud Head-On

With development efforts in full swing, Kelly shifted her focus to addressing the costly problem of rising fraudulent claims. She was hopeful that the app would create a flood of new customer accounts, but she was also aware that it could make it easier than ever for customers to file fraudulent claims.

Kelly tasked Ned and Adela with developing a plan to authenticate new user accounts. However, when the task force reconvened, Kelly felt overwhelmed by Adela’s recommendation to explore new solutions.

“Legacy approaches to user authentication and identity verification are clunky and, quite frankly, high-risk,” Adela argued. “We can’t rely on passwords. Instead, we need a dynamic approach to verifying users, devices, environments, behavior and activity.”

Everyone knew Adela was almost certainly right. However, they weren’t sure how to integrate multifactor authentication (MFA) when development was in full swing.

After much discussion, Kelly convinced her colleagues they’d have to stick with a framework-based approach to fraud prevention. Context-based authentication tools would have to wait for the next release.

Unleashing a Mobile-Enabled Workforce

As the go-live date approached, Kelly focused on a final puzzle piece: the insurance organization’s newly mobile-powered remote workforce of insurance agents. Mobile app access for agents was necessary to deliver on the promise of real-time updates.

There was, however, the issue of risk. Could the health of an agent’s personal mobile device compromise the IT infrastructure or (worse) customer data? What if a device was lost or stolen? The organization provided laptops to its agents, but Kelly worried about what came next, because there simply wasn’t enough budget available to equip the agents with company-owned mobile devices as well.

Kelly and Ned opted for the best option they felt they had: an updated bring your own device (BYOD) policy. With the help of the human resources team, they decided to invest in a new, written policy that clearly outlined the agent’s responsibility to protect customer and company data on mobile devices. The new BYOD policy was clear about secure behaviors — such as avoiding sketchy Wi-Fi connections and the importance of putting a lock on each mobile device — but didn’t outline what would happen to people who failed to comply.

Achieving Digital Transformation Without Sacrificing Security

Kelly is far from alone when it comes to balancing the pressures of digital transformation and security. Faced with a fast-ticking clock, she didn’t feel that she had the option to focus on security and still release a great product on time. However, there’s an alternate ending to this story that doesn’t involve a vulnerability-riddled app, fraud or mobile data breaches.

To avoid these modernization missteps, Kelly could have invested in security services to help her task force develop security-focused business requirements and create a comprehensive DevOps framework. For example, disaster recovery-as-a-service (DRaaS) and backup-as-a-service (BaaS) could have helped her team meet resiliency challenges.

Kelly’s developers also could have automated ongoing risk testing in production with a vulnerability scanning tool to avoid the high cost of discovering security risks after the app went live. In addition, an identity and access management (IAM) solution could have helped the development agency protect authentication between mobile and web apps.

Had the app passed a penetration test, Kelly could have approached the go-live date with confidence instead of apprehension. She also could’ve nipped the threat of new account fraud in the bud by investing in a fraud protection solution that examines users, device health and sessions.

Finally, Kelly could have reconciled risk with mobile agents by leveraging a cognitive-enabled unified endpoint management (UEM) solution. That way, everyone would have won: The agents would’ve been able to keep their phones and game apps, and Kelly’s organization wouldn’t have had to purchase mobile devices for its employees.

Digital transformation may be increasingly inevitable in many sectors, but you’re not doomed to face disruption or mounting security risks when delivering new mobile experiences or turning around a digital overhaul before your competitors go live with their apps. With expert assistance and augmented intelligence, organizations can achieve security by design instead of taking an after-the-fact approach to data protection.

Read more: Mitigate Your Business Risk Strategically With Cognitive Application Security Testing​

The post The Modernization Misstep: A CEO Takes on Digital Transformation appeared first on Security Intelligence.

SecurityWeek RSS Feed: Open Source Tool From FireEye Helps Detect Malicious Logins

FireEye has released GeoLogonalyzer, an open source tool that can help organizations detect malicious logins based on geolocation and other data.

Many organizations need to allow their employees to connect to enterprise systems from anywhere in the world. However, threat actors often rely on stolen credentials to access a targeted company’s systems.

read more


What Are the Consequences of Neglecting User Security Training?

Are your user security training efforts working? You may have never paused to think about the relationship your users have with your security program. The goal, of course, is to train your users to understand what to watch out for and what to do in a number of tricky situations.

The reality: At most organizations, users often have much more control than they realize. Users continually make security decisions throughout their working days that can lead to impactful security situations on your network.

They may have signed off on your security policies during their initial onboarding — but do they fully understand what’s expected of them? And are you prepared to deal with the consequences?

User Security Training: A Missed Opportunity?

A substantial portion of security events are brought about by untrained (or improperly trained) users. A security training program can help minimize the risk of these incidents, and a documented plan also demonstrates due care. Don’t just slap together an awareness and training program and hope for the best. Take a measured approach: Align your initiatives with your IT and security programs and overall business risks, then measure your efforts and identify potential improvements.

Do you already have a security awareness and training program in place — but you’re still getting hit? If so, something’s amiss. Data sources such as the 2017 Cost of a Data Breach Study and Privacy Rights Clearinghouse’s Chronology of Data Breaches reveal many examples of people going through the motions without checking to see how things are really working.

A solid awareness and training program revolves around setting expectations. Your users have access to information on a need-to-know basis — and they need to know what they should and shouldn’t do. Your human resources team, IT and security staff, executive managers and other involved personnel also need to know what’s going down and how it matches up with the goals of the business. Otherwise, it’s just noise.

User ignorance isn’t the only thing that creates security challenges. All it takes is boredom, curiosity or trouble at home for someone to do something terrible, and honest slip-ups can bring about pain and suffering, too. But what can you do to reduce the impact of these situations? You can’t realistically expect perfection from your users.

Teach Users to Focus, Not Follow

Set your users and business up for success by having compensating controls in place for when these things crop up. It’s hard to believe that most companies have yet to deploy a proper patch management system and strong endpoint security controls — but that’s the reality in the enterprise today.

A user awareness and training program is not going to prevent all threats, but at least you can show that you were taking the proper steps to address common issues. However, it’s not just about security awareness — it’s also about situational awareness. In other words, it’s not simply people being asked to follow policies blindly. Instead, it’s people being able to think for themselves when presented with trying situations.

A Critical Intersection

On a daily basis, I witness right-turning drivers yielding to vehicles turning left in front of them at an intersection near my home. The right-turning drivers don’t have a yield sign. Still, they yield anyway.

It’s a two-pronged issue:

  • These drivers have been conditioned to believe that if they are turning right (and someone is turning in front of them), they should yield. After all, a yield sign is present in many such intersections.
  • These drivers are not thinking. They’re simply going through the motions without paying attention to what’s around them.

It’s not only aggravating to be stuck behind drivers who yield unnecessarily; it can be downright dangerous for all cars involved. Your users need to be conditioned to do what’s right — but they also need to be paying attention and thinking about their actions. Take this approach to your awareness and training and you can shape user behaviors in positive ways.

The Real Problems at Hand

History has taught us that intentions do not equal results. You cannot take the path of politicians and continue to avoid the real problems at hand. As Brendon Burchard, a performance coach and author, once said on Twitter, “Avoidance is the best short-term strategy to escape conflict and the best long-term strategy to ensure suffering.”

Identify the things in your environment and business culture that are facilitating these user-centric security challenges. The most important question is how: How are users creating IT risks? How are you setting them up for success in addressing those risks? How are you still vulnerable? How can you get to the next level?

If you don’t have a formal security training program in place, get started with some simple steps: emails, lunch-and-learn events, posters and other reminders around the office. If you do have a program in place, but you’re still getting hit with malware infections and failed phishing checks, investigate the cause. Where are the gaps? What are the opportunities for improvement?

There’s always room for improvement in these areas, and it’s up to you to figure out where. Unless (and until) you ratchet up your user awareness and training efforts, they will continue to serve as mere background noise. Sure, it’s not the be-all and end-all solution to your security woes, but such efforts are a vital component. Make sure you’re doing it well.

Listen to the podcast: What’s the Best Defense Against Cyberattacks? You Are

The post What Are the Consequences of Neglecting User Security Training? appeared first on Security Intelligence.

The answer’s in the question: risk assurance that’s ready-made for a regulator

Regulators and auditors expect us to have all kinds of controls in place to manage information security. Standards like ISO 27001 or frameworks like the CIS Controls are helpful guides for applying these controls, but they don’t ask questions in the same way a regulator or auditor would.

Many organisations don’t catalogue the software and controls that protect the good guys and keep the bad guys out. That gap led me to develop a risk assurance framework. To build it, I applied the ‘questioning’ approach I had used in previous operational risk roles in regulated organisations.

What is it?

The risk assurance framework defines the security controls we have (or should have) in place across the organisation. It combines asset registers with descriptions of the processes in place to counter risk. It explains how the organisation knows these controls are working, how it verifies that, and what it does next if a control stops working or flags up an alert.

How to create the framework

Standards tend to be deliberately technology agnostic. In the risk assurance framework, we can be much clearer about the tools we have in place: “we use this brand of firewall. This system sends alerts to [a named person/team/email address] [in real time/every day/every week], and here’s how we act on those alerts. These alerts are read [as soon as they arrive/on a daily/weekly basis]. This is how we get oversight to know that the firewall works, and here’s what we do when it doesn’t.”

You can also document where the alerts and other relevant output are stored [the artefacts or evidence which auditors look for], including any periodic reviews, such as formal discussions or meetings [agenda and minutes] with your support teams or providers.

In short, the framework explains how you manage the control. You then repeat the process for each security control in the organisation. It’s all there, in one place, in the format an auditor wants to see, in a language they understand. It means you’ve thought about security and risk from their perspective.
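
As a rough illustration only, one entry in such a framework might be captured as structured data along these lines. The field names and values are hypothetical, not a standard schema; the point is that every control carries its tool, its oversight routine, its evidence location and its failure response in one record.

```python
# Minimal sketch: one control record in a risk assurance framework.
# All field names and values are hypothetical examples.
firewall_control = {
    "control": "Perimeter firewall",
    "tool": "ExampleBrand firewall",  # hypothetical product name
    "alerts_sent_to": "soc-alerts@example.com",
    "alert_frequency": "real time",
    "alerts_reviewed": "daily",
    "evidence_location": "//shares/security/firewall-alerts/",
    "periodic_review": "monthly meeting with provider (agenda and minutes filed)",
    "on_failure": "raise priority ticket; fail over to secondary firewall",
}
```

Repeating that structure for every control produces exactly the single, auditor-friendly catalogue described above.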

Why it’s useful

Asking questions is a great way to define how an organisation currently handles security, because the answers may point the way to where it wants to go. The process of writing down ‘here’s what I have’ leads directly to asking: ‘do I have what I need?’

Even if a control is working, it’s worth asking if it’s at its best. For example, is a tool producing too many unnecessary alerts? Does it need to be reconfigured? As anyone who drives a car can attest, every now and then it’s worth checking the pressure, pumping up the tyres, and measuring the oil level. Rather than only checking when something goes wrong, a periodic health check can uncover ways to improve on security. Think of it as good governance.

For organisations in a regulated sector, this risk assurance process can save the valuable time otherwise spent preparing a brand-new document in an unfamiliar format when the regulator calls. It also cuts down on the time you need to prepare for an audit.

Who can use it

Just because the process I’m describing seems rigorous doesn’t mean it’s only suitable for large organisations. I see it being useful in lots of different scenarios and companies. It’s not about how many people are in your company; it’s about a mindset and approach to security. The risk assurance framework demonstrates that you have effective oversight and are monitoring the security controls you have in place.

I’m not suggesting that every company needs to do it, but if your role involves dealing with auditors or regulators, this will save you valuable time and effort. You can also adapt it for vetting any third-party providers. Even where such a document isn’t a requirement, it’s still good to have because it informs your understanding of what your controls are, in plain English and in simple terms. It puts security in a language that a business owner understands, and that’s always a useful exercise.

The post The answer’s in the question: risk assurance that’s ready-made for a regulator appeared first on BH Consulting.

Preventing ‘Unexpected Change Syndrome’ with Change Management

According to the Mayo Clinic, plaque in your arteries and inflammation are usually to blame for coronary artery disease. Left unchecked, plaque buildup narrows arteries, decreasing blood flow to your heart and eventually causing chest pain (angina) and other symptoms. Because this develops over decades, you might not notice a problem until you have a […]… Read More

The post Preventing ‘Unexpected Change Syndrome’ with Change Management appeared first on The State of Security.


What Does PwC’s Annual Corporate Directors Survey Tell Us About Cyber Risks?

PwC released its 2017 Annual Corporate Directors Survey at the end of last year, polling more than 850 board directors from a wide range of organizations across a dozen industries. Among the topics covered in the survey were the usual board-level concerns about executive compensation, diversity, shareholder activism and environmental, social and governance issues.

But there were also two key areas of interest for those concerned about cyber risks: strategy oversight and board oversight of IT and security. “Considering the pace of change, companies and boards need to be agile in addressing threats to executing their current strategy, as well as disruptions to their entire business model,” the survey stressed.

Do You Have Enough Cybersecurity Expertise?

Directors reported very high levels of financial expertise (85 percent), risk management expertise (65 percent) and industry expertise (62 percent) on their boards. When it comes to cybersecurity expertise, however, only 16 percent of companies report having enough. Thirty-nine percent of boards currently have some cybersecurity expertise in their ranks but admit to needing more — and one-third of boards currently have no cybersecurity expertise and are seeking it out.

Who is tasked with oversight? Exactly half of the boards have delegated that responsibility to the audit committee, while 30 percent of companies look at cybersecurity as a full-board issue. Another 16 percent have cybersecurity reviewed by a dedicated risk committee or an IT steering committee. When asked whether the board needs to allocate more time to specific topics, the top three items reported are cybersecurity (66 percent), strategic planning (64 percent) and IT and digital strategy (61 percent).

Board Oversight: IT and Security

Board directors report spending more time and attention on cybersecurity (with ample room for improvement). But are they happy with the information they are receiving? When asked to evaluate the presentation skills of various groups, chief information security officers (CISOs) came in last place, with only 19 percent rated as excellent.

Does the increased level of board engagement translate into breach readiness? While 42 percent of respondents reported being “very comfortable” that their company had “appropriately tested its resistance to cyberattacks,” another 45 percent were only moderately comfortable. Asked about whether the company had adequately tested its cyber incident response plan (CIRP), only 32 percent of respondents reported being very comfortable, 49 percent moderately comfortable and 19 percent clearly labeled their organization’s current efforts as “not sufficient.”

Board Oversight: Strategy

Overall, the board gives management high marks on involving the board on strategy development and communicating the strategy to board members — but the numbers point to a disconnect regarding the quality of the information provided. Twenty-two percent of directors said the quality of the information they received regarding emerging and disruptive technologies — and their impact on enterprise strategy — was “lacking.”

Similarly, 23 percent of boards were not happy with the quality of information shared regarding the strategic options that management considered but ultimately rejected.

Given that the role of the board is to contribute to strategy development; oversee management’s implementation of the chosen strategy; and monitor the alignment of risks, performance and strategy, directors want access to quality information to ensure they achieve organizational objectives. Directors are especially concerned that strategy will need to change in the coming years due to factors like the speed of technological change and cyberthreats.

The Trouble With ‘Don’t Have It, Don’t Need It’

Obviously, IT and cybersecurity aren’t the only concerns on board directors’ minds. However, it is troubling to see that 10 percent of respondents indicated they didn’t have any IT and digital expertise on the board — and didn’t need it. In the same vein, as many as 4 percent of respondents acknowledged that cybersecurity was currently receiving no board oversight at all.

The survey cautions boards to be adequately engaged with the oversight of cybersecurity, noting that cybersecurity is an issue that affects the entire company, calling it a “pervasive risk” that needs the attention of the full board. It also recommends that each director understand the level of preparation of the company to detect, respond to and recover from a cybersecurity event.

Board directors — all the way down to the CISO — should take these recommendations to heart.

Understanding the overall state of strategic oversight and board oversight of IT and security across a number of industries could help you identify where your organization’s focus should be.

Listen to the podcast: Take Back Control of Your Cybersecurity Now

The post What Does PwC’s Annual Corporate Directors Survey Tell Us About Cyber Risks? appeared first on Security Intelligence.

Charities guide to better cybersecurity in 10 steps

Charities in Ireland face an increase in cybersecurity threats. Cybercrime incidents are rising, and no-one is immune. Criminals have the means and the opportunity to target organisations for extortion, for financial gain, or to steal valuable data. As the rate of attacks rises, so too do the costs of recovery. Beyond financial losses, a security incident could harm a charity’s reputation or set back its ability to deliver services.

Charities also face the challenge of complying with the forthcoming EU General Data Protection Regulation (GDPR). That is why BH Consulting has prepared this free guide to better security. Suitable for large and small charitable and non-profit groups, it contains 10 high-level, practical steps to address their most important security concerns and protect valuable data.

1. Audit your information

Understand what information you store, and where you store it.

2. Define your organisational risk

This lets you prioritise what’s most important and protect it on that basis.

3. Think data, not devices

Build a plan that focuses on protecting information no matter what IT hardware it’s on. Use encryption to ensure your most important data is safe.

4. Back up data

Make regular copies of your information – ideally several times daily – and store them in a separate location.

5. Install security software

Protect your laptops, smartphones, tablets and servers with continually updated anti-malware software on every device.

6. Implement a firewall

This critical protection system guards against many common security threats – but it’s just one part of a good defence, not the only solution.

7. Patch regularly

Most attacks target existing weaknesses. Keep all IT hardware and software up to date – especially anti-malware software and firewalls, but also operating systems and apps.

8. Use strong passwords

Choosing a strong passphrase once is better than changing a bad one every 90 days. Use a password manager and enable two-factor authentication for important user accounts.

9. Conduct staff training

Awareness training for all staff keeps security top of everyone’s minds. Repeat regularly to foster positive security behaviour and culture, and include everyone in the organisation.

10. Manage user accounts

Configure your systems to prevent staff from accessing information if they don’t need it to do their work.

A charity’s information is valuable to criminals. More importantly, its donors and stakeholders have entrusted their data to it. That is why it is so important to protect this information. The 10 steps listed above are the first stage in improving protection controls. We also recommend that charities should prepare an incident response plan which they can implement if a data breach occurs.

More guidance is available from these resources:

Cyber Security: Small Business Guide

https://www.ncsc.gov.uk/blog-post/cyber-security-small-business-guide

Data security guidance from the Office of the Data Protection Commissioner

https://www.dataprotection.ie/docs/Data-security-guidance/1091.htm

Guidelines on how to respond to security breaches

https://cert.societegenerale.com/en/publications.html


The post Charities guide to better cybersecurity in 10 steps appeared first on BH Consulting.

Rising concerns about managing risk and proving compliance in the medical device industry

Perforce Software released the results of a global survey of medical device professionals. Key findings show increased concerns for mitigating risk and proving compliance during the development process. “Balancing compliance and risk management with fast-paced innovation is a tough challenge for medical device developers,” said Tim Russell, Chief Product Officer, Perforce. “This year’s survey results illustrate how well respondents are addressing the challenge.” Proving compliance and passing audits is critical in the medical device industry. … More

The post Rising concerns about managing risk and proving compliance in the medical device industry appeared first on Help Net Security.

Bumper to Bumper: Detecting and Mitigating DoS and DDoS Attacks on the Cloud, Part 2

This is the second installment in a two-part series about distributed denial-of-service (DDoS) attacks and mitigation on the cloud. Be sure to read part one for an overview of denial-of-service (DoS) and DDoS attack variants and the potential consequences for cloud service providers (CSPs) and their clients.

In the first installment of this series, we demonstrated how cybercriminals can circumvent DoS defenses by distributing their attacks across many sources. The three major types of DDoS variants are:

  • Volume-based attacks
  • Protocol attacks
  • Application-layer attacks

We can demonstrate how these attacks work in a simulated environment using Graphical Network Simulator-3 (GNS3), a network simulation tool.

To understand this, first let’s break down the network diagram below:

Figure 1: A corporate network configured with OSPF and BGP

The diagram shows a network of routers in which Open Shortest Path First (OSPF) handles routing within the company’s internal network, while Border Gateway Protocol (BGP) runs on the edge router that connects the internet service provider (ISP) to end users, clients and other network devices.

Now let’s examine how threat actors can exploit these systems to launch various types of DoS and DDoS attacks.

Volume-Based DDoS Attacks

Cybercriminals typically leverage tools such as Low Orbit Ion Cannon (LOIC) and Wireshark to facilitate volume-based attacks through techniques like Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) flooding. Let’s take a closer look at how these attacks work.

TCP Flooding

In a TCP flooding attack, threat actors generate a large quantity of traffic to block access to the end resource. The magnitude of this type of attack is commonly measured in bits or packets per second. The diagrams below show a TCP flood attack in which the File Transfer Protocol (FTP) service is flooded with huge volumes of TCP traffic, which eventually brings down the service.

Figure 2: A user connecting to an FTP server hosted on a corporate network

Figure 3: An attacker using bots to send malicious traffic to the target port using the LOIC tool

Figure 4: A client unable to access the FTP service after an attacker has flooded it with corrupt FTP packets
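Since the magnitude of a volumetric attack is expressed in bits or packets per second, it is worth seeing where those numbers come from. Below is a minimal Python sketch that derives both rates from a packet capture taken in a lab like the GNS3 setup above; the file name flood.pcap is illustrative, and the Scapy library is assumed to be installed.

    # Minimal sketch: derive flood magnitude from a lab packet capture.
    from scapy.all import rdpcap

    packets = rdpcap("flood.pcap")  # capture recorded during the simulation
    if len(packets) > 1:
        duration = float(packets[-1].time - packets[0].time) or 1.0
        total_bits = sum(len(bytes(p)) * 8 for p in packets)
        print(f"{len(packets) / duration:,.0f} packets/sec")
        print(f"{total_bits / duration / 1e6:,.1f} Mbit/sec")

Watching these two numbers during a test run makes it clear whether an attack is bandwidth-driven or packet-rate-driven.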

UDP Flooding

In a UDP flood, the attacker overwhelms the target network with packets sent to random UDP ports from a forged IP address. Forging the source address is easy in this type of attack because UDP does not require a three-way handshake to establish a connection. These requests force the host to look for applications listening on those random ports (which may or may not exist) and to flood the network with Internet Control Message Protocol (ICMP) destination-unreachable packets, crowding out legitimate requests.

There are other variations of UDP flooding, such as reflection and amplification attacks. In a reflection attack, a threat actor uses publicly available services, such as the Domain Name System (DNS), to attack the target network. An amplification attack, on the other hand, abuses a protocol to make the response much larger than the request. For example, an attacker might submit a single query for *.ibm.com to the DNS, which then returns a massive volume of information about the subdomains of ibm.com.

Figure 5 shows a similar attack using the Network Time Protocol (NTP). This protocol, which runs over UDP, enables network-connected devices to communicate and synchronize time information. An attacker can forge the source IP address and then use a publicly available NTP application to send queries to the target. Common tools used in this type of attack include Nmap, Metasploit and Wireshark.

Figure 5: An attacker using Nmap to discover hosted NTP servers

Figure 6: An attacker using Metasploit to determine that the target NTP server is vulnerable to a MOD6 UNSETTRAP distributed, reflected denial-of-service (DRDoS) attack, an amplification of 2X packets

In this case, the NTP server’s response packet would be twice the size of the request packet. By repeatedly sending spoofed requests, an attacker can flood the target network with a huge volume of responses.
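To make the amplification arithmetic concrete, here is a back-of-envelope calculation in Python. Only the 2X factor comes from the MOD6 UNSETTRAP case above; the query size and query rate are invented for illustration.

    # Back-of-envelope reflection/amplification math (illustrative figures).
    request_bytes = 48            # assumed size of one spoofed NTP query
    amplification = 2             # the 2X factor from Figure 6
    queries_per_second = 100_000  # assumed aggregate rate across the bots

    attacker_mbps = request_bytes * 8 * queries_per_second / 1e6
    victim_mbps = attacker_mbps * amplification
    print(f"attacker sends {attacker_mbps:.1f} Mbit/s of spoofed queries")
    print(f"victim receives {victim_mbps:.1f} Mbit/s of reflected replies")

Even a modest 2X factor doubles the attacker’s effective bandwidth, which is why protocols with far larger amplification factors are especially attractive to attackers.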

Protocol Attacks

In the scenario shown below, an attacker sends multiple SYN requests from several spoofed Internet Protocol (IP) addresses to a corporate network’s Secure Shell (SSH) jump server to disrupt the service. Tools such as Hping3 and Wireshark are commonly used in this type of attack.

Figure 7: A client (Ubuntu Machine) connecting to a company’s jump server (IP: 9.1.1.2) for remote administration

Figure 8: An attacker performing a protocol DDoS attack on a jump server (target IP: 9.1.1.2), preventing the client from accessing the jump server

Figure 9 shows a real-world exploit of a TCP SYN flood attack performed on a web application as part of a penetration testing (PT) engagement.


Figure 9: A web application becomes unresponsive after a TCP SYN flood attack
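On the defensive side, a common heuristic for spotting a SYN flood is a lopsided ratio of inbound SYNs to outbound SYN-ACKs. The Python sketch below illustrates the idea with Scapy; the interface name, sampling window and threshold are assumptions for a lab setup, and a production detector would need far more care.

    # Teaching-aid sketch: flag a possible SYN flood via the SYN:SYN-ACK ratio.
    from scapy.all import sniff, TCP

    counts = {"syn": 0, "synack": 0}

    def tally(pkt):
        if TCP not in pkt:
            return
        if pkt[TCP].flags == "S":     # inbound connection attempts
            counts["syn"] += 1
        elif pkt[TCP].flags == "SA":  # the server's half of the handshake
            counts["synack"] += 1

    sniff(filter="tcp", prn=tally, timeout=10, iface="eth0")  # assumed iface
    ratio = counts["syn"] / max(counts["synack"], 1)
    print(f"SYN to SYN-ACK ratio over 10s: {ratio:.1f}")
    if ratio > 3:  # assumed threshold -- tune against your own baseline
        print("possible SYN flood in progress")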

Application-Layer Attacks

In addition to volume-based and protocol attacks, cybercriminals can also launch DDoS campaigns by targeting the application layer. Below are some variations of this attack type.

Slowloris

Slowloris is a prominent attack in which the connection is never idle but, as the name suggests, very slow. The client sends partial requests and trickles in just enough additional data to keep its connections open indefinitely. As a result, the server’s connection pool fills up and it cannot process any new connections. Threat actors typically use Slowhttptest and Wireshark to facilitate this attack.

Figure 10: A client accessing a web server hosted on a company’s cloud network

Figure 11: A legitimate user unable to access a webpage due to a Slowloris attack

Shown below is a real-world exploit of Slowloris performed on a web application as part of a penetration testing exercise.


Figure 12: A web application becomes unresponsive after a Slowloris attack
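The standard countermeasure to Slowloris is to cap how long a client may take to complete its request. The asyncio sketch below illustrates the principle with an assumed 10-second header deadline on port 8080; in practice you would set the equivalent read or header timeout in your web server’s configuration rather than hand-roll a server.

    # Sketch of the usual Slowloris countermeasure: a header-read deadline.
    import asyncio

    HEADER_DEADLINE = 10  # seconds a client gets to finish its headers (assumed)

    async def handle(reader, writer):
        try:
            # A slow-drip client never sends the blank line that ends the
            # headers, so wait_for() expires and the connection slot is freed.
            await asyncio.wait_for(reader.readuntil(b"\r\n\r\n"), HEADER_DEADLINE)
            writer.write(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
            await writer.drain()
        except (asyncio.TimeoutError, asyncio.IncompleteReadError,
                asyncio.LimitOverrunError):
            pass  # drop slow or malformed clients instead of holding the socket
        finally:
            writer.close()

    async def main():
        server = await asyncio.start_server(handle, "0.0.0.0", 8080)
        async with server:
            await server.serve_forever()

    asyncio.run(main())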

HTTP Flood

In an HTTP flood DDoS attack, the attacker sends seemingly legitimate HTTP GET/POST requests to overwhelm a web server or application. Instead of relying on forged IP addresses, this attack leverages botnets, so it requires comparatively little bandwidth. An HTTP flood is most effective when it forces the server or application to allocate the maximum possible resources in response to every single request.

Shown here is a real-world HTTP flood attack performed using a Session Initiation Protocol (SIP) INVITE message flood on port 5060, rendering the phone unresponsive.


Figure 13: An attacker performing a SIP INVITE flood attack on an IP phone


Figure 14: The IP phone becomes unresponsive after the attack

DDoS Mitigation on the Cloud

To mitigate DDoS attacks on the cloud, security teams must establish a secure perimeter around the cloud infrastructure and allow or drop packets based on specified rules. Below are some key steps organizations can take to harden their security environments to withstand DDoS attempts.

Next-Generation Firewalls

A next-generation firewall is capable of performing intrusion prevention and inline deep packet inspection. It can also detect and block sophisticated attacks, including DDoS, by enforcing security policies at the application, network and session layers. Next-generation firewalls give security teams granular control to define custom security rules pertaining to network traffic. They also provide myriad security features, such as secure sockets layer (SSL) inspection, web filtering and zero-day attack protection.

Content Delivery Network

A content delivery network (CDN) is a geographically distributed network of proxy servers and their data centers that accelerates the delivery of web content and rich media to users. Although CDNs are not built specifically for DDoS mitigation, they can deflect network-layer threats and absorb application-layer attacks at the network edge. This massive scaling capacity makes a CDN a strong defense against volume-based and protocol DDoS attacks.

DDoS Traffic Scrubbing

A DDoS traffic scrubbing service is a dedicated mitigation platform operated by a third-party vendor. This vendor analyzes incoming traffic to detect and eliminate threats with the least possible downtime for the target network. When a DDoS attack is detected, all incoming traffic to the target network is rerouted to one or more of the globally distributed scrubbing data centers. Malicious traffic is then scrubbed and the remaining clean traffic is redirected to the target network.

Anomaly Detection

An anomaly, such as an unusually high volume of traffic from different IP addresses for the same application, should trigger an alarm. But anomaly detection is not quite that simple, since attackers often craft packets to mimic real user transactions. Detection tools must therefore rely on statistical models of normal behavior rather than simple thresholds. This approach works well for both application-layer and protocol attacks.
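As a toy illustration of that statistical approach, the sketch below flags a sample whose request rate sits far outside a recent baseline. The history, current value and three-sigma threshold are all invented for the example.

    # Toy anomaly detector: flag a request rate far outside the baseline.
    from statistics import mean, stdev

    baseline = [1210, 1180, 1250, 1195, 1230, 1205, 1240, 1215]  # req/s history
    current = 9800                                               # latest sample

    mu, sigma = mean(baseline), stdev(baseline)
    z = (current - mu) / sigma
    if z > 3:  # assumed three-sigma alarm threshold
        print(f"anomaly: {current} req/s is {z:.1f} sigma above the baseline")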

Source Rate Limiting

As the name suggests, source rate limiting blocks excess traffic based on the source IP address from which the attack originates. It is mainly used to contain volume-based attacks by configuring thresholds and customizing the response when an attack occurs. Source rate limiting provides granular insight into particular websites or applications. The drawback is that this method only works for nonspoofed attacks.
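A common way to implement source rate limiting is a token bucket per source IP, as in the hedged sketch below (the rate and burst figures are arbitrary). Keying the buckets by protocol instead of source address gives you the protocol rate limiting described next.

    # Sketch: token-bucket rate limiting keyed by source IP.
    import time
    from collections import defaultdict

    RATE, BURST = 100.0, 200.0  # assumed tokens/sec and bucket depth per source

    buckets = defaultdict(lambda: {"tokens": BURST, "stamp": time.monotonic()})

    def allow(src_ip: str) -> bool:
        b = buckets[src_ip]
        now = time.monotonic()
        b["tokens"] = min(BURST, b["tokens"] + (now - b["stamp"]) * RATE)
        b["stamp"] = now
        if b["tokens"] >= 1:
            b["tokens"] -= 1
            return True
        return False  # excess traffic from this source gets dropped

    # called once per packet or request, e.g. allow("203.0.113.7")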

Protocol Rate Limiting

This technique blocks suspicious protocols from any source. For example, Internet Control Message Protocol (ICMP) traffic can be dropped above a fixed rate — say, 5 megabits per second (Mbps) — blocking bad traffic while allowing legitimate traffic through. Protocol rate limiting works well for volume-based attacks, but its limitation is that some legitimate traffic will occasionally be dropped, requiring security teams to manually analyze logs.

Cloud Security Is More Crucial Than Ever

With more and more applications now migrating to the cloud, it is more crucial than ever to secure cloud infrastructure and the applications hosted therein. The DDoS attacks described above can put CSPs and their clients at great risk of data compromise. By employing various defense mechanisms, such as advanced firewalls, traffic scrubbing and anomaly detection, organizations can take major steps toward securing their cloud environments from DDoS attacks.

The post Bumper to Bumper: Detecting and Mitigating DoS and DDoS Attacks on the Cloud, Part 2 appeared first on Security Intelligence.

Forget C-I-A, Availability Is King

In the traditional parlance of infosec, we've been taught repeatedly that the C-I-A triad (confidentiality, integrity, availability) must be balanced in accordance with the needs of the business. This concept is foundational to all of infosec, ensconced in standards and certification exams and policies. Yet, today, it's essentially wrong, and moreover isn't a helpful starting point for a security discussion.

The simple fact is this: availability is king, while confidentiality and integrity are secondary considerations that rarely have a default predisposition. We've reached this point thanks in large part to the cloud and the advent of utility computing. That is, we've reached a point where we assume uptime and availability will always be optimal, and thus we don't need to think about it much, if at all. And, when we do think about it, it falls under the domain of site reliability engineering (SRE) rather than being a security function. And that's a good thing!

If you remove availability from the C-I-A triad, you're then left with confidentiality and integrity, which can be boiled down to two main questions:
1) What are the data protection requirements for each dataset?
2) What are the anti-corruption requirements for each dataset and environment?

In the first case you quickly go down the data governance path (inclusive of data security), which must factor in requirements for control, retention, protection (including encryption), and masking/redaction, to name a few things. From a "big picture" standpoint, we can then view data protection more clearly through an inforisk lens, which in turn makes it much easier to drill down in a quantitative risk analysis process and evaluate the overall exposure to the business.

As for anti-corruption (integrity) requirements, this is where we can see traditional security practices entering the picture, such as through ensuring systems are reasonably hardened against compromise, as well as appsec testing (to protect the app), but then also dovetailing back into data governance considerations to determine the potential impact of data corruption on the business (whether that be fraudulent orders/transactions; or, tampering with data, like a student changing grades or an employee changing pay rates; or, even data corruption in the form of injection attacks).

What's particularly interesting about integrity is applying it to cloud-based systems and viewing it through a cost control lens. Consider, if you will, a cloud resource being compromised in order to run cryptocurrency mining. That's a violation of system integrity, which in turn may translate into sizable opex burn due to unexpected resource utilization. This example, of course, once again highlights how you can view things through a quantitative risk assessment perspective, too.
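As a quick worked example of that quantitative lens (every figure below is invented for illustration), the opex burn from a cryptomining compromise can be estimated and annualized like so:

    # Worked example: opex burn from cryptojacking, then annualized loss.
    hourly_rate = 3.06         # assumed cost of one hijacked instance, $/hour
    instances, hours = 40, 72  # assumed scale and dwell time before detection

    single_incident_burn = hourly_rate * instances * hours
    incidents_per_year = 0.5   # assumed annual rate of occurrence (ARO)
    annualized_loss = single_incident_burn * incidents_per_year  # ALE = SLE * ARO

    print(f"single-incident burn: ${single_incident_burn:,.0f}")
    print(f"annualized loss expectancy: ${annualized_loss:,.0f}")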

At the end of the day, C-I-A are still useful concepts, but we're beyond the point of thinking about them in balance. In a utility compute model, availability is assumed to approach 100%, which means it can largely be left to operations teams to own and manage. Even considerations like DDoS mitigations frequently fall to ops teams these days, rather than security. Making the shift here then allows one to more easily talk about inforisk assessment and management within each particular vertical (confidentiality and integrity), and in so doing makes it much easier to apply quantitative risk analysis, which in turn makes it much easier to articulate business exposure to executives in order to more clearly manage the risk portfolio.

(PS: Yes, I realize business continuity is often lumped under infosec, but I would challenge people to think about this differently. In many cases, business continuity is a standalone entity that blends together a number of different areas. The overarching point here is that the traditional status quo is a failed model. We must start doing things differently, which means flipping things around to identify better approaches. SRE is a perfect example of what happens when you move to a utility computing model and then apply systems and software engineering principles. We should be looking at other ways to change our perspective rather than continuing to do the same old broken things.)

Achieving Cloud Security Through Gray Skies

Nearly a year ago, Judith Hurwitz, president and CEO of Hurwitz & Associates, made a cloud security prediction.

“Things will only get more challenging as businesses continue to move to multi-cloud environments,” said Hurwitz. “Businesses need the ability to manage a collection of different cloud-based services as a single unified environment.”

Despite the tentative position many companies took about transitioning applications, most organizations have gotten on board with embracing cloud computing — and what many are discovering is that they need more than one cloud.

“To further complicate this situation, many organizations faced with deciding where best to run their applications and store their data are now debating whether to work with a single CSP [cloud service provider] or to spread their workloads across multiple clouds,” said Peter Galvin, vice president of strategy at Thales eSecurity, to SC Media UK. “It’s not uncommon, for example, for medium and large enterprises to run SaaS [software-as-a-service], PaaS [platform-as-a-service] and IaaS [infrastructure-as-a-service] with different providers, in parallel with their own on-premise systems.”

As CSO pointed out, these hybrid and multi-cloud environments are often rife with risk, particularly because of poor visibility and lack of coordination.

The Roots of Compromised Records

Of all the compromised records tracked by X-Force in 2017, more than 2 billion were exposed because of misconfigured cloud servers, network-based backup incidents or other improperly configured systems. Many organizations lack a centralized view of all workloads across all of their environments — so it’s a challenge to manage and enforce security policies effectively.

Visibility is compromised when data is moved over to the cloud at a rapid pace. The increased workload creates a growing amount of unmanaged information security risk.

According to a 2017 report from RightScale, the percentage of enterprises using multiple clouds grew to a large majority (85 percent). The report also reflects an increase in the number of enterprises planning for multiple public clouds (up from 16 percent to 20 percent). All signs indicate that skies are getting cloudier — which makes multi-cloud management seem hazier.

It’s no surprise that 39 percent of those who participated in the 2017 Fugue survey, State of Cloud Infrastructure Operations, reported that security compliance slows them down. Trying to implement a comprehensive management platform manually is complicated by the many components of on-premises systems, public cloud services, data services, software services, security components, networks and other connected devices.

Another security risk comes from exposed or poorly managed application programming interfaces (APIs), said Robin Schmitt, general manager for APAC at Neustar, in DatacenterDynamics. “Exposed APIs can leave enterprises vulnerable to breaches as they open the floodgate to DoS [denial-of-service]/DDoS [distributed denial-of-service] attacks. Consequently, poor management of multiple API networks on multiple clouds exponentially increases the risk of cyberattacks for businesses.”

Let the Next Generation Shine

Security is the top challenge related to managing multi-cloud environments. IBM and IDG research showed that the majority of organizations (77 percent) now view security through a different lens. A management platform that incorporates cognitive computing creates a framework that continues to learn and change as the overall environment evolves.

“Organisations operating in a multi-cloud environment will derive the most benefit from a consistent, integrated solution that will offer comprehensive data security along with the ability to effectively manage encryption keys across a range of diverse environments,” said Galvin.

Multi-cloud environments demand a multi-layered approach, which can very easily start to consume and constrain in-house IT resources. “Current policies that specify using a particular encryption technology or network security technology won’t fly” in a decentralized, multi-cloud environment, said Nataraj Nagaratnam, engineer, CTO and director of cloud security at IBM.

Fortunately, technology innovators continue to develop tools to help customers meet the security challenges in multi-cloud. One example is the IBM Cloud Private platform, according to ZDNet, which includes the Cloud Automation Manager that scans applications and helps deploy them either on-premises or in a cloud.

One last key consideration when trying to determine the right security solutions for your multi-cloud environment is interoperability. Software-defined networks — along with multi-cloud data encryption and other next-generation technologies that defend across platforms — are layers that you can add on when designing a multi-cloud security strategy. Also, a cloud integration platform provides a single control point for several different technologies, including API management and secure gateway.

A business can certainly benefit from sharing security responsibility via a multiple-cloud-vendor relationship. However, it is critical you carefully evaluate third-party vendors. Everyone wants their tech to be agile and user-friendly — but no one will be able to get anything accomplished if your security is compromised.

Listen to the podcast: Cloud Data Security Trends, Challenges and Best Practices


The post Achieving Cloud Security Through Gray Skies appeared first on Security Intelligence.

Navigating the Five Frequent Hazards of IT Security

Whenever there’s a data breach, it’s easy to get caught up in the root cause analysis – a misconfigured device, an unpatched application, an employee falling for a phishing attack…

The post Navigating the Five Frequent Hazards of IT Security appeared first on The Cyber Security Place.

The Cherry on Top: Add Value to Existing Risk Management Activities With Open Source Tools

Telling people about the virtues of open source security tools is like selling people on ice cream sundaes: It doesn’t take much of a sales pitch — and most people are convinced before you start.

It’s probably not surprising that most security professionals are already using open source solutions to put a cherry on top of their existing security infrastructure. From Wireshark to OpenVAS and Kali Linux, open source software is a key component in many security practitioners’ arsenal.

But despite the popularity of open source tools for technical tasks, practitioners often view risk management and compliance initiatives as outside the purview of open source. There are a few reasons for this. Open source projects that directly support these efforts are harder to come by, and there’s often less urgency to implement them compared to technical solutions that directly address security issues, for example. Although open source solutions aren’t always top of mind when it comes to these broader efforts, they can help IT teams maximize the value of their risk management frameworks and boost the organization’s overall security posture.

5 Ways to Supplement Your Risk Strategy Using Open Source Software

Two years ago, we compiled a list of free and open source tools to help organizations build out a systemic risk management strategy. Much has changed in the cybersecurity world since then, so let’s take a look at some additional tools that can help you cover more ground, drive efficiency and add value to your existing risk management strategy.

1. Threats and Situational Awareness

All security practitioners know risk is a function of the likelihood of a security incident and its potential impact should it come to pass. To understand these variables, it’s crucial to examine the threat environment, including what tools attackers use, their motivations and methods of operation, through formalized threat modeling.

When it comes to application security, threat modeling enables practitioners to unpack software and examine it from an attacker’s point of view. Tools like OWASP’s Threat Dragon and HTML5-based SeaSponge help security teams visually model app-level threats. If used creatively, they can be extended to model attack scenarios that apply to data and business processes as well. Security teams can also incorporate threat modeling directly into governance, risk management and compliance (GRC) processes to inform assessment and mitigation strategies.

2. Workflow Automation

Logistical and organizational considerations can have a significant impact on risk management. The process has a defined order of operations that might span long periods of time, and it takes discipline to see it through. For example, risk assessment should occur before risk treatment, and treatment should be completed before monitoring efforts begin.

It’s also important to account for interdependencies. It might be more effective to assess a RESTful front-end application before the back-end system it interfaces with, for instance. It’s all about timing: If you assess the risk associated with an application or business process today, what will happen a year from now when the business process has evolved? What if the underlying technology shifts or the business starts serving new customers? What about five years from now?

Process automation tools, such as ProcessMaker and Bonita, can help security teams support both of these aspects of the risk management process. These are not necessarily security solutions, but tools designed to build and automate workflows. In a security context, they can help analysts automate everything from policy approval to patch management. For risk management specifically, these tools help security teams ensure processes are followed correctly, and risks are reassessed after they’ve been completed.

3. Automated Validation

The process of implementing a risk mitigation strategy has two steps: The first is to select a countermeasure or control to address a certain risk. The second is to validate the effectiveness of that countermeasure. It can be extremely time-intensive to execute the second part of the process consistently.

The Security Content Automation Protocol (SCAP) can help security leaders ensure the validation step is performed consistently and completely. Specified in National Institute of Standards and Technology (NIST) Special Publication 800-126, SCAP enables analysts to define vendor-agnostic security profiles for devices and automatically validate their configuration.

One benefit of using SCAP for validation is the degree of support in the security marketplace. Most vulnerability assessment tools natively support it, as do a number of endpoint management products. By employing tools such as those available from the OpenSCAP project, security teams can derive value today and in the future as security toolsets evolve.
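As an illustration, the validation step can be scripted so it runs identically every time. The sketch below shells out to the oscap command-line tool from the OpenSCAP project; the profile ID and datastream path are assumptions and should be swapped for the SCAP content shipped for your platform.

    # Sketch: run an OpenSCAP evaluation the same way on every host.
    import subprocess

    result = subprocess.run(
        [
            "oscap", "xccdf", "eval",
            "--profile", "xccdf_org.ssgproject.content_profile_standard",  # assumed
            "--results", "results.xml",
            "--report", "report.html",
            "/usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml",  # assumed path
        ],
        capture_output=True, text=True,
    )
    # oscap exits 0 when all rules pass and 2 when some fail; both mean the
    # scan itself ran, so anything else is an error worth investigating.
    if result.returncode not in (0, 2):
        raise RuntimeError(result.stderr)
    print(result.stdout)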

4. Documentation and Recordkeeping

Risk decisions made today are much more defensible in hindsight when there’s documentation to support them. Let’s say, for example, that you’ve evaluated a countermeasure and decided that the cost to implement it outweighs the risk — or that your organization has reviewed a risk and decided to accept it. Should the unthinkable happen (e.g., a data breach), it’s much easier to justify your decisions when there’s documentation supporting your analysis and the conclusions you’ve drawn. While any record-keeping tool can help you do this, a specialized solution, such as the community version of GRC Envelop, can add value because it was developed with risk activities in mind.

5. Metrics and Reporting

Finally, open source tools can support ongoing metrics gathering and risk reporting. There are numerous aspects of a risk program that are worth measuring, such as near-miss information (i.e., attacks that were stopped before causing damage), log data, mitigated incidents, risk assessment results, automated vulnerability scanning data and more.

Tools like Graphite are purpose-built to store time-series data — numeric values that change over time. Collecting and storing this data enables analysts to report on the risk associated with those assets. The more frequently they collect it, the closer they can get to producing a continuous view of the organization’s risk profile.
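Feeding Graphite is simple enough to sketch: its plaintext protocol accepts one "path value timestamp" line per datapoint, conventionally on TCP port 2003. The hostname and metric path below are placeholders.

    # Sketch: push one security metric into Graphite's plaintext listener.
    import socket
    import time

    def send_metric(path, value, host="graphite.internal", port=2003):
        line = f"{path} {value} {int(time.time())}\n"
        with socket.create_connection((host, port), timeout=5) as sock:
            sock.sendall(line.encode("ascii"))

    # e.g. record today's count of attacks stopped before causing damage
    send_metric("security.risk.near_misses", 17)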

The Cherry on Top of Your Risk Management Strategy

As we’ve shown, there are quite a few open source alternatives out there that can add value to the risk management activities you may already be performing. By choosing the right tools to supplement your strategy, you can drive efficiency with your risk efforts, realize more valuable outcomes and improve your organization’s overall risk posture today and in the future.

Listen to the podcast series: Take Back Control of Your Cybersecurity Now

The post The Cherry on Top: Add Value to Existing Risk Management Activities With Open Source Tools appeared first on Security Intelligence.

Playing It Smart for Data Controllers and Processors

Lots of people have been asking me lately about managing vendor relationships with General Data Protection Regulation (GDPR) in the mix. I tell them to think about watching a group of kids playing a board game where the kids have come up with their own rules. Those kids are having a great time until an adult steps in and tells them they need to play by the rules that came in the box.

At first, there’s going to be lots of frustration. Some kids may throw tantrums and leave the game, but others will decide to figure out how the rules change things. In the end, there’s a good chance they agree that the new rules actually make the game more fun. Even the kids who walked away are likely to end up rejoining. And then they all live happily ever after.

Of course, in real life, nothing is ever that simple.

So, when it comes to GDPR, how do we change the game midplay and deal with new obligations regarding controller and processor governance?

Defining the Roles of Data Controllers and Processors

Let’s start by defining what GDPR means when it refers to data controllers and processors:

  • A controller is “the natural or legal person, public authority, agency or other body which, alone or jointly with others, determines the purposes and means of the processing of personal data.”
  • A processor is “a natural or legal person, public authority, agency or other body which processes personal data on behalf of the controller.”

For example, a department store (controller) collects the data of its customers when they make purchases, while another organization (processor) stores, digitizes and catalogs all the information produced by the department store. The processors can be data centers or document management companies, but both the controller and the processor are responsible for handling the personal data of the store’s customers.

In the pre-GDPR world, European Union (EU) data protection responsibilities were outlined in the Data Protection Directive (officially Directive 95/46/EC), and controllers were responsible for compliance. Now, however, both data controllers and data processors have a shared level of responsibility and duty. They need to be in sync about what personal data is transferred and how it’s transferred, processed (respective of all active data subject rights and choices) and reported (providing accountability, access control and incident breach reporting). That means managing your entire data supply chain, regardless of where in the world the data processors and other parties are located. And there’s one more thing: Now you’ll likely need to have a contract in place unless the controller and processor are part of the same organization.


Developing a GDPR Governance Plan

Given all these changes, here are some things to keep in mind when you’re deciding how to develop your controller and processor governance plan.

Cover the Basics

Start with some basic rules. In general, you should consider three stages for your vendor compliance program: contractual readiness, ongoing governance, and compliance and audit. Contractual readiness entails enhancing your contractual relationship with your vendor in accordance with the tougher provisions of the GDPR statutes. Ongoing governance means enhancing your vendor prequalification and onboarding programs. And when it comes to compliance and auditing, consider which tools and processes you’ll need to implement to ensure your vendors are meeting their obligations and providing audit trails where necessary.

Classify Vendors by Risk Level

Take a time-out to create buckets so you can classify your vendors by category and potential level of risk. You might want to consider what kind of data they collect, manage or process, the type and volume of processing involved and where that data is going to end up. It’s possible that you’ll need to take these steps more than once for a single vendor. For example, you could have multiple types of relationships with the same vendor, who might be providing both support and development — or even hosting.
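To illustrate the shape of the exercise, here is a toy scoring sketch; the factors, weights and cut-offs are invented, and a real program would take them from your own risk methodology.

    # Toy sketch: bucket a vendor relationship into a risk tier.
    def risk_tier(data_sensitivity, volume, cross_border):
        # each factor scored 1 (low) to 3 (high); cross-border adds a surcharge
        score = data_sensitivity * 2 + volume + (2 if cross_border else 0)
        if score >= 7:
            return "high"    # e.g. sensitive data, in bulk, leaving the EU
        if score >= 4:
            return "medium"
        return "low"

    # score each relationship separately -- one vendor may appear several times
    print(risk_tier(data_sensitivity=3, volume=2, cross_border=True))   # high
    print(risk_tier(data_sensitivity=1, volume=1, cross_border=False))  # low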

Contact Your Vendors

Decide how you plan to contact vendors and which contracts you’re likely to put in place. If you’re a processor, you may want to reach out to your data controllers to get the ball rolling if they haven’t already done so.

Define Your TOMs

Bring your buddy TOM into the game. As you may recall from our earlier blog posts, TOM stands for technical and organizational measures. These are the uniform practices and standards you should require your vendors to adopt. If you’re a controller, be sure to have your TOMs well-defined and identify the specific controls you need. And if you’re a processor, take the time to assess your current contracts and any existing controls that could help meet these new obligations. It’s all good preparation for vendor discussions.

Communicate

If you’re a controller, you should create a formal communication plan for contacting vendors. For example, you could launch a mass mailing that sets out the new contract, terms and TOMs. Then confirm that the documents were received by the right individual and track your progress until you’ve succeeded in reaching everyone on your list.

Processors should review the new controls with everyone on their team, including the appropriate IT and security people. You should also perform a gap analysis and create an implementation plan that includes monitoring and reporting. Anticipate — and prepare for — controller audits and complaints from controllers or the Data Protection Authority (the GDPR-designated regulator).

Educate

As a controller, be aware that your smaller vendors may not even know about GDPR — so don’t assume otherwise. Consider providing some of the same training that you gave your employees. You’ll also want to educate your vendor management team, along with your marketing, human resources and product development groups. They often have greater insight into the nature of the supplier relationships, the types of data being handled and the relative maturity of the vendor. And that can help determine how you work with your vendors and handle any issues that may arise.

Document Your Moves

Create a central supplier lookup repository to help you gain visibility into your vendors. It should provide details about who has been notified, who has signed a contract, who has completed education and who has been audited. It should also include information about any exceptions that need to be addressed. And if you’re a controller, your repository should provide a link into your data mapping repository, since Article 30 of GDPR requires that you identify both the processors who work with personal data and any additional vendors that may be involved.

Stay in the Game

If you’re a controller, you should specify the types of audits you’re planning to conduct, how frequently you plan to conduct them and how you’re going to track your progress. And you need to figure out how you’ll deal with vendors who aren’t meeting their obligations. Meanwhile, if you’re a processor, you need to determine who’s going to handle your audits.

In short, vendor management requires an ongoing governance and compliance program.


Notice: Clients are responsible for ensuring their own compliance with various laws and regulations, including GDPR. IBM does not provide legal advice and does not represent or warrant that its services or products will ensure that clients are in compliance with any law or regulation. Learn more about IBM’s own GDPR readiness journey and our GDPR capabilities and offerings to support your compliance journey here.

The post Playing It Smart for Data Controllers and Processors appeared first on Security Intelligence.

Opinion: Don’t Be Blinded by APTs

In this industry perspective, Thomas Hofmann of Flashpoint says that sensational coverage of advanced persistent threat (APT) actors does little to help small and mid-sized firms defend their IT environments from more common threats like cybercriminals. The key to getting cyber defense right is understanding the risks to your firm and...

Read the whole entry... »

Related Stories

Ransomware reminders force focus on prevention and planning

Ransomware reared its ugly head again recently, with some stark reminders that it’s still a serious business risk. A household name suffered what seemed a major infection, while it emerged that many victims never get their data back.

Last week, Boeing narrowly avoided a tailspin after a senior engineer alerted colleagues to a WannaCry infection. It appeared to threaten vital aircraft production systems, though after an investigation, Boeing described it as a “limited intrusion”.

Financial impact of ransomware

Boeing’s experience shows that companies face a financial impact beyond paying a ransom if criminals encrypt their data. Ransomware infections can also cause huge disruption as IT teams scramble to lock down the source and prevent further spread. At the time of writing, the city of Atlanta, Georgia, was still restoring systems 10 days after a SamSam ransomware attack. The incident reportedly affected at least five municipal departments, disabling some city services and forcing others to revert to paper records.

According to SANS, in the past six months at least three other US companies suffered work stoppages due to WannaCry infections. Last year, more than 80 organisations in the UK National Health Service shut down their computers. All told, WannaCry led to 20,000 cancelled appointments, 600 GP surgeries using pen and paper, and five hospitals diverting ambulances.

Criminals don’t give money-back guarantees

Facing similar scenarios, many organisations might choose to pay up rather than risk prolonged disruption, lost revenue or angry customers. But recent surveys might cause them to pause before parting with their cash. A report from CyberEdge found that 51 per cent of ransomware victims who paid the ransom never got their files back. A separate study from SentinelOne had similarly depressing news. It found that 45 per cent of US companies infected last year paid at least one ransom, but only 26 per cent of them had their files unlocked afterwards.

BH Consulting advises victims not to pay the ransom. As the surveys above tell us, payment is no guarantee of recovering files. “Criminals prove to be untrustworthy” was The Register’s snarky but accurate take on the story. Paying also signals to criminals that a business is an easy mark: TechRepublic noted that 73 per cent of organisations that paid the ransom were targeted and attacked again.

Take preventative steps

The key with ransomware is to prevent it before it spreads. Last year, BH Consulting published a guide to preventing ransomware infections just as some of the biggest outbreaks took hold. The document includes technical and business-process steps to avoid further infection. Given the latest developments, now seems like a good time to revisit those recommendations. They include:

  • Review and regularly test backup processes – still the most effective way to recover
  • Establish a baseline of normal network behaviour – unusual activity will be easier to spot
  • Segment your network – this will limit the ability of worms and other infections to spread
  • Implement ad blocking – to stop compromised adverts from delivering malware
  • Review security of mobile devices – because ransomware is migrating to mobiles

You can download the free guide here. Another useful resource is the NoMoreRansom initiative, which is a partnership between law enforcement and industry. It provides free tools to decrypt many common types of ransomware. BH Consulting is among the partners from across the private and public sectors.

Let’s wrap up with some encouraging news. The CyberEdge report found that just 13 per cent of companies that refused to pay lost their files. In other words, 87 per cent subsequently recovered their data. It bears repeating: prevention, not payment, is a better way to keep ransomware out of your business.

The post Ransomware reminders force focus on prevention and planning appeared first on BH Consulting.

Ground Control to Major Thom

I recently finished a book called “Into the Black” by Roland White, charting the birth of the space shuttle from the beginnings of the space race through to its untimely retirement. It is a fascinating account of why “space is hard” and exemplifies the need for compromise and balance of risks in even the harshest … Read More

FINRA: Cyber Security Still a Major Threat to Broker-Dealers

Latest FINRA Examination Findings Reveal That Firms Have Made Progress with Cyber Security, but Problems Remain Cyber security remains “one of the principal operational risks facing broker-dealers,” according to the FINRA 2017 Examination Findings Report, and while progress has been made, many broker-dealer firms still have work to do to protect themselves against hackers. Firms… Read More


Science of CyberSecurity: Thoughts on the current state of Cyber Security

As part of a profile interview for Science of Cybersecurity, I was asked five questions on cyber security last week; here's question 1 of 5.


Q. What are your thoughts on the current state of cybersecurity, both for organizations and for consumers?
Thanks to regular sensational media hacking headlines, most organisational leaders are worried about their organisation’s cyber security posture, but they often lack the appropriate expert support to help them properly understand their organisation’s cyber risk. To address the concern, they often resort to an ‘off the peg’, industry best practice, check-box approach. However, this one-size-fits-all strategy is far from cost effective and provides only limited assurance against modern cyber attacks, given that every organisation is unique and cyber threat adversaries continually evolve their tactics and methodologies.

In these times of limited cyber security budgets, it is important for the cyber security effort to be prioritised and targeted. To achieve this, the cyber security strategy should be born out of threat intelligence, threat assessment and cyber risk assessment. This gives organisational leaders the information to make effective cyber security strategy decisions, and to allocate funding and resources based on a subject they do understand well: business risk.

Nothing can ever be 100% safeguarded. Cyber security is, and always should be, a continual risk-based undertaking, requiring an organisation-tailored cyber security strategy that is properly understood and led from the very top of the organisation. This is what it takes to stay ahead in the cyber security game.

Security is Not, and Should not be Treated as, a Special Flower

My normal Wednesday lunch yesterday was rudely interrupted by my adequate friend and reasonable security advocate Javvad calling me to ask my opinion on something. This in itself was surprising enough, but the fact that I immediately gave a strong and impassioned response told me this might be something I needed to explore further… The UK … Read More

Making the world angrier, one process at a time

I have recently set up Family Sharing on my iOS devices so that I can monitor and control what apps go on my kids’ devices without having to be in the room with them. Previously they would ask for an app, and I would type in my AppleID password, and that was that. Unfortunately with … Read More