Modern organizations are no longer governed by fixed perimeters. In fact, the perimeter-based security model is disintegrating in a world where users work on their own devices from anywhere, and sensitive company data is stored in multiple cloud services. Organizations can no longer rely on binary security models that focus on letting good guys in and keeping bad guys out. Their big challenge is figuring out how to give users the access they need while … More
Pradeo Secure Private Store facilitates and expands safe BYOD usage
Pradeo launched a unique Secure Private Store solution that allows organizations to distribute mobile services (public and private apps, documents) to their collaborators, who can use them freely on the condition that their device does not carry any threat.

Elastic blends SIEM and endpoint security into a single solution for real-time threat response
Elastic Endpoint Security is based on Elastic’s acquisition of Endgame. Now, when … More
The post New infosec products of the week: October 18, 2019 appeared first on Help Net Security.
A record 61 percent of enterprises worldwide are on the path to becoming “intelligent,” compared to only 49 percent in 2018. The Zebra Technologies Corporation global survey analyzes the extent to which companies connect the physical and digital worlds to drive innovation through real-time guidance, data-powered environments and collaborative mobile workflows. Their “Intelligent Enterprise” Index scores are calculated using 11 criteria that include Internet of Things (IoT) vision, adoption, data management, intelligent analysis and more. … More
The post Security still top priority as more enterprises scale IoT solutions company-wide appeared first on Help Net Security.
Car manufacturers offer more software features to consumers than ever before, and increasingly popular autonomous vehicles that require integrated software introduce security vulnerabilities. Widespread cloud connectivity and wireless technologies enhance vehicle functionality, safety, and reliability, but expose cars to hacking exploits. In addition, the pressure to deliver products as fast as possible puts a big strain on the security of cars, manufacturing facilities, and automotive data, an IntSights report reveals. “The automotive manufacturing industry … More
The post As car manufacturers focus on connectivity, hackers begin to exploit flaws appeared first on Help Net Security.
Data is the most valuable asset/resource on Earth. Still, we have little or no control over who is exploiting ours without our consent. That is what the authors, Jehane Noujaim and Karim Amer, want to make us realize in their documentary film The Great Hack, released by Netflix on July 24, 2019. Jehane Noujaim, American documentary film director, and Karim Amer, Egyptian-American film producer and director, already worked together on The Square (2013), but it … More
Increasing spend efficiency and cutting waste are challenging, as organizations struggle to gain visibility into costs and manage IT spend effectively, according to a Flexera survey. Survey respondents are IT executives working in large enterprises with 2,000 or more employees, headquartered in North America and Europe, encompassing industries such as financial services, retail, e-commerce and industrial products. More than half are C-level executives. Managing IT spending The top challenge to managing spend effectively, cited by 86 percent … More
The post Companies are shifting spending to support their critical IT initiatives appeared first on Help Net Security.
Industrial organizations face a growing list of digital threats these days. Back in April 2019, for instance, FireEye revealed that it had observed an additional intrusion by the threat group behind the destructive TRITON malware at another critical infrastructure facility. This discovery came less than two years after the security firm discovered an attack in which […]… Read More
The post NIST SP 1800-23, Energy Sector Asset Management: Securing Industrial Control Systems appeared first on The State of Security.
Trustwave announced Trustwave Security Testing Services, a comprehensive portfolio that gives enterprises and government agencies the unprecedented ability to acquire, apply and fully manage security scanning and testing across diverse environments through a single dashboard. Through simple point-and-click navigation, users can scan business critical applications to search for unpatched vulnerabilities, exploitable code or evidence of malicious activity. Standard or highly customized penetration tests can be scheduled as needed to assess network, application and database weaknesses … More
The post Trustwave Security Testing Services connects organizations to security resources appeared first on Help Net Security.
Red Hat, the world’s leading provider of open source solutions, announced Red Hat OpenShift 4.2, the latest version of Red Hat’s trusted enterprise Kubernetes platform designed to deliver a more powerful developer experience. Red Hat OpenShift 4.2 extends Red Hat’s commitment to simplifying and automating enterprise-grade services across the hybrid cloud while empowering developers to innovate and enhance business value through cloud-native applications. Empowering developer innovation on Kubernetes Red Hat OpenShift 4.2 aims to make … More
The post Red Hat OpenShift 4.2 delivers new developer client tools and local development capabilities appeared first on Help Net Security.
Zoho Corporation, a global, privately held company that offers the most comprehensive suite of business software applications in the industry, announced the release of Catalyst, the company’s visionary full-stack serverless platform for developers. Catalyst is a simple to use yet extremely powerful offering that allows developers to create and run microservices and applications. Catalyst gives developers access to the same underlying services and frameworks that power Zoho’s 45+ applications used by more than 45 million … More
The post Zoho launches Catalyst, a full-stack serverless platform for developers appeared first on Help Net Security.
The cloud is changing the way we build and deploy applications. Most enterprises will benefit from the cloud’s many advantages through hybrid, multi, or standalone cloud architectures. A recent report showed that 42 percent of companies have a multi-cloud deployment strategy.
The advantages of the cloud include flexibility, converting large upfront infrastructure investments to smaller monthly bills (for example, the CAPEX to OPEX shift), agility, scalability, the capability to run applications and workloads at high speed, as well as high levels of reliability and availability.
However, cloud security is often an afterthought in this process. Some worry that it may slow the momentum of organizations that are migrating workloads into the cloud. Traditional IT security teams may be hesitant to implement new cloud security processes, because to them the cloud may be daunting or confusing, or just new and unknown.
Although the concepts may seem similar, cloud security is different from traditional enterprise security. Additionally, there may be industry-specific compliance and security standards to be met.
Public cloud vendors have defined the Shared Responsibility Model where the vendor is responsible for the security “of” their cloud, while their customers are responsible for the security “in” the cloud.
The Shared Responsibility Model (Source: Microsoft Azure).
Cloud deployments include multi-layered components, and the security requirements are often different per layer and per component. Often, the ownership of security is blurred when it comes to the application, infrastructure, and sometimes even the cloud platform—especially in multi-cloud deployments.
Cloud vendors, including Microsoft, offer fundamental network-layer, data-layer, and other security tools for use by their customers. Security analysts, managed security service providers, and advanced cloud customers recommend layering on advanced threat prevention and network-layer security solutions to protect against modern-day attacks. These specialized tools evolve at the pace of industry threats to secure the organization’s cloud perimeters and connection points.
Check Point is a leader in cloud security and the trusted security advisor to customers migrating workloads into the cloud.
Check Point’s CloudGuard IaaS helps protect assets in the cloud with dynamic scalability, intelligent provisioning, and consistent control across public, private, and hybrid cloud deployments. CloudGuard IaaS supports Azure and Azure Stack. Customers using CloudGuard IaaS can securely migrate sensitive workloads, applications, and data into Azure and thereby improve their security.
But how well does CloudGuard IaaS conform to Microsoft’s best practices?
Earlier this year, Dr. Reshmi Yandapalli (DAOM), Principal Program Manager of Azure Networking, published a blog post titled Best practices to consider before deploying a network virtual appliance, which outlined considerations when building or choosing Azure security and networking services. Dr. Yandapalli defined four best practices for networking and security ISVs—like Check Point—to improve the cloud experience for Azure customers.
I discussed Dr. Yandapalli’s four best practices with Amir Kaushansky, Check Point’s Head of Cloud Network Security Product Management. Amir’s responsibilities include the CloudGuard IaaS roadmap and coordination with the R&D/development team.
1. Azure accelerated networking support
Dr. Yandapalli’s first best practice is that the ISV’s Azure security solution should be available on one or more Azure virtual machine (VM) types with Azure’s accelerated networking capability, which improves networking performance. Dr. Yandapalli recommends that you “consider a virtual appliance that is available on one of the supported VM types with Azure’s accelerated networking capability.”
The diagram below shows communication between VMs, with and without Azure’s accelerated networking:
Accelerated networking to improve performance of Azure security (Source: Microsoft Azure).
Kaushansky says, “Check Point was the first certified compliant vendor with Azure accelerated networking. Accelerated networking can improve performance and reduce jitter, latency, and CPU utilization.”
According to Kaushansky—and depending on workload and VM size—Check Point and customers have observed at least a 2-3 times increase in throughput due to Azure accelerated networking.
2. Multi-Network Interface Controller (NIC) support
Dr. Yandapalli’s blog’s next best practice is to use VMs with multiple NICs to improve network traffic management via traffic isolation. For example, you can use one NIC for data plane traffic and one NIC for management plane traffic. Dr. Yandapalli states, “With multiple NICs you can better manage your network traffic by isolating various types of traffic across the different NICs.”
The diagram below shows the Azure Dv2-series with maximum NICs per VM size:
Azure Dv2-series VMs with the maximum number of NICs per size.
CloudGuard IaaS supports multi-NIC VMs, with no limit on the number of NICs. Check Point recommends using VMs with at least two NICs; VMs with a single NIC are supported but not recommended.
Depending on the customer’s deployment architecture, the customer may use one NIC for internal East-West traffic and the second for outbound/inbound North-South traffic.
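As a rough illustration of this split, traffic whose source and destination both fall inside the virtual network’s address space is East-West, while anything crossing the network boundary is North-South. The sketch below expresses that classification in Python; the 10.0.0.0/16 address space is a hypothetical example, not an Azure default:

```python
import ipaddress

# Hypothetical VNet address space used only for this sketch.
VNET = ipaddress.ip_network("10.0.0.0/16")

def traffic_direction(src: str, dst: str) -> str:
    """Classify a flow as East-West (both endpoints inside the VNet)
    or North-South (traffic crossing the network boundary)."""
    internal = [ipaddress.ip_address(addr) in VNET for addr in (src, dst)]
    return "East-West" if all(internal) else "North-South"
```

In a two-NIC deployment, a routing rule built on this kind of classification would steer each class of traffic through its dedicated interface.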
3. High Availability (HA) port with Azure load balancer
Dr. Yandapalli’s third best practice is that Azure security and networking services should be reliable and highly available.
Dr. Yandapalli suggests the use of a High Availability (HA) port load balancing rule. “You would want your NVA to be reliable and highly available, to achieve these goals simply by adding network virtual appliance instances to the backend pool of your internal load balancer and configuring a HA ports load-balancer rule,” says Dr. Yandapalli.
The diagram below shows an example usage of a HA port:
Flowchart example of a HA port with Azure load balancer.
Kaushansky says, “CloudGuard IaaS supports this functionality with a standard load balancer via Azure Resource Manager deployment templates, which customers can use to deploy CloudGuard IaaS easily in HA mode.”
4. Support for Virtual Machine Scale Sets (VMSS)
Dr. Yandapalli’s last best practice is to use Azure VMSS to provide HA. Scale sets also provide the management and automation layers for Azure security, networking, and other applications. This cloud-native functionality provides the right amount of IaaS resources at any given time, depending on application needs. Dr. Yandapalli points out that “scale sets provide high availability to your applications, and allow you to centrally manage, configure, and update a large number of VMs.”
In a similar way to the previous best practice, customers can use an Azure Resource Manager deployment template to deploy CloudGuard in VMSS mode. Check Point recommends the use of VMSS for traffic inspection of North-South (inbound/outbound) and East-West (lateral movement) traffic.
Learn more and get a free trial
As you can see from the above, CloudGuard IaaS complies with all four of Microsoft’s best practices for building and deploying Azure network security solutions.
Visit Check Point to understand how CloudGuard IaaS can help protect your data and infrastructure in Microsoft Azure and hybrid clouds and improve Azure network security. If you’re evaluating Azure security solutions, you can get a free 30-day evaluation license of CloudGuard IaaS on Azure Marketplace!
(Based on a blog published on June 4, 2019 in the Check Point Cloud Security blog.)
Mavenir, a US-headquartered network software provider and the industry’s only end-to-end cloud-native vendor for CSPs, has announced the launch and availability of its fully virtualized 4G/5G OpenRAN solution. Operators can evolve their network now to a hardware-agnostic, cost-effective solution and be ready for 5G with a software upgrade. Mavenir’s OpenRAN extends virtualization to the edge of the network and provides strategic differentiation by enabling multisource Remote Radio Units (RRUs) to interwork with … More
The post Mavenir unveils fully virtualized 4G/5G OpenRAN solution appeared first on Help Net Security.
Denim Group, the leading independent application security firm, announced an integration with Snyk, the leader in developer-first open source security. This integration allows customers to find and fix open source code vulnerabilities within the ThreadFix platform, empowering developers to better manage software security vulnerabilities through the platform’s comprehensive view of open source and proprietary code. Open source code is critical to modern application development, as it allows developers to save time and reuse community work … More
The post Denim Group and Snyk help developers manage vulnerabilities within their open source dependencies appeared first on Help Net Security.
There’s something ironic about cybercriminals getting “hacked back.” BriansClub, one of the largest underground stores for buying stolen credit card data, has itself been hacked. According to researcher Brian Krebs, the data stolen from BriansClub encompasses more than 26 million credit and debit card records taken from hacked online and brick-and-mortar retailers over the past four years, including almost eight million records uploaded to the shop in 2019 alone.
Most of the records offered up for sale on BriansClub are “dumps.” Dumps are strings of ones and zeros that can be used by cybercriminals to purchase valuables like electronics, gift cards, and more once the digits have been encoded onto anything with a magnetic stripe the size of a credit card. According to Krebs on Security, between 2015 and 2019, BriansClub sold approximately 9.1 million stolen credit cards, resulting in $126 million in sales.
Back in September, Krebs was contacted by a source who shared a plain text file with what they claimed to be the full database of cards for sale through BriansClub. The database was reviewed by multiple people who confirmed that the same credit card records could also be found in a simplified form by searching the BriansClub website with a valid account.
So, what happens when a cybercriminal, or a well-intentioned hacker in this case, wants control over these credit card records? When these online fraud marketplaces sell a stolen credit card record, that record is completely removed from the inventory of items for sale. So, when BriansClub lost its 26 million card records to a benign hacker, they also lost an opportunity to make $500 per card sold.
What good comes from “hacking back” instances like this? Besides the stolen records being taken off the internet for other cybercriminals to exploit, the data stolen from BriansClub was shared with multiple sources who work closely with financial institutions. These institutions help identify and monitor or reissue cards that show up for sale in the cybercrime underground. And while “hacking back” helps cut off potential credit card fraud, what are some steps users can take to protect their information from being stolen in the first place? Follow these security tips to help protect your financial and personal data:
- Review your accounts. Be sure to look over your credit card and banking statements and report any suspicious activity as soon as possible.
- Place a fraud alert. If you suspect that your data might have been compromised, place a fraud alert on your credit. This not only ensures that any new or recent requests undergo scrutiny, but also allows you to have extra copies of your credit report so you can check for suspicious activity.
- Consider using identity theft protection. A solution like McAfee Identity Theft Protection will help you monitor your accounts and alert you to any suspicious activity.
The post Hack-ception: Benign Hacker Rescues 26M Stolen Credit Card Records appeared first on McAfee Blogs.
SailPoint Technologies Holdings, the leader in enterprise identity governance, announced it has completed the acquisition of two companies, Orkus and OverWatchID. With these two acquisitions, SailPoint is delivering on its mission to help organizations govern access to all applications, including the rapidly emerging cloud infrastructures on which their digital businesses are built. “As the adoption of cloud applications and IaaS environments like AWS, Azure and Google Cloud continues to skyrocket, organizations today need to better … More
The post SailPoint acquires Orkus and OverWatchID to help orgs govern access to all applications appeared first on Help Net Security.
RSA Conference, the world’s leading series of information security conferences and expositions, announced that submissions for the 15th annual RSAC Innovation Sandbox Contest and second annual RSAC Launch Pad are now open. The two initiatives are designed to encourage innovation, growth and support for startups within the ever-changing cybersecurity community. Since 2005, the RSAC Innovation Sandbox Contest has brought together the most promising young companies in the cybersecurity space to compete for the title of “Most Innovative … More
The post Submissions for RSAC Innovation Sandbox Contest and RSAC Launch Pad now open appeared first on Help Net Security.
ENISA not only celebrates the nomination of its new Executive Director, but also 15 years of successfully keeping Europe cyber secure. The Management Board of ENISA designated Mr. Juhan Lepassaar in July. He will be leading the Agency, which has just achieved a permanent mandate within the provisions of the Cybersecurity Act (CSA) upgrading ENISA to a new phase of its history. Mr. Juhan Lepassaar is a citizen of Estonia. Dedicated to the European Union, … More
The post ENISA appoints Mr. Juhan Lepassaar as new Executive Director appeared first on Help Net Security.
The prices for specific types of cybercriminal tools on darknet sites continue to rise, according to a recent analysis by security firm Flashpoint. Payment card and passport data remain the most sought-after commodities on these forums, research shows.
Researchers at Cyberbit spotted a crypto mining campaign that infected more than 50% of the European airport workstations.
European airport systems were infected with a Monero cryptocurrency miner.
The infection was identified while rolling out Cyberbit’s Endpoint Detection and Response (EDR) solution at an international airport in Europe.
Experts pointed out that the Monero miners were installed on the European airport systems even though they were running industry-standard antivirus. The threat actors packaged the miner in a way that evaded detection by ordinary antivirus products.
The good news is that the miner did not impact the airport’s operations.
Cyberbit’s behavioral engine detected suspicious use of the PAExec tool to execute an application named player.exe.
Experts also observed the use of Reflective DLL Loading after running player.exe. The technique allows the attackers to remotely inject a DLL directly into a process in memory.
“This impacts the performance of other applications, as well as that of the airport facility. The use of administrative privileges also reduces the ability for security tools to detect the activity.” continues the report.
In order to gain persistence, the attackers added a registry entry for PAExec on the infected systems.
At the time, researchers were not able to determine how the attackers infected the European airport systems.
The researchers noted that the damage was limited only because the malware happened to be a cryptocurrency miner rather than a more destructive payload.
“In a worst-case scenario, attackers could have breached the IT network as a means to hop onto the airport’s OT network in order to compromise critical operational systems ranging from runway lights to baggage handling machines and the air-train, to name a few of the many standard airport OT systems that could be cyber-sabotaged to cause catastrophic physical damage,” the researchers warned.
The post Cryptocurrency miners infected more than 50% of the European airport workstations appeared first on Security Affairs.
Scammers are using the notorious Phorpiex botnet as part of an ongoing "sextortion" scheme, according to Check Point researchers. At one point, the botnet was sending out over 30,000 spam emails an hour and the attackers made about $110,000 in five months, researchers say.
A New Strain of Malware Is Terrorizing Docker Hosts
For the first time in history, researchers have discovered a crypto-jacking worm that spreads via unsecured Docker hosts.
Researchers at Unit 42 said that the new strain of malware has spread to more than 2,000 Docker hosts by using containers in the Docker Engine (Community Edition).
The new worm has been named Graboid after the fictional subterranean sandworms that made a fairly poor show of hunting humans in nineties flick Tremors. Just like its onscreen predecessors, the Graboid is quick but relatively incompetent.
Graboid is designed to work in a randomized way that researchers said holds no obvious benefits. The malware carries out both worm-spreading and crypto-jacking inside containers, picking three targets at each iteration.
Researchers wrote: "It installs the worm on the first target, stops the miner on the second target, and starts the miner on the third target. This procedure leads to a very random mining behavior.
"If my host is compromised, the malicious container does not start immediately. Instead, I have to wait until another compromised host picks me and starts my mining process. Other compromised hosts can also randomly stop my mining process. Essentially, the miner on every infected host is randomly controlled by all other infected hosts."
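The randomized behavior the Unit 42 researchers describe can be sketched as a small simulation. The host names and the `mining` state dictionary below are illustrative, not taken from the actual malware:

```python
import random

def graboid_step(hosts, mining):
    """One iteration of the worm's randomized logic as described by
    Unit 42: pick three targets, spread to the first, stop the miner
    on the second, and start the miner on the third."""
    first, second, third = random.sample(hosts, 3)
    mining.setdefault(first, False)  # "install the worm" on the first target
    mining[second] = False           # stop any miner on the second target
    mining[third] = True             # start the miner on the third target
    return first, second, third
```

Because every iteration toggles miners on randomly chosen peers, each infected host’s miner is effectively controlled by all the others, which is why the observed mining behavior looks so erratic.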
Graboid doesn't hang around for long, mining cryptocurrency Monero for an average of just over four minutes before picking new vulnerable hosts to target. The worm works by gaining an initial foothold through unsecured Docker daemons, where a Docker image was first installed to run on the compromised host.
Researchers warned that Graboid's nip could potentially turn into a powerful bite and advised organizations to safeguard their Docker hosts.
Researchers wrote: "While this crypto-jacking worm doesn’t involve sophisticated tactics, techniques, or procedures, the worm can periodically pull new scripts from the C2s, so it can easily repurpose itself to ransomware or any malware to fully compromise the hosts down the line and shouldn’t be ignored."
Tim Erlin, VP, product management and strategy at Tripwire, advised developers to tackle security sooner rather than later.
He said: "DevOps tends to favor velocity over security, but when you have to stop what you’re doing to address an incident like this, you’re losing the velocity gains you might have experienced by leaving security out of the DevOps lifecycle. Addressing security through incident response is the most expensive method to employ."
What is a Cookie?
Cookies (aka. HTTP cookies, session cookies, browser cookies, web cookies, or tracking cookies) are used by almost all websites to keep track of site users’ sessions. While you might not like the idea that a website is tracking you, cookies actually provide a very convenient function. Without them, websites you regularly visit wouldn’t be able to remember you or what content they should serve you. For example, if you added items to an online shopping cart and then navigated away without purchasing, that cart would be lost. You’d have to go back and add everything all over again when you were finally ready to buy. If it weren’t for cookies, our web experiences would be entirely different (and much more frustrating).
In cases like the previous example, the use of tracking cookies is pretty benign and helps smooth the user’s online experience overall. So, if cookies can provide a beneficial service, why do we need privacy laws like GDPR? The answer lies with a specific type of cookie: the third-party tracking cookie. These are created by domains other than the one you are actively visiting. They run silently in the background, tracking you and your online habits without your notice and compiling long-term records of your browsing behavior. They are typically used by advertisers to serve ads “relevant” to the user even as they navigate unrelated parts of the web.
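The difference is easy to see in the headers themselves. Using Python’s standard `http.cookies` module, a short-lived first-party session cookie and a year-long tracking-style cookie look like this (the names `cart_id` and `visitor_id` are made up for illustration):

```python
from http.cookies import SimpleCookie

# First-party session cookie: no expiry, so it dies with the browser session.
session = SimpleCookie()
session["cart_id"] = "abc123"
session["cart_id"]["path"] = "/"
session["cart_id"]["httponly"] = True

# Tracking-style cookie: a long Max-Age keeps it around for a year.
tracker = SimpleCookie()
tracker["visitor_id"] = "u-42"
tracker["visitor_id"]["max-age"] = 60 * 60 * 24 * 365

print(session.output())  # Set-Cookie header for the session cookie
print(tracker.output())  # Set-Cookie header carrying the year-long Max-Age
```

The only real difference between a convenient cookie and a long-term tracker is who sets it and how long it is told to live.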
Who Serves Cookies and Why?
By far the most prolific servers of third-party cookies are Google and Facebook. To help businesses target and track advertisements, Google and Facebook both suggest embedding a tracking pixel—which is just a short line of code—into business websites. These pixels then serve up cookies, which allow the site owner to track individual user and session information.
The tracking doesn’t stop there. To optimize their marketing tools for all users, Google and Facebook both track and store this data in their own databases for processing through their own algorithms. Even if you’re not currently logged in to Facebook, your session data can still be tracked by your IP address.
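A tracking pixel is conceptually tiny: an endpoint that returns a one-pixel image, logs the request context, and pins a long-lived cookie to the visitor. The sketch below models this as a bare WSGI app; the `visitor_id` cookie, the `u-42` identifier, and the `page` query parameter are all invented for illustration:

```python
from urllib.parse import parse_qs

# Minimal 1x1 transparent GIF served as the "pixel".
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\xff\xff\xff\x00\x00\x00"
         b"!\xf9\x04\x01\x00\x00\x00\x00"
         b",\x00\x00\x00\x00\x01\x00\x01\x00\x00\x02\x02D\x01\x00;")

def pixel_app(environ, start_response):
    """Serve the pixel, log who loaded it and from which page, and
    (re)issue a long-lived visitor cookie so future hits are linkable."""
    page = parse_qs(environ.get("QUERY_STRING", "")).get("page", ["?"])[0]
    cookies = environ.get("HTTP_COOKIE", "")
    visitor = "u-42"  # a real tracker would mint a fresh unique ID here
    if "visitor_id=" in cookies:
        visitor = cookies.split("visitor_id=")[1].split(";")[0]
    print(f"pixel hit: visitor={visitor} page={page}")
    start_response("200 OK", [
        ("Content-Type", "image/gif"),
        ("Set-Cookie", f"visitor_id={visitor}; Max-Age=31536000; Path=/"),
    ])
    return [PIXEL]
```

The embedding site would include this as something like `<img src="https://tracker.example/pixel.gif?page=home">`; every page that carries the tag reports back to the same tracking domain.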
What is People-Based Targeting?
Google and Facebook’s ad platforms work incredibly well because they pair cookie data with an existing bank of user data that most of us have willingly (or unwillingly) given them. Your Facebook account, Instagram account, Gmail, and Google Chrome accounts are all linked to larger systems that inform sophisticated advertising networks how to appeal to you, specifically, as a consumer. This way, websites can serve you ad content you’re likely to click on, no matter which sites you’re actively visiting. Combining traditional cookie tracking with these types of in-depth user profiles is called “people-based targeting” and it’s proven to be an incredibly powerful marketing tactic.
How to Protect Your Data
The sad truth is that you’ll never fully escape tracking cookies, and, frankly, you probably wouldn’t want to. As mentioned above, they streamline your online experiences in a pretty significant way. What you can do is reduce the breadth of their reach in your digital life. Here are a few key ways to do that.
- Stay vigilant. Be sure to read the privacy policies before you accept them. This advice goes beyond the GDPR-compliant pop-ups that have become so prevalent in the last year. Keep in mind that tech giants are often interconnected, so it’s important to be aware of all the privacy policies you’re being asked to accept.
- Clean house. You don’t have to do it often, but clear your cookie cache every once in a while. There are plusses and minuses here; clearing your cache will wipe away any long-term tracking cookies, but it will also wipe out your saved login information. But don’t let that deter you! Despite that sounding like a hassle, you may find your browser performance improves. Exact steps for how to clear your cookies will depend on your browser, but you’ll find plenty of guides online. Don’t forget to clear the cache on your mobile phone as well.
- Use a VPN. Most of all, we recommend installing a virtual private network (VPN) on all of your devices. VPNs wrap your web traffic in a tunnel of encryption, which will prevent tracking cookies from following you around the web. Make sure you use a reputable VPN from a trusted source, such as Webroot® WiFi Security. A number of the supposedly free VPN options may just sell your data to the highest bidder themselves.
Cookie tracking and digital ad delivery are growing more sophisticated every day. Check back here for the latest on how these technologies are evolving, and how you can prepare yourself and your family to stay ahead.
The post Cookies, Pixels, and Other Ways Advertisers are Tracking You Online appeared first on Webroot Blog.
Imposter Emails Plague Healthcare Industry
A study looking at cyber-attacks on the healthcare industry has found that 95% of targeted companies encounter emails spoofing their own trusted domain.
To create the Protecting Patients, Providers, and Payers 2019 Healthcare Threat Report, cybersecurity company Proofpoint analyzed nearly a year’s worth of cyber-attacks against care providers, pharmaceutical/life sciences organizations, and health insurers.
Hundreds of millions of malicious emails later, it was clear to researchers that cyber-criminals were not just attacking infrastructure, but were also using email to directly target people.
Analyzing data spanning the second quarter of 2018 to the first quarter of 2019, researchers found that at each healthcare organization attacked, an average of 65 staff members were targeted.
Researchers observed a preference for certain keywords in the spoof emails attackers sent when attempting to con money or information out of the patients and business partners of healthcare organizations. When sending emails designed to look like they came from a healthcare provider, criminals commonly used the words "payment," "request," and "urgent" in the subject line.
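A crude version of the signal described above—the lure keywords alone—can be expressed in a few lines. This is only an illustration of the pattern; a real secure email gateway would combine keyword signals with sender authentication, domain-alignment checks, and many other indicators:

```python
# Subject-line keywords Proofpoint observed in healthcare-themed
# impostor emails.
LURE_KEYWORDS = ("payment", "request", "urgent")

def looks_like_lure(subject: str) -> bool:
    """Flag a subject line containing any of the observed lure words.
    Keyword matching alone is far too weak as a detection method;
    this only illustrates the pattern the researchers describe."""
    lowered = subject.lower()
    return any(word in lowered for word in LURE_KEYWORDS)
```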
Healthcare organizations targeted by impostor emails received 43 messages of this type in Q1 2019—a 300% jump from a year ago and more than five times the volume in Q1 2017. Not a single organization analyzed in the study saw a decrease in impostor attacks over that period, and more than half were attacked more often in Q1 2019 than they were in Q1 2017.
The average impostor attack spoofed 15 healthcare staff members across multiple messages.
According to researchers, threat actors were adept at knowing just what to put in an email to spur healthcare staff into transferring money or sharing sensitive information.
Researchers wrote: "Attackers have grown skilled at researching their targets and using social engineering to exploit human nature. Some lures are just too well researched, expertly crafted, and psychologically potent to resist every time.
"Social engineering works because it taps into the way the human brain works. It uses deep-rooted impulses—such as fear, desire, obedience, and empathy—and turns them against you. And it hijacks your normal thought process to spur you to act on attackers’ behalf."
Morning was the attackers' favorite time to strike, with the largest volume of impostor email sent between 7 a.m. and 1 p.m. in the time zone of the targeted organization.
In the latest move to expand its edge computing business, Intel Corp. agreed on Oct. 15 to purchase Smart Edge, an edge computing platform and indirect subsidiary of Pivot, for US$27 million.
Smart Edge is a virtualized multi-access edge computing (MEC) platform that provides services to enterprises and businesses at the network edge. The technology enables services including personalized ads, digital signage, VR and AR, and product information, and it can be linked with various IoT devices to provide data analytics.
The platform can also create caching points to quickly deliver information with lower latency. Decentralizing data delivery across many nodes in the network is faster than delivering data from large, centralized data centres.
By scooping up Pivot’s Smart Edge, Intel is looking to further expand into the 5G market, where IoT and edge computing are expected to boom. In its press release, the company said it expects the 5G silicon addressable market to reach $65 billion by 2023.
Intel exited the 5G smartphone modem business in April this year and transitioned its focus towards network infrastructure and data centres.
Reuters reported that Smart Edge did not generate much revenue in the first half of 2019.
The acquisition is expected to complete in the fourth quarter of 2019. Intel said around 25 Smart Edge employees will join Intel when the transaction closes.
Recruitment Sites Expose Personal Data of 250k Jobseekers
The personal details of 250,000 American and British jobseekers have been exposed after two online recruitment companies failed to set their cloud storage folders as private.
Each company stored the resumes of hopeful job applicants in cloud storage folders known as buckets. The buckets were provided by the world's biggest cloud service, Amazon Web Services (AWS), which stores data in servers connected to the internet.
Applicants' data was exposed when both companies set the privacy settings on their buckets to public instead of private. This error meant that the resume of someone who applied for a job could be viewed and also downloaded by anyone who knew the location of the buckets.
Authentic Jobs, whose client list includes accounting firm EY and newspaper the New York Times, made at least 221,130 resumes publicly accessible. A further 29,202 resumes were exposed by app Sonic Jobs, which international hotel chains Marriott and InterContinental often use to recruit new staff.
According to Sky News, which revealed the bucket-related breaches yesterday, the total number of resumes exposed may be higher.
After being warned of the exposure by Sky News, both companies changed their bucket settings to private.
"We take security and privacy very seriously and are looking into how this happened," Authentic Jobs said in an email.
Security researcher Gareth Llewellyn, who discovered the bucket breaches, said: "By finding and closing these buckets we can protect people who placed their trust in these businesses and—hopefully—start drawing attention to the dangers of storing personal data in a woefully insecure manner."
Authentic and Sonic will now join Verizon, Dow Jones, GoDaddy, and WWE on a growing list of organizations that have exposed data via misconfigured, publicly accessible AWS buckets.
Llewellyn said that the onus is on companies to ensure the data that they store in the cloud is being stored safely.
"Just because they leveraged a service like AWS, or even outsourced to a third party entirely, doesn't preclude them from ensuring the data entrusted to them is safe," he said.
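The misconfiguration behind this kind of leak is typically a bucket ACL or public-access setting that grants read access to everyone. As a rough illustration (this is not AWS tooling; the dictionaries below merely mimic the shape of the grant list that AWS's GetBucketAcl API returns), a simple audit might flag public grants like this:

```python
# Flag ACL grants that expose a bucket to the public.
# The grant structure mimics the JSON shape returned by AWS's
# GetBucketAcl API; in real use you would fetch it with an AWS SDK.

PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(grants):
    """Return the permissions granted to the public at large."""
    exposed = []
    for grant in grants:
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GROUPS:
            exposed.append(grant.get("Permission"))
    return exposed

# Hypothetical ACL for a resume bucket left world-readable.
acl = [
    {"Grantee": {"Type": "CanonicalUser", "ID": "abc123"},
     "Permission": "FULL_CONTROL"},
    {"Grantee": {"Type": "Group",
                 "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
     "Permission": "READ"},
]

print(public_grants(acl))  # ['READ'] -> anyone on the internet can fetch objects
```

Any non-empty result means the bucket's contents are one guessed URL away from exposure.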
IoT security focuses on protecting networks and connected devices in the Internet of Things. For readers new to IoT, it is a system of connected computing devices, digital and mechanical machines, objects, animals, and people, each with a unique identifier and the ability to transfer data over the network automatically. Once these devices are on the internet, they face serious vulnerabilities without proper protection.
Some recent high-profile incidents have made IoT security a pressing topic. Cybercriminals often use ordinary connected devices to infiltrate and attack a network, so it is crucial to implement safety standards that protect IoT networks and their agents.
Challenges in IoT Security
IoT security faces difficulties in establishing end-to-end protection of devices and networks. Many networked appliances are relatively new, and protection is often not a crucial consideration in product design. Moreover, because the IoT market is still in its infancy, manufacturers and product designers are eager to bring their products to market quickly, and often disregard security even in the planning phase.
A primary issue in IoT security is the use of default or hardcoded passwords, which can result in security breaches; even when users can change them, weak replacement passwords still leave devices open to infiltration. Moreover, IoT devices are resource-constrained and lack the compute capabilities needed to implement robust security. For instance, a temperature or humidity sensor can’t handle measures such as advanced encryption.
Furthermore, IoT devices hardly ever receive patches and updates because, from the viewpoint of the manufacturer, built-in security is costly, limits the functionality, and slows down development.
Legacy assets can’t take advantage of IoT security, and replacing the infrastructure is expensive, so experts use smart sensors retrofitted on them. However, these assets haven’t been updated and don’t have protection against modern threats. As such, an attack is very feasible.
Many systems offer limited updates, and security can lapse if the organization doesn’t provide additional support. Thus, additional protection can be challenging because various IoT devices remain in the network for extended periods.
Moreover, there are no industry-accepted criteria for IoT safety. Frameworks exist, but industry organizations and large corporations can’t agree on a single structure. Each has its specific standards, while industrial IoT has incompatible and proprietary standards. Thus, the numerous measures make it almost impossible to secure systems and ensure interoperability.
The convergence of operational technology and IT networks creates various challenges for security teams, many of whom are tasked with ensuring end-to-end security and protecting systems outside their expertise. The resulting learning curve can compromise protection, as IT personnel must acquire the skill sets needed to handle IoT security.
Organizations must take the necessary steps to establish shared responsibility for security, with manufacturers, service providers, and end-users each playing an important role. Manufacturers and service providers must prioritize privacy and device protection, for instance by enabling authorization and encryption by default. End-users must also accept part of the burden by taking necessary precautions such as changing passwords, using security software, and installing patches as needed.
IoT Security as an Obstacle to Technology Adoption
The security of the Internet of Things is a primary obstacle to successful technology adoption. This observation is correct even when you’re only in the early stages of deployment planning.
We look at three significant angles of this complicated issue, especially when you’re laying out the deployment of IoT sensors in your setup:
- Software security patches
- Discovery and networking
- Physical device security
Some Internet of Things sensors have only minimal built-in computing capabilities. Therefore, these devices may not be able to accept remote updates and patches or run a security-software agent. This is a serious and worrisome problem given the near-daily discovery of software vulnerabilities that target IoT. If there’s no capability to patch these loopholes upon detection, you have a pressing issue.
Furthermore, some devices simply don’t have decent security and aren’t patchable. The only way to solve that dilemma is to find a different product that performs the same function while providing better protection.
Discovery and Networking
One of the toughest problems to solve is securing the connections between IoT sensors and the backend. A majority of organizations don’t even know about all the devices on their network, which makes device discovery critical for IoT network security.
A primary explanation for this lack of visibility is the operational nature of IoT technology. IT staff no longer has sole administration of the network, because line-of-business personnel can connect devices to the system with no protocol for informing the tech group in charge of maintaining network security. Network operations people, who used to control the topology of the entire network, now have an unaccustomed headache.
Aside from the close cooperation of IT personnel with the operations staff of the business, network scanners can automatically detect devices on the system through techniques such as network traffic analysis, whitelists, and device profiles. These factors ensure proper provisioning and monitoring of device connections on the network.
Physical access is frequently a significant but straightforward concern for traditional IT security: data centers have strict security, and switches and routers sit in locations where unauthorized people can’t access or tamper with them discreetly.
For the Internet of Things, however, such well-established security practices aren’t always applicable. A few IoT deployments are easy to secure: an intruder can’t tinker with state-of-the-art diagnostic equipment in a secured hospital, and a hacker can’t fiddle with intricate robotic manufacturing equipment on a limited-access factory floor. Compromises can still occur, though; a determined attacker remains a threat even in secure locations.
By contrast, equipment around a city, such as smart parking meters, traffic cameras, and noise sensors, is easily accessible to the public. Soil sensors in agricultural areas and other technology in sufficiently remote places aren’t safe either.
Diversified solutions are in place. Enclosures and cases can stop some attackers, though they may be impractical in certain situations, and video surveillance of these machines can itself become a point of intrusion. The IoT Security Foundation therefore advocates disabling unused ports on a device, except where the device needs them to perform its functions. It also recommends implementing tamper-proof circuit boards and embedding those circuits in resin.
The post Prerequisites of IoT Security: Software, Network, Physical appeared first on .
The Chrome Security team values having multiple lines of defense. Web browsers are complex, and malicious web pages may try to find and exploit browser bugs to steal data. Additional lines of defense, like sandboxes, make it harder for attackers to access your computer, even if bugs in the browser are exploited. With Site Isolation, Chrome has gained a new line of defense that helps protect your accounts on the Web as well.
Site Isolation ensures that pages from different sites end up in different sandboxed processes in the browser. Chrome can thus limit the entire process to accessing data from only one site, making it harder for an attacker to steal cross-site data. We started isolating all sites for desktop users back in Chrome 67, and now we’re excited to enable it on Android for sites that users log into in Chrome 77. We've also strengthened Site Isolation on desktop to help defend against even fully compromised processes.
Site Isolation helps defend against two types of threats. First, attackers may try to use advanced "side channel" attacks to leak sensitive data from a process through unexpected means. For example, Spectre attacks take advantage of CPU performance features to access data that should be off limits. With Site Isolation, it is harder for the attacker to get cross-site data into their process in the first place.
Second, even more powerful attackers may discover security bugs in the browser, allowing them to completely hijack the sandboxed process. On desktop platforms, Site Isolation can now catch these misbehaving processes and limit their access to cross-site data. We're working to bring this level of hijacked process protection to Android in the future as well.
Thanks to this extra line of defense, Chrome can now help keep your web accounts even more secure. We are still making improvements to get the full benefits of Site Isolation, but this change gives Chrome a solid foundation for protecting your data.
If you’d like to learn more, check out our technical write up on the Chromium blog.
In Part One of this blog series, Steve Miller outlined what PDB paths are, how they appear in malware, how we use them to detect malicious files, and how we sometimes use them to make associations about groups and actors.
As Steve continued his research into PDB paths, we became interested in applying more general statistical analysis. The PDB path as an artifact poses an intriguing use case for a couple of reasons.
First, the PDB artifact is not directly tied to the functionality of the binary. As a byproduct of the compilation process, it contains information about the development environment, and by proxy, the malware author themselves. Rarely do we encounter static malware features with such an interesting tie to the human behind the keyboard, rather than the functionality of the file.
Second, file paths are an incredibly complex artifact with many different possible encodings. We had personally been dying to find an excuse to spend more time figuring out how to parse and encode paths in a more useful way. This presented an opportunity to dive into this space and test different approaches to representing file paths in various models.
The objectives of our project were:
- Build a large data set of PDB paths and apply some statistical methods to find potentially new signature terms and logic.
- Investigate whether applying machine learning classification approaches to this problem could improve our detection above writing hand-crafted signatures.
- Build a PDB classifier as a weak signal for binary analysis.
To start, we began gathering data. Our dataset, pulled from internal and external sources, started with over 200,000 samples. Once we deduplicated by PDB path, we had around 50,000 samples. Next, we needed to consistently label these samples, so we considered various labeling schemes.
Labeling Binaries With PDB Paths
For many of the binaries we had internal FireEye labels, and for others we looked up hashes on VirusTotal (VT) to have a look at their detection rates. This covered the majority of our samples. For a relatively small subset we had disagreements between our internal engine and VT results, which merited a slightly more nuanced policy. The disagreement was most often that our internal assessment determined a file to be benign, but the VT results showed a nonzero percentage of vendors detecting the file as malicious. In these cases we plotted the "VT ratio": that is, the percentage of vendors labeling the files as malicious (Figure 1).
Figure 1: Ratio of vendors calling file bad/total number of vendors
The vast majority of these samples had VT detection ratios below 0.3, and in those cases we labeled the binaries as benign. For the remainder of samples we tried two strategies – marking them all as malicious, or removing them from the training set entirely. Our classification performance did not change much between these two policies, so in the end we scrapped the remainder of the samples to reduce label noise.
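The labeling policy described above can be sketched as a small function (the 0.3 threshold and the drop-disputed policy come from the text; the function itself is our illustration, not FireEye tooling):

```python
# Label a sample from an internal verdict plus a VirusTotal detection ratio.
# Samples the internal engine calls benign but that VT partially flags are
# labeled by the 0.3 ratio threshold; ambiguous ones are dropped from the
# training set to reduce label noise.

def label_sample(internal_label, vt_ratio, drop_disputed=True):
    """Return 'malicious', 'benign', or None (drop from training set)."""
    if internal_label == "malicious":
        return "malicious"
    if vt_ratio < 0.3:          # vast majority of disputed samples fell here
        return "benign"
    return None if drop_disputed else "malicious"

print(label_sample("benign", 0.05))  # benign
print(label_sample("benign", 0.60))  # None -> dropped
```

Keeping the disputed remainder as malicious was the alternative policy; per the text, classification performance barely changed between the two.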
Next, we had to start building features. This is where the fun began. Looking at dozens and dozens of PDB paths, we simply started recording various things that ‘pop out’ to an analyst. As noted earlier, a file path contains tons of implicit information, beyond simply being a string-based artifact. A couple of analogies we have found useful: a file path is akin to a geographical location in its representation of a place on the file system, or like a sentence in that it reflects a series of dependent items.
To further illustrate this point, consider a simple file path such as:
C:\Users\World\Desktop\duck\Zbw138ht2aeja2.pdb
This path tells us several things:
- This software was compiled on the system drive of the computer
- In a user profile, under user ‘World’
- The project is managed on the Desktop, in a folder called ‘duck’
- The filename has a high degree of entropy and is not very easy to remember
In contrast, consider something such as:
D:\VSCORE5\BUILD\VSCore\release\EntVUtil.pdb
- Compilation on an external or secondary drive
- Within a non-user directory
- Contains development terms such as ‘BUILD’ and ‘release’
- With a sensible, semi-memorable file name
These differences seem relatively straightforward and make intuitive sense as to why one might be representative of malware development whereas the other represents a more “legitimate-looking” development environment.
How do we represent these differences to a model? The easiest and most obvious option is to calculate some statistics on each path. Features such as folder depth, path length, entropy, and counting things such as numbers, letters, and special characters in the PDB filename are easy to compute.
However, upon evaluation against our dataset, these features did not help to separate the classes very well. The following are some graphics detailing the distributions of these features between our classes of malicious and benign samples:
While there is potentially some separation between benign and malicious distributions, these features alone would likely not lead to an effective classifier (we tried). Additionally, we couldn’t easily translate these differences into explicit detection rules. There was more information in the paths that we needed to extract, so we began to look at how to encode the directory names themselves.
As with any dataset, we had to undertake some steps to normalize the paths. For example, the occurrence of individual usernames, while perhaps interesting from an intelligence perspective, would be represented as distinct entities when in fact they have the same semantic meaning. Thus, we had to detect and replace usernames with <username> to normalize this representation. Other folder idiosyncrasies such as version numbers or randomly generated directories could similarly be normalized into <version> or <random>.
A typical normalized path might therefore go from this:
C:\Users\jsmith\Documents\Visual Studio 2013\Projects\mkzyu91952\mkzyu91952\obj\x86\Debug\mkzyu91952.pdb
c:\users\<username>\documents\visual studio 2013\projects\<random>\<random>\obj\x86\debug\mkzyu91952.pdb
You may notice that the PDB filename itself was not normalized. In this case we wanted to derive features from the filename itself, so we left it. Other approaches could be to normalize it, or even to make note that the same filename string ‘mkzyu91952’ appears earlier in the path. There are endless possible features when dealing with file paths.
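Normalization like this can be approximated with a couple of regular expressions (a sketch under our own assumptions; reliably detecting `<random>` directories would need an entropy or dictionary heuristic rather than anything shown here):

```python
import re

def normalize(pdb_path):
    """Lowercase a PDB path and normalize user and version directories."""
    p = pdb_path.lower()
    # Replace the account name that follows \users\ or \documents and settings\
    p = re.sub(r"(?<=\\users\\)[^\\]+", "<username>", p, count=1)
    p = re.sub(r"(?<=\\documents and settings\\)[^\\]+", "<username>", p, count=1)
    # Replace dotted version-number directories such as \1.2.3\
    p = re.sub(r"\\\d+(\.\d+)+(?=\\)", r"\\<version>", p)
    return p

print(normalize(r"C:\Users\jsmith\Documents\Visual Studio 2013"
                r"\Projects\mkzyu91952\obj\x86\Debug\mkzyu91952.pdb"))
```

The key point is that distinct surface strings with the same semantic meaning (different usernames, different version numbers) collapse to one token before any statistics are computed.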
Once we had normalized directories, we could start to “tokenize” each directory term, to start performing some statistical analysis. Our main goal of this analysis was to see if there were any directory terms that highly corresponded to maliciousness, or see if there were any simple combinations, such as pairs or triplets, that exhibited similar behavior.
We did not find any single directory name that easily separated the classes. That would be too easy. However, we did find some general correlations with directories such as “Desktop” being somewhat more likely to be malicious, and use of shared drives such as Z: to be more indicative of a benign file. This makes intuitive sense given the more collaborative environment a “legitimate” software development process might require. There are, of course, many exceptions and this is what makes the problem tricky.
Another strong signal we found, at least in our dataset, is that when the word “Desktop” was in a non-English language and particularly in a different alphabet, the likelihood of that PDB path being tied to a malicious file was very high (Figure 2). While potentially useful, this can be indicative of geographical bias in our dataset, and further research would need to be done to see if this type of signature would generalize.
Figure 2: Unicode desktop folders from malicious samples
Various Tokenizing Schemes
In recording the directories of a file path, there are several ways you can represent the path. Let’s use the normalized path from the previous section to illustrate these different approaches.
Bag of Words
One very simple way is the “bag-of-words” approach, which simply treats the path as the distinct set of directory names it contains—for the normalized path above, {users, <username>, documents, visual studio 2013, projects, <random>, obj, x86, debug}.
Another approach we considered was recording the position of each directory name, as a distance from the drive. This retained more information about depth, such that a ‘build’ directory on the desktop would be treated differently than a ‘build’ directory nine directories further down. For this purpose, we excluded the drives since they would always have the same depth.
Finally, we explored breaking paths into n-grams; that is, as a distinct set of n adjacent directories. For example, a 2-gram representation pairs each directory with its neighbor, such as (users, <username>), (<username>, documents), and so on.
We tested each of these approaches and while positional analysis and n-grams contained more information, in the end, bag-of-words seemed to generalize best. Additionally, using the bag-of-words approach made it easier to extract simple signature logic from the resultant models, as will be shown in a later section.
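The three representations can be sketched together, using the normalized path from the earlier section (our own illustration of the schemes described above):

```python
def tokenize(path):
    """Return bag-of-words, positional, and 2-gram views of a path."""
    dirs = path.split("\\")[1:-1]        # drop drive letter and filename
    bag = set(dirs)                                       # bag of words
    positional = {(i, d) for i, d in enumerate(dirs)}     # depth-aware
    bigrams = {(a, b) for a, b in zip(dirs, dirs[1:])}    # adjacent pairs
    return bag, positional, bigrams

path = (r"c:\users\<username>\documents\visual studio 2013"
        r"\projects\<random>\<random>\obj\x86\debug\mkzyu91952.pdb")
bag, positional, bigrams = tokenize(path)
print(bag)
print(("<random>", "<random>") in bigrams)  # adjacent random dirs survive as a pair
```

Note that the bag-of-words set collapses the repeated `<random>` directory, while the positional and bigram views retain depth and adjacency information respectively.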
Since we had the bag-of-words vectors created for each path, we were also able to evaluate term co-occurrence across benign and malicious files. When we evaluated the co-occurrence of pairs of terms, we found some other interesting pairings that indeed paint two very different pictures of development environments (Figure 3).
Correlated with Malicious Files
Correlated with Benign Files
documents, visual studio 2012
local, temporary projects
appdata, temporary projects
Figure 3: Correlated pairs with malicious and benign files
Our bag-of-words representation of the PDB paths gave us a vocabulary of nearly 70,000 distinct terms. The vast majority of these terms occurred once or twice in the entire dataset, resulting in what is known as a ‘long-tailed’ distribution. Figure 4 is a graph of only the top 100 most common terms in descending order.
Figure 4: Long tailed distribution of term occurrence
As you can see, the counts drop off quickly, and you are left dealing with an enormous amount of terms that may only appear a handful of times. One very simple way to solve this problem, without losing a ton of information, is to simply cut off a keyword list after a certain number of entries. For example, take the top 50 occurring folder names (across both good and bad files), and save them as a keyword list. Then match this list against every path in the dataset. To create features, one-hot encode each match.
Rather than arbitrarily setting a cutoff, we wanted to know a bit more about the distribution and understand where might be a good place to set a limit – such that we would cover enough of the samples without drastically increasing the number of features for our model. We therefore calculated the cumulative number of samples covered by each term, as we iterated down the list from most common to least common. Figure 5 is a graph showing the result.
Figure 5: Cumulative share of samples covered by distinct terms
As you can see, with only a small fraction of the terms, we can arrive at a significant percentage of the cumulative total PDB paths. Setting a simple cutoff at about 70% of the dataset resulted in roughly 230 terms for our total vocabulary. This gave us enough information about the dataset without blowing up our model with too many features (and therefore, dimensions). One-hot encoding the presence of these terms was then the final step in featurizing the directory names present in the paths.
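The cutoff selection can be sketched as follows (the 70% coverage figure comes from the text; the toy samples and everything else are illustrative):

```python
from collections import Counter

def build_vocab(samples, coverage=0.70):
    """Keep the most common terms until `coverage` of samples contain
    at least one kept term; the result is the one-hot vocabulary."""
    freq = Counter(t for s in samples for t in s)
    vocab, covered = [], 0
    remaining = [set(s) for s in samples]
    for term, _ in freq.most_common():
        vocab.append(term)
        covered += sum(1 for s in remaining if term in s)
        remaining = [s for s in remaining if term not in s]
        if covered / len(samples) >= coverage:
            break
    return vocab

def one_hot(sample, vocab):
    """Binary feature vector: does the path contain each vocabulary term?"""
    return [1 if term in sample else 0 for term in vocab]

samples = [{"desktop", "debug"}, {"desktop", "release"},
           {"build", "release"}, {"desktop"}, {"temp"}]
vocab = build_vocab(samples)
print(vocab, one_hot({"desktop", "debug"}, vocab))
```

On the toy data, two terms already cover 80% of the samples; the long tail ("debug", "build", "temp") never makes it into the vocabulary, which is exactly the dimensionality control described above.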
YARA Signatures Do Grow on Trees
Armed with some statistical features, as well as one-hot encoded keyword matches, we began to train some models on our now-featurized dataset. In doing so, we hoped to use the model training and evaluation process to give us insights into how to build better signatures. If we developed an effective classification model, that would be an added benefit.
We felt that tree-based models made sense for this use case for two reasons. First, tree-based models have worked well in the past in domains requiring a certain amount of interpretability and using a blend of quantitative and categorical features. Second, the features we used are largely things we could represent in a YARA signature. Therefore, if our models built boolean logic branches that separated large numbers of PDB files, we could potentially translate these into signatures. This is not to say that other model families could not be used to build strong classifiers. Many other options ranging from Logistic Regression to Deep Learning could be considered.
We fed our featurized training set into a Decision Tree, having set a couple of hyperparameters such as max depth and minimum samples per leaf. We were also able to use a sliding scale of these hyperparameters to dynamically create trees and, essentially, see what shook out. Examining a trained decision tree such as the one in Figure 6 allowed us to immediately build new signatures.
Figure 6: Example decision tree and decision paths
We found several other interesting tidbits within our decision trees. Some terms that resulted in completely or almost-completely malicious subgroups are:
We also found the term ‘WindowsApplication1’ to be quite useful. 89% of the files in our dataset containing this directory were malicious. Cursory research indicates that this is the default directory generated when using Visual Studio to compile a Windows binary. Once again, this makes some intuitive sense for finding malware authors. Training and evaluating decision trees with various parameters turned out to be a hugely productive exercise in discovering potential new signature terms and logic.
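The tree-to-signature idea can be approximated even without a model library: rank each term by the fraction of containing samples that are malicious, and turn the strongest ones into YARA-style conditions. The sketch below is our own illustration of that idea, not FireEye's actual tooling (the 89% figure for 'WindowsApplication1' is from the text; the data here is toy):

```python
from collections import defaultdict

def term_ratios(labeled_samples):
    """Map each directory term to (malicious_share, support)."""
    hits = defaultdict(lambda: [0, 0])      # term -> [malicious, total]
    for terms, label in labeled_samples:
        for t in terms:
            hits[t][1] += 1
            if label == "malicious":
                hits[t][0] += 1
    return {t: (m / n, n) for t, (m, n) in hits.items()}

def yara_candidates(ratios, min_ratio=0.85, min_support=2):
    """Emit YARA-style substring conditions for strongly malicious terms."""
    return [f'$pdb contains "\\\\{t}\\\\"'
            for t, (r, n) in sorted(ratios.items())
            if r >= min_ratio and n >= min_support]

data = [({"windowsapplication1", "debug"}, "malicious"),
        ({"windowsapplication1", "release"}, "malicious"),
        ({"build", "release"}, "benign"),
        ({"debug"}, "benign")]
ratios = term_ratios(data)
print(yara_candidates(ratios))
```

A decision tree adds value over this single-term ranking by finding conjunctions of terms (branches) that separate the classes, but the extraction-to-signature step is the same.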
Classification Accuracy and Findings
Since we now had a large dataset of PDB paths and features, we wanted to see if we could train a traditional classifier to separate good files from bad. Using a Random Forest with some tuning, we were able to achieve an average accuracy of 87% over 10 cross validations. However, while our recall (the percentage of bad things we could identify with the model) was relatively high at 89%, our malware precision (the share of those things we called bad that were actually bad) was far too low, hovering at or below 50%. This indicates that using this model alone for malware detection would result in an unacceptably large number of false positives, were we to deploy it in the wild as a singular detection platform. However, used in conjunction with other tools, this could be a useful weak signal to assist with analysis.
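To see why 89% recall with roughly 50% precision is unusable as a standalone detector, it helps to work through the raw confusion counts (toy numbers chosen to match those rates; our own illustration):

```python
def metrics(tp, fp, fn, tn):
    """Precision, recall, and accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, accuracy

# 1,000 malicious and 9,000 benign samples: 89% recall means 890 caught,
# but 50% precision means another 890 benign files falsely flagged.
p, r, a = metrics(tp=890, fp=890, fn=110, tn=8110)
print(f"precision={p:.2f} recall={r:.2f} accuracy={a:.2f}")
# Every true detection costs an analyst one false positive to triage.
```

Headline accuracy looks fine because benign samples dominate, which is why precision, not accuracy, is the number that rules this model out as a singular detection platform.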
Conclusion and Next Steps
While our journey of statistical PDB analysis did not yield a magic malware classifier, it did yield a number of useful findings that we were hoping for:
- We developed several file path feature functions which are transferable to other models under development.
- By diving into statistical analysis of the dataset, we were able to identify new keywords and logic branches to include in YARA signatures. These signatures have since been deployed and discovered new malware samples.
- We answered a number of our own general research questions about PDB paths, and were able to dispel some theories we had not fully tested with data.
While building an independent classifier was not the primary goal, improvements can surely be made to improve the end model accuracy. Generating an even larger, more diverse dataset would likely make the biggest impact on our accuracy, recall, and precision. Further hyperparameter tuning and feature engineering could also help. There is a large amount of established research into text classification using various deep learning methods such as LSTMs, which could be applied effectively to a larger dataset.
PDB paths are only one small family of file paths that we encounter in the field of cyber security. Whether in initial infection, staging, or another part of the attack lifecycle, the file paths found during forensic analysis can reveal incredibly useful information about adversary activity. We look forward to further community research on how to properly extract and represent that information.
A critical flaw in Aironet access points (APs) can be exploited by a remote attacker to gain unauthorized access to vulnerable devices.
Cisco disclosed a critical vulnerability in Aironet access points (APs)
Cisco has already released software updates that address the flaw, the company pointed out that there are no workarounds that fix this vulnerability.
The flaw is caused by insufficient access control for some URLs; an attacker could exploit it by simply requesting the unprotected URLs.
“The vulnerability is due to insufficient access control for certain URLs on an affected device. An attacker could exploit this vulnerability by requesting specific URLs from an affected AP. An exploit could allow the attacker to gain access to the device with elevated privileges.” reads the security advisory published by Cisco.
The vulnerability affects Aironet 1540, 1560, 1800, 2800, 3800 and 4800 series APs. Cisco has released fixed software releases that address the vulnerability.
Cisco revealed that there is no evidence of attacks exploiting the flaw in the wild.
Aironet APs are also affected by two high-severity flaws that can be exploited by an unauthenticated attacker to trigger a denial-of-service condition.
“A vulnerability in the Point-to-Point Tunneling Protocol (PPTP) VPN packet processing functionality in Cisco Aironet Access Points (APs) could allow an unauthenticated, remote attacker to cause an affected device to reload, resulting in a denial of service (DoS) condition.” reads the advisory.
The second flaw, tracked as CVE-2019-15264, resides in the Control and Provisioning of Wireless Access Points (CAPWAP) protocol implementation.
“A vulnerability in the Control and Provisioning of Wireless Access Points (CAPWAP) protocol implementation of Cisco Aironet and Catalyst 9100 Access Points (APs) could allow an unauthenticated, adjacent attacker to cause a denial of service (DoS) condition on an affected device.”
“The vulnerability is due to improper resource management during CAPWAP message processing. An attacker could exploit this vulnerability by sending a high volume of legitimate wireless management frames within a short period of time to an affected device.”
(SecurityAffairs – Cisco Aironet APs, zero-day)
The post Critical and high-severity flaws addressed in Cisco Aironet APs appeared first on Security Affairs.
Plenty of headlines are warning about anyone’s fingerprint being able to unlock a Samsung Galaxy S10, but I’m not sure it’s quite as simple as that…
I remember talking with my marketing team in the summer of 2014 about IoT (Internet of Things) vs. IoE (Internet of Everything). Cisco had come out with IoE hoping to take the thought leadership lead from GE on the transformation known today as the Internet of Things. I was reminiscing on this during a recent lunch time run and asked myself the question I try to ask daily – “What have I been surprised to learn?”
For me learning surprises usually come in three categories. First, I am surprised because I have just never thought about the subject that way. As a physicist this was the experience when I first learned of Einstein’s Special Theory of Relativity. Basically, a different Point-of-View on the matter can give one a whole different perspective on how it works. A pragmatic, everyday example of a POV learning is rather than just worrying about the cost-of-a-service visit, one considers the cost-of-uncertainty of the visit – of not knowing when the expense will occur.
Late last month Google Project Zero researcher Maddie Stone detailed a zero-day Android privilege escalation vulnerability (CVE-2019-2215) and revealed that it is actively being exploited in attacks in the wild. She also provided PoC code that could help researchers check which Android-based devices are vulnerable and which are not. One of those researchers has decided to go further. Achieving “root” through a malicious app “The base PoC left us with a full kernel read/write primitive, essentially … More
The post Researcher releases PoC rooting app that exploits recent Android zero-day appeared first on Help Net Security.
Everything we do on a daily basis has some form of “trust” baked into it. Where you live, what kind of car you drive, where you send your children to school, who you consider good friends, what businesses you purchase from, etc. Trust instills a level of confidence that your risk is minimized and acceptable to you. Why should this philosophy be any different when the entity you need to trust is on the other end of an Internet address? In fact, because you are connecting to an entity that you cannot see or validate, a higher level of scrutiny is required before they earn your trust. What Uniform Resource Locator (URL) are you really connecting to? Is it really your banking website or new online shopping website that you are trying for the first time? How can you tell?
It’s a jungle out there. So we’ve put together five ways you can stay safe while you shop online:
Shop at sites you trust. Are you looking at a nationally or globally recognized brand? Do you have detailed insight into what the site looks like? Have you established an account on this site, and is there a history that you can track for when you visit and what you buy? Have you linked the valid URL for the site in your browser? Mistyping a URL in your browser for any site you routinely visit can lead you to a rogue website.
Use secure networks to connect. Just as important as paying attention to what you connect to is to be wary of where you connect from. Your home Wi-Fi network that you trust—okay. An open Wi-Fi at an airport, cyber café, or public kiosk—not okay. If you can’t trust the network, do not enter identifying information or your payment card information. Just ask our cybersecurity services experts to demonstrate how easy it is to compromise an open Wi-Fi network, and you’ll see why we recommend against public Wi-Fi for sensitive transactions.
Perform basic checks in your browser. Today’s modern browsers are much better at encrypted and secure connections than they were a few years ago. They use encrypted communication by leveraging a specific Internet protocol, hypertext transfer protocol secure (HTTPS). This means that there is a certificate associated with this site in your browser that is verified before you are allowed to connect and establish the encrypted channel. (Just so you know, yes, these certificates can be spoofed, but that is a problem for another day). How do you check for this certificate?
- Look up in your browser title bar. It will display the URL you are connecting to.
- Hover over and click on the lock icon.
- Note that the information says the certificate is valid. But let’s verify that: hover over and click on the certificate icon.
- The certificate is issued to Amazon from a valid Certificate Authority and is valid until 12/15/2019. Excellent.

Create strong passwords for your shopping sites. This issue is covered in another blog post, but use longer passwords, 10–12 characters, and keep them in a safe place that cannot be compromised by an unauthorized person. If a second factor is offered, use it. Many sites will send a code to your smartphone to type into a login screen to verify you are who you say you are.

Don’t give out information about yourself that seems unreasonable. If you are being asked for your Social Security number, think long and hard, and then longer and harder, about why that information should be required. And then don’t provide it until you ask a trusted source why it would be necessary. Be wary of anything on a website that does not look familiar or normal.
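For the technically inclined, the certificate check described above can also be scripted. Below is a minimal sketch using Python’s standard `ssl` and `socket` modules; the helper names are illustrative, not from any particular tool:

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after: str) -> int:
    """Parse a notAfter string as returned by ssl.getpeercert()
    (e.g. 'Dec 15 12:00:00 2019 GMT') and return the days remaining."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

def fetch_cert(host: str, port: int = 443) -> dict:
    """Open a TLS connection with full verification and return the
    peer certificate; verification fails loudly if the chain or
    hostname does not check out."""
    ctx = ssl.create_default_context()  # verifies chain and hostname
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()
```

For example, `days_until_expiry(fetch_cert("www.amazon.com")["notAfter"])` would tell you how many days the retailer’s certificate remains valid.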
We all use the Internet to shop. It is super convenient, and the return on investment is awesome. Having that new cool thing purchased in 10 minutes and delivered directly to your door—wow! Can you ever really be 100% sure that the Internet site you are visiting is legitimate, and that you are not going to inadvertently give away sensitive and/or financial information that is actually going directly into a hacker’s data collection file? Unfortunately, no. A lot of today’s scammers are very sophisticated. But as we discussed up front, this is a trust- and risk-based decision, and if you are aware that you could be compromised at any time on the Internet and are keeping your eyes open for things that just don’t look right or familiar, you have a higher probability of a safe online shopping experience.
- Visit and use sites you know and trust
- Keep the correct URLs in your bookmarks (don’t risk mistyping a URL).
- Check the certificate to ensure your connection to the site is secured by a legitimate and active certificate.
- Look for anything that is not familiar to your known experience with the site.
- If you can, do not save credit card or payment card information on the site. (If you do, you need to be aware that if that site is breached, your payment data is compromised.)
- Use strong passwords for your shopping site accounts. And use a different password for every site. (No one ring to rule them all!)
- If a site offers a second factor to authenticate you, use it.
- Check all your payment card statements regularly to look for rogue purchases.
- Subscribe to an identity theft protection service if you can. These services will alert you if your identity has been compromised.
The United States Department of Justice announced the arrest of hundreds of criminals as part of a global operation against a dark web child abuse community.
The US Department of Justice announced the arrest of hundreds of criminals as part of a global operation conducted against the crime community operating the largest dark web child porn site, ‘Welcome to Video’.
The operation involved law enforcement agencies from several countries, including the IRS-CI, US Homeland Security Investigations, the UK’s National Crime Agency, the Korean National Police of the Republic of Korea, and the German Federal Criminal Police (the Bundeskriminalamt).
Officials have arrested the administrator of the site, Jong Woo Son of South Korea (23), along with 337 suspects in 38 countries who have been charged as alleged users of the site.
Two former US federal law enforcement officials, Paul Casey Whipple and Richard Nikolai Gratkowski, were allegedly involved in the child porn site.
The US authorities issued a warrant for Son’s arrest in February 2018.
According to the indictment, the ‘Welcome to Video’ child abuse site was launched in June 2015 and operated until March 2018. The site received at least 420 BTC in three years through at least 7300 transactions.
Experts from the National Center for Missing and Exploited Children (NCMEC) are currently analyzing over 250,000 unique videos hosted on the website; 45 percent of them contain images that were not previously known to exist.
“According to the indictment, on March 5, 2018, agents from the IRS-CI, HSI, National Crime Agency in the United Kingdom, and Korean National Police in South Korea arrested Son and seized the server that he used to operate a Darknet market that exclusively advertised child sexual exploitation videos available for download by members of the site.” reads a press release published by the DoJ. “The operation resulted in the seizure of approximately eight terabytes of child sexual exploitation videos, which is one of the largest seizures of its kind.”
The operation also led to the rescue of at least 23 children living in the United States, Spain, and the United Kingdom.
According to the indictment, law enforcement experts traced the child abuse website to a server hosted in South Korea.
Experts also identified more than one million unique Bitcoin addresses associated with the site.
“Welcome To Video offered these videos for sale using the cryptocurrency Bitcoin.” continues the press release.
Though Son is currently serving an 18-month sentence in South Korea, a federal grand jury in Washington DC unsealed a 9-count indictment against him just yesterday, with the U.S. authorities seeking his extradition to face justice.
(SecurityAffairs – Child abuse, cybercrime)
The post International operation dismantled largest Dark Web Child abuse site appeared first on Security Affairs.
If you are leading a business or work within a business, this guide is definitely for you.
You have probably come across the term legacy software or legacy systems but don’t know exactly what they are. Or, even more likely, you are using legacy software or systems without even knowing it.
But there are risks and challenges associated with this (somewhat unavoidable) business practice.
Below I will explain everything you need to know about getting the most out of legacy software and systems. As I said, it is highly probable that any business older than a year is using at least some tools that can be labeled as legacy tier.
First things first, though: let’s start by exploring what legacy software and legacy systems are. Typically, any medium to large company nowadays has at least a few legacy elements in its IT environment.
Next, we’ll move along to tips that can help you identify whether your legacy software or legacy system is one of the risky ones or not.
What Is Legacy Software? Definition(s)
To put it in as few words as possible, legacy software is any piece of software that can’t receive continued patching or support from its developer, or can’t meet the compliance standards in use.
The examples of enterprise-level legacy software can be quite different.
Here are just a few cases which can be labeled as legacy software, to get a better idea of what it can encompass:
- A major platform with no functional replacement (yet), still supported and compatible with other IT assets, but which does not receive security updates anymore;
- An older piece of software that is still in use and receives support, but its creators are announcing the transition of support to the newer version of the product (such as the case of Python 2 vs Python 3);
- A piece of software or platform which still gets updates but only for features (not security patches);
- A piece of software or platform which still gets security updates and support but is no longer compliant with recent standards;
- A piece of software or platform which gets updates and support but is not compatible with the newer systems and drivers in use (thus stalling the company’s adoption of those).
In some cases, the category of legacy software can include consumer-oriented software products issued by companies that no longer exist.
But, in spite of the discontinued support – and the discontinued official listing of that software (there’s nowhere to officially buy or download it from) – some users continue to procure it out of nostalgia.
Such is the case of Winamp media player, for example. There are entire Reddit forums dedicated to Winamp nostalgia, along with users still sharing custom made Winamp ‘skins’. There is also newly issued software that can emulate Winamp in-browser. So, the power of nostalgia for legacy software can still make the world go ‘round.
What Is a Legacy System in a Computer Industry Context?
A legacy system is a platform, hub, or operating system (something that facilitates digital operations, one level above individual software) which is outdated.
“Outdated” here can refer to a lack of available support, incompatibility with other elements of the IT environment, non-compliance with current standards, or missing updates.
Myth Busting Legacy Software and Legacy Systems
Here are the most common misconceptions about legacy software and legacy systems.
#1. Legacy software is useless
While legacy software and legacy systems still pose risks (which I’ll dive into below), it doesn’t mean they outlived their usefulness completely.
In many cases, a piece of legacy software or a legacy system is still in use precisely because it is the most comfortable option. Either there is no exact functional replacement yet, or the transition is still too difficult to weather.
Regardless of the exact reason, companies continue to use legacy software precisely because it’s still useful.
Ideally, yes, people should try to move on from legacy software as soon as it’s feasible, but things are always a bit more complicated in practice.
#2. Legacy software is free
The opposite can be true: precisely because legacy software was quite an investment, companies may be reluctant to replace it yet. An investment only makes sense if the cost is recovered over a pre-determined use period.
In many cases, even subscription-based software and systems (which the company is still actively paying for) are in fact legacy ones. The recurrent fee ensures continuous support and perhaps even some feature updates, but the security patches are unsatisfactory, or the software is not compliant.
#3. Legacy software is unsupported
As mentioned above and through the examples so far, there are cases when legacy software is still supported by an actual team and you can still get an account manager to troubleshoot stuff for you.
Regardless, no matter how active and involved the support team behind the software is, if it doesn’t get security updates or is non-compliant, it still counts as legacy software.
#4. Legacy software is dangerous
I know that most sources you will consult about legacy software will seem to push you to replace it ASAP, on account of it being dangerous.
But the truth is that legacy software is not always dangerous. It depends a great deal on the specifics of the case.
I’ll get into more detail on how to mitigate the risks of legacy software below.
#5. Legacy software and legacy systems should be immediately replaced
Just as legacy software is not always dangerous, so it does not always need to be replaced. It depends on the case and its specifics. Not only of the software but also of the company and its way of operating.
Before you decide whether a particular piece of legacy software needs to get replaced, you should do a case analysis. Applying software updates is a major hassle anyway for company IT admins unless they are already using smart patch automation software. No need to make that job even harder.
The Risks of Running Legacy Software and Systems in Your Business
Still, even if legacy software is not always dangerous, there are cases when it can definitely pose some risks.
Here are a few examples of such risks deriving from legacy software or legacy systems:
- The risk of falling prey to a data breach or cyber-attack more easily;
- The risk of slowing down the activity due to the performance issues or the need to manually fix issues regularly;
- The risk of becoming non-compliant;
How to Mitigate the Risk of Legacy Software: 3 Ways
In order to avoid these three main types of risk deriving from legacy software or legacy systems, you just need to be proactive about it. Don’t wait until you are already facing a productivity crisis or, worse yet, a security breach.
The main ways to go about it are these:
#1. Consult with security experts about the legacy software elements in your IT environment
As mentioned above, it’s not always dangerous to stick to your legacy software or legacy system already in use. Sometimes the switch can involve costs that are not justified by the amount of risk you need to absorb. So, in some cases, it makes perfect sense to stick with the legacy software (and that’s exactly what some companies do).
Check with security experts to see what software absolutely poses a risk and what legacy software you can afford to continue using. Also, implement an automatic software patching solution in order to close potential security gaps and to make the life of your sys-admins easier.
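As a rough illustration of such a case analysis, the triage step might be sketched like this. The inventory entries, end-of-support dates, and function names below are hypothetical examples, not real data; in practice the inventory would come from your asset-management system:

```python
from datetime import date

# Hypothetical inventory: (name, version, end-of-support date).
# None means the vendor no longer exists, so there is no support at all.
INVENTORY = [
    ("python", "2.7", date(2020, 1, 1)),
    ("python", "3.8", date(2024, 10, 7)),
    ("winamp", "5.66", None),
]

def triage(inventory, today=None):
    """Split an inventory into legacy vs still-supported entries.

    An entry is legacy if its end-of-support date has passed or
    if it has no support date at all."""
    today = today or date.today()
    legacy, supported = [], []
    for name, version, eol in inventory:
        if eol is None or eol <= today:
            legacy.append((name, version))
        else:
            supported.append((name, version))
    return legacy, supported
```

The output of such a triage is what you would then take to the security experts: each legacy entry becomes a case analysis rather than an automatic replacement.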
#2. Do a case-by-case comparison between your legacy software and alternatives
Sometimes, the bad news is that legacy software that really needs to be replaced does not have a viable replacement yet.
But when it does, look into it just as you would look into any other business decision, with pros and cons. When you consider the (explicit) costs of updating, also consider the (implicit) costs of not updating. Is a potential breach easier to come back from than absorbing the costs of a change?
#3. Analyze the impediments to a transition from legacy software to non-legacy software
Also, in each case analysis, consider all the other variables and effort required for a transition. Compatibility and cost concerns are valid, but internal effort and time should not weigh so much in the final decision. Just because it will be a bit of a hassle doesn’t mean you should postpone indefinitely. That’s what gets companies on the breach list in most cases.
This concludes today’s guide on legacy software and legacy systems. If you have any questions or stories to share, feel free to comment below or contact me. I’m here to help if I can.
The post What Is Legacy Software and a Legacy System in Business + The Risks appeared first on Heimdal Security Blog.
Researchers discovered a new cryptojacking worm called “Graboid” that has spread to more than 2,000 unsecured Docker hosts. In its research, Palo Alto Networks’ Unit 42 team noted that it’s the first time it’s discovered a cryptojacking worm specifically using containers in the Docker Engine for distribution. (It’s not the first time that cryptojacking malware […]… Read More
The post Graboid Cryptojacking Worm Has Struck Over 2K Unsecured Docker Hosts appeared first on The State of Security.
This is interesting research:
In a BGP hijack, a malicious actor convinces nearby networks that the best path to reach a specific IP address is through their network. That's unfortunately not very hard to do, since BGP itself doesn't have any security procedures for validating that a message is actually coming from the place it says it's coming from.
To better pinpoint serial attacks, the group first pulled data from several years' worth of network operator mailing lists, as well as historical BGP data taken every five minutes from the global routing table. From that, they observed particular qualities of malicious actors and then trained a machine-learning model to automatically identify such behaviors.
The system flagged networks that had several key characteristics, particularly with respect to the nature of the specific blocks of IP addresses they use:
- Volatile changes in activity: Hijackers' address blocks seem to disappear much faster than those of legitimate networks. The average duration of a flagged network's prefix was under 50 days, compared to almost two years for legitimate networks.
- Multiple address blocks: Serial hijackers tend to advertise many more blocks of IP addresses, also known as "network prefixes."
- IP addresses in multiple countries: Most networks don’t have foreign IP addresses. In contrast, the address blocks that serial hijackers advertised were much more likely to be registered in different countries and continents.
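As a toy illustration of the first characteristic, assuming we already have the observed advertisement duration of each prefix per origin network, flagging short-lived announcers might look like the sketch below. The 50-day threshold echoes the finding above; the function and data names are made up, and the real system combines many more features in a trained model:

```python
def flag_suspicious(prefix_days, min_days=50):
    """Flag origin networks whose advertised prefixes are short-lived.

    prefix_days maps an origin AS to a list of observed advertisement
    durations (in days), one per prefix it announced. Networks whose
    average prefix lifetime falls below min_days are flagged."""
    flagged = []
    for asn, durations in prefix_days.items():
        avg = sum(durations) / len(durations)
        if avg < min_days:
            flagged.append(asn)
    return flagged
```

A legitimate network whose prefixes stay up for years would never trip this check, while a serial hijacker cycling through short-lived announcements would.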
Note that this is much more likely to detect criminal attacks than nation-state activities. But it's still good work.
Eighteen technology companies have formed the Open Cybersecurity Alliance to foster the development of open source tools to improve interoperability and data sharing between cybersecurity applications. But some observers say getting all players to agree on a common platform will be challenging.
Rogue Mobile App Fraud Soars 191% in 2019
Global fraud attacks soared by 63% from the second half of 2018 to the first six months of this year, with fake mobile applications a growing source of malicious activity, according to RSA Security.
The firm’s Quarterly Fraud Report for Q2 2019 is a useful snapshot of current trends based on detections by the vendor.
Phishing, including vishing and smishing, continues to be the biggest source of fraud — representing over a third (37%) of attacks in Q2, with attacks climbing 6% from 2H 2018 to 1H 2019.
Canada, Spain and India were the top three countries targeted by phishing, accounting for 61% of total attack volume.
However, it is attacks via rogue mobile applications that present the fastest-growing threat, soaring 191% over the same period. These attacks, which involve the spoofing of brands to trick users, now account for 29% of the total.
Elsewhere, there were also significant increases in detections of financial malware (up 80%) and social media attacks (37%).
In the e-commerce space, RSA noted that 57% of fraud transaction value in Q2 2019 came from a new device but trusted account. In online banking 88% of payment fraud attempts originated from the same combination: trusted account and new device. That is a significant increase from Q1 figures of just 20%.
This highlights the continuing popularity of account takeovers as a highly successful threat vector, RSA said.
Daniel Cohen, director of the Fraud and Risk Intelligence Unit at RSA Security, argued that digital transformation is introducing new risks that organizations must manage.
“From one-click payment buttons to mobile apps from our favorite retailers, spending our money has never been easier. However, while the growth of digital might be good for our busy schedules, it has also opened up numerous new avenues for fraudsters,” he added.
“The fact that fraud via fake mobile applications tripled in the first half of 2019 is testament to how perpetrators will constantly seek out weak points by exploiting consumers’ growing trust in mobile apps.”
Banks need to layer up protection, while consumers must play their part by understanding the tell-tale signs of phishing and taking time out to verify application publishers before downloading, Cohen advised.
As our children venture into toddlerhood, they start to test us a bit. They tug at the tethers we create for them to see just how far they can push us. As they grow and learn, they begin to carve out a vision of the world for themselves—with your guidance, of course, so that they can learn how to live a safe and happy life both now and as they get older.
This is true in the digital world as well.
Typically, at around age two, our kids get their first taste of playing on mommy’s or daddy’s smartphone or tablet and discover an awesome new world of devices and online activities. It’s slow at first—a couple minutes here and there—but, over time, they spend more and more of their day online. You have an opportunity when your child has their first experience with a connected device to set the tone for what’s expected. This is a deliberate teaching moment, the first of many, where you explain how to go safely online and continue to reinforce these behaviors as they grow.
This chapter of “Is Your Digital Front Door Unlocked?” lays out several topics that, if handled in a healthy and constructive way, will make your child’s digital journey much more enjoyable. Topics such as the importance of rules, online etiquette, and the notion of “the talk” as it relates to going online safely are discussed in detail, in the hope of providing a framework that will grow as your child grows.
It also looks at challenges that every parent should be aware of, such as cyberbullying and the impact of screen time on your child. It also introduces the risks associated with online gaming for those just getting started.
I can’t express strongly enough the importance of engagement with your child during the formative years. This chapter will give you plenty of ideas of how to go about it in a way that both you and your child will enjoy.
Gary Davis’ book, Is Your Digital Front Door Unlocked?, is available September 5, 2019 and can be ordered at amazon.com.
The post Chapter Preview: Ages 2 to 10 – The Formative Years appeared first on McAfee Blogs.
Cisco has released another batch of security updates, the most critical of which fixes a vulnerability that could allow unauthenticated, remote attackers to gain access to vulnerable Cisco Aironet wireless access points. Cisco Aironet APs are enterprise-grade access points used for branch offices, campuses, organizations of all sizes, enterprise and carrier-operator Wi-Fi deployments, and so on. Cisco Aironet vulnerabilities During the resolution of a Cisco TAC support case, the company’s technicians discovered a number of … More
The post Cisco fixes serious flaws in enterprise-grade Catalyst and Aironet access points appeared first on Help Net Security.
Security experts at Palo Alto Networks discovered a cryptojacking worm dubbed Graboid that spreads using Docker containers.
Experts discovered that, to target new systems, the Graboid worm periodically queries the C&C for vulnerable hosts; in this way the malicious code is told which target to infect next.
“Unit 42 researchers identified a new cryptojacking worm that has spread to more than 2,000 unsecured Docker hosts.” reads the analysis published by Palo Alto Networks.
Graboid is the first-ever cryptojacking worm found in images on Docker Hub; the analysis conducted by the experts shows that, on average, each miner is active 63% of the time.
Palo Alto Networks found over 2,000 Docker engines unsecured online, meaning threat actors could take full control of them and abuse their resources for malicious purposes.
The hackers first compromise an unsecured Docker daemon, then run a malicious container from Docker Hub, which fetches scripts and a list of vulnerable hosts from the C&C and spreads to the hosts in that list.
‘Graboid’ implements both worm-spreading and cryptojacking capabilities.
“Essentially, the miner on every infected host is randomly controlled by all other infected hosts. The motivation for this randomized design is unclear. It can be a bad design, an evasion technique (not very effective), a self-sustaining system or some other purposes.” continues the analysis.
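The randomized control described in the analysis can be sketched roughly as follows. This is a harmless simulation of the selection logic only, with hypothetical names, not the worm’s actual code:

```python
import random

def graboid_step(vulnerable_hosts, rng=None):
    """Simulate one cycle of Graboid's randomized control loop, as
    described by Unit 42: each cycle the worm picks random hosts
    from the C2-supplied list to (a) spread the worm container to,
    (b) stop the miner on, and (c) start the miner on. The picks
    are independent, so any infected host may toggle any other."""
    rng = rng or random.Random()
    spread_to = rng.choice(vulnerable_hosts)  # deploy worm container
    stop_on = rng.choice(vulnerable_hosts)    # stop the miner here
    start_on = rng.choice(vulnerable_hosts)   # start the miner here
    return spread_to, stop_on, start_on
```

Because every infected host runs this same loop against the same shared list, no single node controls any miner, which matches the “randomly controlled by all other infected hosts” behavior the researchers observed.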
Experts reported that the malicious Docker image had been downloaded thousands of times from Docker Hub.
“While this cryptojacking worm doesn’t involve sophisticated tactics, techniques, or procedures, the worm can periodically pull new scripts from the C2s, so it can easily repurpose itself to ransomware or any malware to fully compromise the hosts down the line and shouldn’t be ignored.” concludes the analysis. “If a more potent worm is ever created to take a similar infiltration approach, it could cause much greater damage, so it’s imperative for organizations to safeguard their Docker hosts.”
The post Graboid the first-ever Cryptojacking worm that targets Docker Hub appeared first on Security Affairs.
World’s Largest Child Exploitation Site Shut After Bitcoin Analysis
Global investigators have traced Bitcoin payments to locate and shutdown the dark web’s largest child exploitation website, arrest hundreds of users and rescue dozens of abused children, according to unsealed court documents.
On March 5, 2018, agents from Homeland Security Investigations (HSI), the Internal Revenue Service’s Criminal Investigation division (IRS-CI), the UK’s National Crime Agency (NCA) and the Korean National Police arrested Jong Woo Son, 23, for operating the Welcome to Video site, according to the indictment.
The raid led to the seizure of around 8TB of child exploitation videos, and the arrest of over 300 alleged users of the site, believed to be the largest of its kind in terms of material stored. They hailed from the US, UK, South Korea, Germany, Saudi Arabia, the United Arab Emirates, the Czech Republic, Canada, Ireland, Spain, Brazil and Australia, and have all been charged.
Some 23 children were also rescued from abuse by users of the site in the US, UK and Spain.
The vital intelligence behind the successful operation was generated by technology which enabled investigators to trace Bitcoin payments made by users of the site — each of whom had a unique cryptocurrency address assigned on registering an account, in order to buy videos.
The site is said to have had capacity for at least one million such addresses.
Investigators used a product known as Chainalysis Reactor to analyze the flow of digital funds to and from the site, via Bitcoin exchanges.
“Because exchanges typically perform Know Your Customer (KYC) processes, many were able to provide copies of identification, addresses, and other relevant transactions associated with those accounts,” explained Chainalysis.
“While in many cases the information supplied by the exchanges was enough to identify WTV users, in other cases IRS-CI was able to combine the account information with open source intelligence and standard investigative techniques to identify users.”
The firm was also able to break down regionally-specific information for investigators to enable global arrests, it said.
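Conceptually, the tracing works by following edges in a transaction graph until funds reach an exchange covered by KYC rules, at which point investigators can subpoena account records. A toy sketch of that idea follows; the graph, addresses, and function are hypothetical, and real tools such as Chainalysis Reactor do far more (clustering, heuristics, exchange attribution):

```python
from collections import deque

def trace_to_exchange(tx_graph, start, exchanges):
    """Breadth-first walk over a toy transaction graph.

    tx_graph maps an address to the addresses it sent funds to;
    returns the first known exchange address reachable from start,
    or None if the trail never reaches a KYC-covered service."""
    seen, queue = {start}, deque([start])
    while queue:
        addr = queue.popleft()
        if addr in exchanges:
            return addr
        for nxt in tx_graph.get(addr, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None
```

Once the walk lands on an exchange address, the KYC records held there (identification, home address, linked transactions) are what turn a pseudonymous Bitcoin address into a named suspect.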
Son is already serving time in South Korea where he was convicted of charges relating to the dark web site.
ESET Smart Home Research Team uncovers Echo, Kindle versions vulnerable to 2017 Wi-Fi vulnerabilities
The post What was wrong with Alexa? How Amazon Echo and Kindle got KRACKed appeared first on WeLiveSecurity
ESET researchers describe recent activity of the infamous espionage group, the Dukes, including three new malware families
The post Operation Ghost: The Dukes aren’t back – they never left appeared first on WeLiveSecurity
US Ordered Secret Cyber-Strike on Iran: Report
The US ordered a secret cyber-attack on Iranian IT systems in response to the alleged Tehran-backed September 14 attacks on Saudi Arabian oil facilities, according to a new report.
Two anonymous US officials told Reuters that the attacks were targeted at Iranian hardware in an operation focused on limiting the Islamic Republic’s ability to spread propaganda.
There are few other publicly available details about the raid, although it appears to have been a much smaller-scale and less sophisticated effort than the infamous Stuxnet operation which disrupted Iran’s nuclear program almost a decade ago.
It would make sense though, given President Trump’s reluctance to get embroiled in a full-scale conflict with the country. He is reported to have called off air strikes on Iranian facilities following the June downing of a US Navy drone, for fear of escalating the stand-off.
Dave Palmer, director of technology at Darktrace, argued that nation states are increasingly turning to cyber-strikes to launch attacks on physical hardware, making it more important than ever that such infrastructure is well protected.
“We have entered a new age of cyber warfare, where sophisticated groups are using advanced software that is capable of going under the radar of traditional security controls, plants itself in the heart of critical systems and uses that knowledge to its advantage,” he said.
“Relying on human security teams will not be enough to resist attackers that are backed by nation states and therefore highly sophisticated. The only way to combat these attacks will be with AI that can automatically respond to attacks before any damage is done.”
A Tripwire study from earlier this month revealed that 93% of security professionals in transportation, manufacturing and utilities fear cyber-attacks shutting down operations, with two-thirds (66%) claiming that it could have catastrophic consequences, such as an explosion.
M6, one of France’s biggest TV channels, hit by ransomware
Unlike The Weather Channel earlier this year, M6 remained on the air.
The M6 Group, France’s largest private multimedia group, was the victim of a ransomware attack over the weekend; fortunately, none of the company’s TV and radio channels had their broadcasts interrupted.
According to the French newspaper L’Express, the ransomware attack only impacted landlines and e-mail.
“The company’s phone lines and e-mail are unusable, so employees have to use their mobile phones and text messages to communicate,” an internal source told the newspaper. “All the office and management tools are disrupted.”
The company revealed that the incident took place on Saturday.
“The M6 Group was the target of a malicious computer attack on Saturday morning, and the quick and efficient intervention of our cyber security experts has helped to ensure continuity of service,” the company stated.
In April, another broadcaster suffered a similar incident: a cyber attack hit The Weather Channel and forced it off the air for at least 90 minutes.
In April 2015, TV5Monde was hit by a severe cyber attack that disrupted broadcasting across its channels. The attackers also hijacked TV5Monde’s website and the French broadcaster’s social media accounts.
Yves Bigot, then director-general of TV5Monde, told the BBC that the cyber-attack came close to destroying the French TV network, and the subsequent investigation pointed to the Russia-linked APT28 group.
The post M6 Group, largest France private multimedia group, hit by ransomware attack appeared first on Security Affairs.
When organisations are seeking ISO 27001 compliance, they rely on auditors to give them good advice. Most of the time they’ll do just that – it’s what they’re paid to do. But as with any profession, some auditors are better than others.
How can you tell if your auditor isn’t to be trusted? Keep an eye out for these seven mistakes:
1. They impose their opinions without facts
Why is this bad? ISO 27001 has clear rules on how to implement its requirements. Although there’s room to interpret which course of action is best for you, any decision should be supported by an instruction in the Standard.
Unfortunately, some auditors have preconceived ideas of best strategies and will recommend certain practices regardless of your organisation’s situation. You should only ever follow advice if the auditor can explain how it helps meet a specific compliance requirement.
2. They report findings but don’t provide evidence
Why is this bad? Auditors must always provide proof when highlighting areas of non-compliance. It doesn’t need to be physical evidence; an eye-witness account will do.
The point is the auditor needs something concrete that they can point to, rather than citing a vague violation or general ‘feeling’ of non-compliance.
This helps the organisation understand exactly what the failure is and what it needs to do to fix the issue.
3. They tick off checklists without considering the bigger picture
Why is this bad? Checklists are a great way of quickly assessing whether a list of requirements has been met, but what they offer in convenience, they lack in depth of analysis.
Organisations are liable to see that a requirement has been ticked off and assume that it’s ‘mission accomplished’. However, there may still be room to improve your practices, and it might even be the case that your activities aren’t necessary.
A good auditor will use the checklist as a summary at the beginning or end of their audit, with a more detailed assessment in their report, or they’ll use a non-binary system that doesn’t restrict them to stating that a requirement either has or hasn’t been met.
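The evidence and non-binary scoring ideas in points 2 and 3 can be sketched as a small record type. This is only an illustrative Python sketch: the status names and the example clause reference are assumptions for the sake of the example, not wording from the Standard.

```python
from dataclasses import dataclass, field
from enum import Enum


class Status(Enum):
    """Non-binary compliance ratings, instead of a simple pass/fail tick."""
    CONFORMANT = "conformant"
    OPPORTUNITY_FOR_IMPROVEMENT = "opportunity for improvement"
    MINOR_NONCONFORMITY = "minor nonconformity"
    MAJOR_NONCONFORMITY = "major nonconformity"


@dataclass
class Finding:
    clause: str                      # requirement assessed, e.g. an Annex A control
    status: Status
    evidence: list[str] = field(default_factory=list)

    def __post_init__(self):
        # A non-conformity reported without supporting evidence is rejected,
        # mirroring the rule that auditors must back every finding with proof.
        if self.status is not Status.CONFORMANT and not self.evidence:
            raise ValueError(f"Finding for {self.clause} needs evidence")


# A minor nonconformity backed by a concrete (hypothetical) piece of evidence
f = Finding("A.9.2.3", Status.MINOR_NONCONFORMITY,
            ["Privileged access review log missing entries for Q2"])
print(f.status.value)  # prints "minor nonconformity"
```

Modelling it this way makes both bad practices impossible to record: there is no bare tick-box, and a finding with no evidence cannot exist.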
4. They believe the paperwork and ignore the facts
Why is this bad? Any organisation can create policies that demonstrate their commitment to meeting ISO 27001’s requirements, but it doesn’t mean employees actually follow those instructions.
A bad auditor might be satisfied by documentation and a cursory look at whether it’s been implemented. However, auditors must be more rigorous than that.
They shouldn’t be satisfied with just what the organisation wants them to see; they should be digging deeper to check whether the rules are being followed consistently.
5. They feel obliged to find errors
Why is this bad? Auditors sometimes try to stamp their authority by pointing out areas of non-compliance as soon as possible. This isn’t necessarily a bad thing, but it is if they’re exaggerating the scale of a shortcoming to prove a point.
It shouldn’t take long for a good auditor to find genuine faults, as even the best-prepared organisation will have room for improvements.
Auditors should keep this in mind at the start of their assessment, otherwise they’ll end up with an unfairly long list of faults or an inconsistent interpretation of the requirements.
6. They allow cost-cutting to starve the audit
Why is this bad? This mistake occurs more often in internal audits, with organisations acknowledging the need to assess their practices but unable or unwilling to provide the necessary resources.
An underfunded audit will lead to rushed and incomplete results that have little value, and a good auditor will be able to tell if the scale of the project is too big for what’s been budgeted.
7. They use the audit to generate consultancy work
Why is this bad? After completing their assessment, the auditor knows exactly how your organisation operates and where its non-compliances are, so you might be wondering why they’d be a bad fit to consult you on how to correct those mistakes.
In theory, they are a perfect fit. You already have a working relationship and you’ll save time finding a consultant and bringing them up to speed on your organisation’s needs.
Unfortunately, there’s clearly a conflict of interest in this relationship, as you run the risk of allowing the auditor to manipulate their findings to persuade you to use them as a consultant.
It’s therefore generally best if you have a second pair of eyes as your consultant. Picking someone who works at the same organisation might be a good compromise, as it allows you to build on your working relationship with that business.
Good auditing practices
ISO 19011 describes the principles that all auditors of management systems should act upon: integrity, fair presentation, due professional care, confidentiality, independence and an evidence-based approach.
Used diligently, these principles can eliminate bad practices.
You can find out more about what it takes to audit against ISO 27001 by enrolling in one of these training courses:
ISO 27001 external auditor
Our Certified ISO 27001 ISMS Lead Auditor Training Course equips you with the skills to conduct second-party (supplier) and third-party (external and certification) ISMS (information security management system) audits.
Packed with hands-on practical exercises, this five-day course helps you gain the expertise needed to competently manage an ISMS audit programme.
ISO 27001 internal auditor
If you’re looking to audit your own organisation, you’d be better suited to our Certified ISO 27001 ISMS Internal Auditor Training Course.
Designed by IT Governance director Steve Watkins, a technical assessor for UKAS (the United Kingdom Accreditation Service), this two-day course contains an introduction to ISO 27001 and how auditing fits into the compliance process, before explaining how to plan for and execute an internal audit.
A version of this blog was originally published on 18 February 2013.